id
stringlengths
20
20
content
stringlengths
211
2.4M
meta
dict
BkiUdPk5qU2Ap3mnjO0P
\section{Introduction} \label{sec:introduction} Consider the booking or renting of a particular place on a reservation site. Specifically, we consider booking a round of golf through a golf booking service. Golf courses can be booked by users with a variety of options and prices: party size, a caddie to carry golf clubs, lunch, a guarantee of two-person \emph{pair} parties, a competition option for several parties, and a certain start time, among many other options. We would like to recommend to users not only courses they may like, but also priced packages of options that they may like on top of recommended parent courses. Such options as described above are wide ranging. Extensive options are provided not only for golf courses on Rakuten GORA, but hotels as well: rooms on Rakuten Travel ~can be booked with many options ranging from breakfast/lunch/dinner to late checkout, and an outdoor bath. Additionally, coupons can be considered as short-lived dynamic packages that target regular E-commerce products. Car rental services also use such packages. In the present paper, we refer to these packages as \textit{short-lived dynamic booking packages}. Recommendations exist in the form of advertisement for this type of item in golf, but a preliminary survey\footnote{Survey conducted in 2013 on the following websites: \ifanonymous \emph{hidden for anonymity.} \else Alba (Japan), GDO (Japan), Golf-Jalan (Japan), GolfDigest (US), TeeOffTimes (UK).\fi} tells us that non-trivial recommendation systems seem not to exist. Considering the scale of the target industry given its user mass\footnote{On the order of 3 million users on Rakuten GORA ~alone, approximately \ifanonymous 2.x\% of the population of country X.\else 2.5\% of the population of Japan.\fi} as well as its average order value\footnote{The AOV is approximately 90 USD per golf reservation\ifanonymous(USD used as anonymous currency)\fi.}, the recommendation of packages is essential for business success. However, to the best of our knowledge, such recommendations have not been studied in traditional research. There are \ifschedule three \else two \fi main challenges in recommending such packages. The first relates to their \emph{short-lived} aspect; on a B2B2C site, merchants (e.g., golf course owners or hotel owners) input the packages for their course and set different prices according to options, season, trends, and target customers. We found that most such packages expire in a month after the start of their active period, including very short time-limited special offers (Figure~\ref{fig:package_lifespans}). Moreover, the price trends is what makes the packages \emph{dynamic} (Figure~\ref{fig:price_trends}). This puts the package recommendation system under a regime of a permanent cold start. Ratings, the objective variable most favored by classical collaborative filtering approaches for atomic items, are not available. Co-counting using purchasing/browsing history is also very limited, as customers book an average of 4.5 courses per year, i.e., one package every 2.7 months. This means that they do not book packages fast enough on average for traditional models to be learned and used in the short term. \begin{figure}[t] \centerline{\includegraphics[width=0.5\textwidth]{plan_lifespans.eps}} \caption{\label{fig:package_lifespans}Histogram of package lifespans in the period of June 2012 through May 2013.} \end{figure} \begin{figure}[t] \centerline{\includegraphics[width=0.5\textwidth]{price_trends.eps}} \caption{\label{fig:price_trends}Weekly price trends of packages.} \end{figure} In recommendation, the straightforward approach under a cold start regime or working on long-tailed items is to turn to content-based methods of information retrieval. This gives rise to the second challenge: uninformative data. Package contents comprise flags and categorical variables for various options, but analysis of the items alone based on their content results in poor clusters or latent packages, because options have different importance to the package value from the point of view of the user. Thus, direct application of clustering using similarities such as the Jaccard index, where every attribute is weighed the same, performs poorly. In addressing the challenges mentioned, we leverage reservation histories enriched with package and course data to assess user behavior, and conduct an analysis of package pricing. This allows us to construct a similarity score that performs well. Our approach is threefold, in that we: 1. extract user behavioral characteristics, 2. conduct collaborative filtering on parent items, and 3. perform content-based information retrieval using user preferences and the package price. \section{Recommending Golf Packages} \subsection{User-weighed package similarity} \label{sec:preliminary_analysis} \label{sec:clustering} \begin{figure}[t] \centerline{\includegraphics[width=0.5\textwidth]{clusters_color.eps}} \caption{\label{fig:clusters}Cluster centroids of users having booked at least two courses. $Pairs$ and $friends$ each comprise 35\% of users, while the others each comprise 10\%.} \end{figure} Plan data themselves are uninformative for similarity (as shown in Section ~\ref{sec:experiments} by Jaccard's poor precision). We analyze packages through users, enriching their reservation history with package and course data. The data were aggregated to build user behavior vectors, which we Z-transformed and clustered using Euclidean k-means. Figure~\ref{fig:clusters} shows vastly different behaviors with regard to package options and price (e.g., spending deviation growing with the spending average), and a need to develop adequate similarity metrics. We define the user-weighed option similarity score for a package $p$ with respect to a user $u$ as \iffalse \begin{equation} \tilde{S}_{opt}(p|p^{(ref)},u) = \sum_{k \in O} w_k P(p_k|p^{(ref)},u) \end{equation} \\ where $w_k$ is a weight, $k$ belongs to the subset $O$ of vector indices of $p$ denoting categorical option attributes (which are binary or dummy-coded). P is defined as:\\ \begin{equation} P(p_k|p^{(ref)},u) = \frac {1}{1+e^{-(\beta_{0,S_u} + \beta_{1,S_u}^T p^{(ref)})}}, \end{equation} \\ \else \[ \tilde{S}_{opt}(p|u) = \sum_{k \in O} P(p_k|u) \] \noindent where $k$ belongs to the subset $O$ of vector indices of $p$ denoting flags or dummy-coded categorical attributes, and P is a logistic factor such as \\ \[ P(p_k|u) = \frac {1}{1+e^{-(\beta_{0,S_u} + \beta_{1,S_u}^T u)}}, \] \fi \noindent where $\beta_{0,S_u}$ and $\beta_{1,S_u}$ are respectively the intercept and coefficient vector for cluster $S_u$ to which $u$ belongs after clustering. This probability corresponds to user $u$ choosing the option $p_k$ for the next booking. $P(P_k|U)$ is learned by leaving the last package $p^{(last)}$ booked by a user and using its $p_{k \in O}^{(last)}$ as dependent variables and the user vector as independent variable for learning. We show example logistic weights for three output probabilities in Table ~\ref{tab:logistic_models}, each predicting the future occurrence of an option\footnote{We used Weka version 5.3.001 for computing logistic regression (http://www.cs.waikato.ac.nz/ml/weka/).}. \begin{table}[tbp] \caption{\label{tab:logistic_models}Weights of logistic models predicting occurrence of options in the next booking. Only weights that are over $10^{-1}$ for a response (in bold) are shown.} \begin{center} \begin{tabular}{lrrr} \cline{2-4} & \multicolumn{3}{c}{Options}\\ \hline User attribute & \multicolumn{1}{l}{Caddie} & \multicolumn{1}{l}{Holiday} & \multicolumn{1}{l}{Lunch} \\ \hline Lunch rate & \textbf{-0.315} & \textbf{-0.1829} & \textbf{1.4113} \\ Competition rate & 0.0351 & -0.0166 & \textbf{0.1656} \\ Holiday rate & \textbf{-0.351} & \textbf{1.6612} & \textbf{-0.1516} \\ Caddie rate & \textbf{3.0924} & \textbf{-0.6099} & \textbf{-0.1776} \\ Avg. spending & \textbf{0.201} & 0.092 & \textbf{-0.103} \\ Avg. course rating & \textbf{0.3673} & \textbf{-0.3004} & \textbf{0.2138} \\ Std. course rating & \textbf{0.295} & \textbf{-0.3063} & 0.0637 \\ Avg. \# of parties & \textbf{-0.9268} & \textbf{-0.1061} & \textbf{-0.1751} \\ Std. \# of parties & \textbf{0.1424} & 0.0083 & \textbf{0.117} \\ Avg. party size & \textbf{0.2164} & 0.0064 & 0.0667 \\ Intercept & \textbf{-5.6731} & \textbf{-0.6552} & 0.0487 \\ \hline \end{tabular} \end{center} \end{table} \subsection{Reference course and package} \label{sec:reference_selection} The reference package is used mainly to compute the price similarity described in Section~\ref{sec:price_similarity}. It is also necessary to perform the subsequent experiments of this paper for the Jaccard baseline (Section~\ref{sec:experiments}). We first extract the reference course that was played the most in the season closest to the target season, using a simple scoring function. For example, if we would like to recommend packages in June, a course played twice around June scores higher than one played twice in December. This is based on the assumption that a user likes a course if he has booked it several times, and the observation that users have affinities to courses that are seasonal. Once the reference course is selected, we simply select the last package booked as the reference package. \subsection{Filtering of parent items} \label{sec:course_filtering} Collaborative filtering should be leveraged wherever possible, even if impractical for the granularity of packages. In our case, there are \ifanonymous 2,000 courses\footnote{Exact number hidden for anonymity.} \else 1,951 courses \fi in our system, each generating packages that we want to recommend on top of them to users. This gives us a parent course item/item co-occurrence matrix for courses of rank \ifanonymous 2,000\else 1,951\fi. Because we have at least 100,000 active users in the period to populate the co-occurrence matrix, the matrix is very dense, which works well with collaborative filtering. As the course recommender already performs well on Rakuten GORA, we choose to use it in a collaborative filtering step to filter courses of interest to the user. \subsection{Price similarity} \label{sec:price_similarity} To improve the scoring function, we develop a similarity component based on package price, which should be leveraged because our analyses reveal two important patterns: 1. 90\% of users do not deviate by more than 30\% from their average spending (the remaining 10\% belong to the cluster of \emph{refined} users in Figure~\ref{fig:clusters}), and 2. the price itself contains enough information about the package to make it a potent similarity measure. \begin{figure}[t] \centering \includegraphics[width=0.5\textwidth]{price_predictions.eps} \caption{\label{fig:price_regression}True prices vs. predictions for a course.} \end{figure} \begin{table}[tbp] \caption{\label{tab:regression_features}Sets of features for the linear model of price. The final set is $\{m\} \times \{d\} \times A \times P$.} \begin{center} \begin{tabular}{ll} \hline \multicolumn{1}{l}{Set} & \multicolumn{1}{l}{Features} \\ \hline Temporal & Month of year (m), day of week (d) \\ Attributes (A) & Lunch, caddie, competition, \\ & pair party, min. party size, \\ & min. nb. of parties, nb. of laps \\ Promotional (P) & Promotion type, shortness of package \\ \hline \end{tabular} \end{center} \end{table} To demonstrate point (2), we run a regression on a course in the data that has generated many packages. Using the Cartesian feature set of Table~\ref{tab:regression_features}, we build a linear model that gives prices of packages that are fairly close to the truth (see Figure~\ref{fig:price_regression}). Note that the plot in Figure~\ref{fig:price_regression} is heteroscedastic, which shows that pricing becomes loose at the high end, as consistently does spending (Section~\ref{sec:clustering}). We define price similarity score for a package $p$ with respect to a user $u$ and a reference plan $p^{(ref)}$ selected from his/her history as \[ \tilde{S}_{price}(p|p^{(ref)},u) = \frac{1}{1 + \frac{r_{t_p,t_{p^{(ref)}}}}{\omega + \sigma_{price}^{(u)}} \| price_p - price_{p^{(ref)}} \| }, \] where $r_{t_p,t_{p^{(ref)}}}$ is the ratio of seasonal averages between $p$ and $p^{(ref)}$ compensating for seasonal trends, $\omega$ is a currency scaling factor, and $\sigma_{price}^{(u)}$ is the user's spending deviation. \subsection{Final score} \label{sec:final_score} The final score is defined as \[\tilde{S}(p|p^{(ref)},u) = ( w_p \tilde{S}_{price} + w_o \tilde{S}_{opt} + w_c \tilde{S}_c ) [ p|p^{(ref)},u ], \] where $\tilde{S}_c$ is the parent course score after filtering (Section~\ref{sec:course_filtering}), and $w_p$, $w_o$ and $w_c$ are weights whose optimal values can be found through hill-climbing with respect to EMP@n. \section{Experiments} This section details the offline and online evaluation. For offline evaluation, we compared the precision of our approach with that of the basic similarity method, and tested three different options. We then launched an e-mail campaign for online evaluation. None of our data contains personal information such as names or addresses. \subsection{Offline evaluation} \label{sec:experiments} We tested our proposed methods on golf booking data collected from June 2012 to May 2013. The number of unique users in this period is 521,442 and the total number of bookings is 2,499,678. We used the booking history from June 2012 to May 2013 to generate the package recommendations. We then checked what packages users actually booked from 1 June 2013 to 15 June 2013. We call this evaluation index the \textit{expected minimum precision} (EMP), because the number of booked packages would increase if users interact with our recommendation results. This setting has been widely used in the evaluation of recommender systems; e.g.,~\cite{zhu2014bundle}. The EMP is defined as \[ P_n = \sum_{u \in U} \frac{| recommendations_{u,n} \cap truth_u |}{| truth_u |}. \] We test four settings in this experiment: 1. the Jaccard score computed as \[ S_{J}(p,p^{(ref)}) = \frac{|p \cap p^{(ref)}|}{|p \cup p^{(ref)}|} \] in place of $S_{price}$ and $S_{opt}$, where the $p$ values are here used to denote the attribute sets, 2. the user-weighed option similarity $S_{opt}$ without $S_{price}$, 3. the final score incorporating $S_{opt}$ and $S_{price}$ without $r_{t_p,t_{p^{(ref)}}}$ for price adjustment, and 4. the final score incorporating $S_{opt}$ and $S_{price}$ with $r_{t_p,t_{p^{(ref)}}}$ for price adjustment. \\ Each of these settings incorporate the same reference course and package selection step (Section~\ref{sec:reference_selection}) and course filtering step (Section~\ref{sec:course_filtering}). \begin{figure}[t] \centering \includegraphics[width=0.5\textwidth]{EMP_color.eps} \caption{\label{fig:EMP}EMP@n curves for each tested method.} \end{figure} Figure~\ref{fig:EMP} shows that the scores incorporating price similarity perform best, independently of price trend adjustment. User-weighed similarity performs reasonably well but not nearly as well as when price is incorporated. Finally, the Jaccard baseline shows very poor performance. For a top-five returned list, the EMP of the proposed method is 25\% more efficient than that of user-weighed similarity, and 30\% more than that of the Jaccard baseline. \subsection{Online Evaluation} We performed online evaluation by conducting a personalized e-mail campaign. We sent e-mails that contained six recommended packages on one day in December 2014 and evaluated the click-to-open-rate (CTOR) and conversion-to-open-rate (CVR). Here, the CVR refers to the event that a customer clicked on the recommended package and made a reservation. We compared the performance to that of previous and following e-mail campaigns. E-mails in these two campaigns contained approximately 100 packages selected by specialists based on their contents and target demographics. Our approach achieved the highest CTOR and CVR with the maximum improvement in the CTOR was 200\%. \section{Related Work} \label{sec:related_work} We briefly describe previous research on dynamic items and a cold start. Schein et al.~\cite{Schein:2002:MMC:564376.564421} raised the cold-start problem in recommendation. To overcome the lack of user preference information that is essential for collaborative filtering, they proposed a probabilistic model that combines content and collaborative filtering. Chu and Park~\cite{Chu:2009:PRD:1526709.1526802} combined user and item profiles in a dynamic bilinear model for time-aware recommendation for the Today module on the Yahoo! front page. Matrix factorization techniques that solve a cold-start problem were also proposed in several works~\cite{Koenigstein:2011:YMR:2043932.2043964,Saveski:2014:ICR:2645710.2645751}. The crucial difference between short-lived dynamic items and the items in the research body on permanent cold-start regimes is their short life-span and non-retrievable aspects. To be more specific, \textit{short-lived} items will expire within a month after the start of their active period, as is shown by our observations (Figure~\ref{fig:package_lifespans}). Such items cannot be retrieved (i.e., booked, searched for, browsed, or recommended) by/to users once they expire. They also have generally poor statistical value in a user/item matrix because of their capped counts, especially when this matrix does not embody a notion of time and relevance to the present. On the other hand, Zhu et al.~\cite{zhu2014bundle} proposed the bundle recommendation problem in e-commerce. According to the observation that users usually buy more than one item on e-commerce sites and that displaying related items together improves conversion, they proposed a recommender system that maximizes the reward function (conversion rate and revenue/profit of the bundle). This work is different from ours in that it focused on creating bundles, whereas our work focuses on recommending packages that are created by merchants using different values of limited attributes such as lunch, and competition. \section{Conclusion and Future Works} \label{sec:conclusion} \label{sec:future_work} In this work, we identified a pervasive subset of items that ought to be the subject of a recommendation research: short-lived dynamic booking packages. We showed that the problem is unique in terms of the short lifespan of items and the uninformative nature of the data, which calls for an original recommendation approach. We performed an experiment over a subset of users that resulted in appreciable improvements in EMP when leveraging user analysis and price analysis of the package to define adequate scoring functions. We also performed an actual A/B test for a mail recommendation and found that the click-to-open rate was twice that achieved with human selection of packages. In further work, we would like to refine the metrics and address a third challenge, namely that booking packages are designed to \emph{book} a parent item, which inherently adds a notion of \emph{schedule} to the problem constraints and makes recommendation challenging when the user schedule is unknown. \ifanonymous \else \section*{Acknowledgements} We would like to thank Satoko Marumoto, Takahiro Kuroda, Yusuke Sasamori, Ryo Yoneda, Yoshiro Matsuda, Yu Hirate, and all contributors to this research for their support. \fi \bibliographystyle{acm} {\small
{ "attr-fineweb-edu": 1.535156, "attr-cc_en_topic": 0, "domain": "arxiv" }
BkiUdec5qX_AYwyj8V73
\section{Introduction} Travel and tourism is a trillion dollar industry worldwide~\cite{statista}. To improve travel and tourism experiences, many location-based services are built. \emph{Next POI recommendation} is one of such services that is gaining an increasing interest as more POI check-in data become available. Next POI recommendation aims to suggest a POI to visit next, given a user's POI visit history. Such a service is beneficial to both the users and the tourism industry, since it alleviates the burden of travel planning for the users while also boosts the visibility of POIs for the tourism industry. \begin{figure}[t] \centering \includegraphics[width=0.8\linewidth]{figs/POI_sequence.pdf} \caption{A POI check-in sequence in New York City (dashed line indicates movements across days)} \label{fig:sequence_example} \end{figure} Next POI recommendation is often modeled as a \emph{sequential recommendation} problem~\cite{cheng2013you,feng2015personalized,feng2017poi2vec}, to take advantage of the sequential patterns in POI visits. For example, tourists at New York City often visit Central Park right after The Metropolitan Museum of Art (``the Met'', cf.~Fig.~\ref{fig:sequence_example}). If a user has just checked-in at the Met, Central Park is the next POI to recommend. Recently, \emph{self-attentive networks} (SAN)~\cite{vaswani2017attention}, a highly effective and efficient \emph{sequence-to-sequence} learning model, have been introduced to \emph{general} sequential recommendation problems. The resultant model named \emph{SASRec}~\cite{Kang2018Self} yields state-of-the-art performance in recommending next product or video game to purchase, next movie to watch, etc. Applying SASRec to make next POI recommendations directly, however, may produce sub-optimal results. This is because SASRec is designed for general recommendation scenarios and focuses only on the sequential patterns in the input sequence. It does not consider any spatial or temporal patterns, which are inherent in POI visit sequences and are critical for POI recommendations. In terms of spatial patterns, as illustrates in Fig.~\ref{fig:sequence_example}, POI visits demonstrate a clustering effect~\cite{cheng2013you}. POIs nearby have a much larger probability to be visited consecutively than those far away. This offers an important opportunity to alleviate the data sparsity problem resulted from relying only on historical check-in sequences. In an extreme case, we may recommend Central Park to a user at the Met, even if the two POIs had not appeared in the historical check-in sequences, since both POIs are just next to each other and may be visited together. In terms of temporal patterns, historical check-ins made at different times ago shall have different impact on the next POI visit. For example, in Fig.~\ref{fig:sequence_example}, the solid arrows represent transitions made within the same day, while the dashed arrow represents a transition made across days. Check-ins at Central Park and the Met may have a strong impact on the check-in at Museum of the New York City, as they together form a day trip. In contrast, the check-in at Museum of the New York City may have little impact on the check-in at Times Square, as they are at different days and may well be different trips. SASRec ignores such time differences and just focuses on the transition probabilities between actions (i.e., check-ins). In SASRec, the impact of the check-in at the Met on the check-in at Central Park may be the same as that of the check-in at Museum of the New York City on the check-in at Times Square. Such an impact modeling strategy may be inaccurate for next POI recommendations. To address these limitations, in this paper, we introduce self-attentive networks to next POI recommendation and integrate \underline{s}patial and \underline{t}emporal pattern learning into the model. We name our resultant model the \emph{SANST} model. To integrate spatial pattern learning, we learn a spatial embedding for each POI and use it as the input of the self-attentive networks. Our embedding preserves the spatial proximity between the POIs, such that the self-attentive networks can learn not only the \emph{sequential} patterns between the check-ins but also the \emph{spatial} patterns between the check-ins. To learn our spatial embedding, we first hash the POIs into a grid where nearby cells are encoded by strings with common prefixes (e.g., following a \emph{space-filling curve} such as the \emph{Z-curve}). We then learn character embeddings from the hashed strings corresponding to the POIs using \emph{Bi-LSTM}~\cite{hochreiter1997long}. The Bi-LSTM output is then used as the embedding of the POI. Since POIs at nearby cells are encoded by similar strings, they are expected to obtain similar embeddings in our model. Thus, we preserve the spatial proximity patterns in POI check-ins. To integrate temporal pattern learning, we follow~\cite{shaw2018self} to adapt the attention mechanism. We add a parameter to represent the relative position between two input elements $s^u_i$ and $s^u_j$ (i.e., check-ins) in the input sequence. We define the relative position based on the time difference between $s^u_i$ and $s^u_j$ instead of the number of other check-ins between $s^u_i$ and $s^u_j$ (which was done by~\cite{shaw2018self}). This way, our SANST model represents the temporal patterns in check-ins explicitly and can better learn their impact. To summarize, we make the following contributions: \begin{enumerate} \item We propose a self-attentive network named SANST that incorporates spatial and temporal POI check-in patterns for next POI recommendation. \item To incorporate the spatial patterns of POI check-ins, we propose a novel POI embedding technique that preserves the spatial proximity of the POIs. Our technique hashes POIs into a grid where nearby cells are encoded by strings with common prefixes. The POI embeddings are learned from the hashed strings via character embeddings, such that POIs at nearby cells yield similar embeddings. \item To incorporate the temporal patterns of POI check-ins, we extend the self-attentive network by adding a parameter to represent the relative time between the check-ins. This enables the network to learn the temporal patterns of the check-ins explicitly. \item We study the empirical performance of our SANST model on three real-world datasets. The results show that SANST outperforms state-of-the-art sequential next POI recommendation models and adapted models that combine self-attentive networks with spatial feature learning directly by up to 13.65\% in terms of nDCG@10. \end{enumerate} \section{Related Work} Like many other recommendation problems, POI recommendation has attracted extensive research interests. Earlier studies on this topic use \emph{collaborative filtering} (CF) techniques, including both \emph{user-based CF} (UCF)~\cite{ye2011exploiting} and \emph{item-based CF} (ICF)~\cite{levandoski2012lars}. These techniques make recommendations based on either user or item similarity. \emph{Factorization models}~\cite{gao2013exploring,li2015rank,Lian:2014:GJG:2623330.2623638} are also widely used, where the user-POI matrix is factorized to learn users' latent interests towards the POIs. Another stream of studies use probabilistic models~\cite{cheng2012fused,kurashima2013geo}, which aim to model the mutual impact between spatial features and user interests (e.g., via Gaussian or topic models). Details of these studies can be found in a survey~\cite{yu2015survey} and an experimental paper~\cite{Liu:2017:EEP:3115404.3115407}. \textbf{Next POI recommendation.} In this paper, we are interested in a variant of the POI recommendation problem, i.e., \emph{next POI recommendation} (a.k.a. successive POI recommendation). This variant aims to recommend the very next POI for a user to visit, given the user's past POI check-in history as a sequence. Users' sequential check-in patterns play a significant role in this problem. For example, a tensor-based model named \emph{FPMC-LR}~\cite{cheng2013you} recommends the next POI by considering the successive POI check-in relationships. It extends the \emph{factorized personalized Markov chain} (FPMC) model~\cite{Rendle:2010:FPM:1772690.1772773} by factorizing the transition probability with users' movement constraints. Another model named \emph{PRME}~\cite{feng2015personalized} takes a \emph{metric embedding} based approach to learn sequential patterns and individual preferences. \emph{PRME-G}~\cite{feng2015personalized} further incorporates spatial influence using a weight function based on spatial distance. \emph{POI2Vec}~\cite{feng2017poi2vec} also makes recommendations based on user and POI embeddings. To learn the embeddings, it adopts the \emph{word2vec} model~\cite{DBLP:journals/corr/abs-1301-3781} originated from the natural language processing (NLP) community; users' past POI check-in sequences form the ``word contexts'' for training the word2vec model. The POI check-in time is also an important factor that is considered in next POI recommendation models. For example, \emph{STELLAR}~\cite{zhao2016stellar} uses \emph{ranking-based pairwise tensor factorization} to model the interactions among users, POIs, and time. \emph{ST-RNN}~\cite{liu2016predicting} extends \emph{recurrent neural networks} (RNN) to incorporate both spatial and temporal features by adding distance-specific and time-specific transition matrices into the model state computation. \emph{MTCA}~\cite{li2018next} and \emph{STGN}~\cite{zhao2019go} adopt LSTM based models to capture the spatio-temporal information. \cite{yuan2013time} split a day into time slots (e.g., by hour) to learn the periodic temporal patterns of POI check-ins. \emph{LSTPM}~\cite{sun2020lstpm} map a week into time slots. They propose a context-aware long and short-term preference modeling framework to model users' preferences and a geo-dilated RNN to model the non-consecutive geographical relation between POIs. \textbf{Attention networks for recommender systems.} Due to its high effectiveness and efficiency, the \emph{self-attention} mechanism~\cite{vaswani2017attention} has been applied to various tasks such as machine translation. The task of recommendation makes no exception. \emph{SASRec}~\cite{Kang2018Self} is a sequential recommendation model based on self-attention. It extracts the historical item sequences of each user, and then maps the recommendation problem to a sequence-to-sequence learning problem. \emph{AttRec}~\cite{zhang2018next} uses self-attention to learn from user-item interaction records about their recent interests, which are combined with users' long term interests learned by a metric learning component to make recommendations. These models have shown promising results in general sequential recommendation problems, e.g., to recommend products, video games, or movies. However, they are not designed for POI recommendations and have not considered the spatio-temporal patterns in POI recommendations. In this paper, we build a self-attentive network based on the SASRec model to incorporate spatio-temporal patterns of user check-ins. We will compare with SASRec in our empirical study. We omit AttRec since SASRec has shown a better result. \begin{table}[t] \centering \begin{small} \setlength{\belowcaptionskip}{5pt}% \caption{Frequently Used Symbols} \begin{tabular}{llcc} \toprule Symbol & Description\\ \hline \midrule $U$ & A set of users \\ $L$ & A set of POIs \\ $T$ & A set of check-in time \\ $S^{u}$ & The historical check-in sequence of a user $u$ \\ $s^{u}_i$ & A historical check-in of a user $u$ \\ $\ell$& The maximum check-in sequence length\\ $\mathbf{E}$& The input embedding matrix\\ $r_{l, i}$& A relevant score computed by SASRec\\ \bottomrule \end{tabular} \label{tab:effect_num_headers} \end{small} \end{table} \begin{figure*}[t] \begin{center} \includegraphics[width=.99\textwidth]{figs/model_combined.pdf} \end{center} \footnotesize \hspace{.25cm} (a) Transformer layer (Trm) \hspace{2cm} (b) SASRec model structure \hspace{3.25cm} (c) Grid cell ID string embedding \caption{The architecture of our SANST model} \label{fig:architecture} \end{figure*} \section{Preliminaries} We start with basic concepts and a problem definition in this section. We then present the \emph{SASRec} model~\cite{Kang2018Self}, based on which our proposed model is built. This model will also be used in our experiments as a baseline. We summarize the frequently used symbols in Table~\ref{tab:effect_num_headers}. \subsection{Problem Definition} We consider a set of users $U$ and a set of POIs $L$. Each user $u \in U$ comes with a historical POI check-in sequence $S^u$ = $\langle s_1^u, s_2^u, ..., s_{|S^u|}^u \rangle$, which is sorted in ascending order of time. Here, $|S^u|$ denotes the size of the set $S^u$. Every check-in $s_i^u \in S^u$ is a tuple $\langle s_i^u.l, s_i^u.t\rangle$, where $s_i^u.l \in L$ is the check-in POI and $s_i^u.t$ is the check-in time, respectively. Given a user $u\in U$ and a time $t_q$, our aim is to predict $s_{|S_u|+1}^u.l \in L$, i.e., the next POI that $u$ will visit at $t_q$. In our model, we consider check-in times in the day granularity, to alleviate the data sparsity issue. \subsection{SASRec}\label{sec:sasrec} SASRec is a two-layer \emph{transformer} network~\cite{vaswani2017attention}. It models sequential recommendation as a sequence-to-sequence learning problem that translates an input sequence $\langle s_1, s_2, ..., s_\ell\rangle$ to an output sequence $\langle s_2, s_3, ..., s_{\ell+1}\rangle$. Here, $\ell$ is a hyperparameter controlling the input length. The last element in the output, $s_{\ell+1}$, is the recommendation output. SASRec can be adopted for next POI recommendation directly, by treating a user's check-in sequence $S^u = \langle s^u_1, s^u_2, ..., s^u_{|S^u|} \rangle$ as the model input. If $|S^u| > \ell$, only the latest $\ell$ check-ins are kept; if $|S^u| < \ell$, the input sequence is padded by 0's at front. Fig.~\ref{fig:architecture}b illustrates the SASRec model structure. The input sequence of SASRec first goes through an embedding layer to convert every input element $s_i$ (e.g., a POI id $s^u_i.l$) into a $d$-dimensional vector $\mathbf{E_i} \in \mathbb{R}^d$. The embeddings are then fed into two stacking transformer layers. In the first transformer layer, as shown in Fig.~\ref{fig:architecture}a, the transformer input first goes through a self-attention layer to capture the pairwise dependency between the POI check-ins. Specifically, the embeddings of the input elements are concatenated rowwise to form a matrix $\mathbf{E} = [\mathbf{E_1}^{T}; \mathbf{E_2}^{T}; ... \mathbf{E_\ell}^{T}]^{T}$. Then, three linear projections on $\mathbf{E}$ are done using three projection matrices $\mathbf{W^{Q}} , \mathbf{W^{K}} , \mathbf{W^{V}} \in \mathbb{R}^{d\times d}$, respectively. These matrices will be learned by the model. The linear projections yield three matrices $\mathbf{Q} = \mathbf{E}\mathbf{W^Q}$, $\mathbf{K} = \mathbf{E}\mathbf{W^K}$, and $\mathbf{V} = \mathbf{E}\mathbf{W^V}$, respectively. The self-attention for $\mathbf{E}$, denoted by $sa(\mathbf{E})$, is then computed as follows, where $\mathbf{Q}$, $\mathbf{K}$, and $\mathbf{V}$ are let be the same: \begin{equation} sa(\mathbf{E}) = \textrm{softmax}(\frac{\mathbf{QK}^T}{\sqrt{d}}) \mathbf{V} \end{equation} To endow the model with non-linearity, a \emph{point-wise feed-forward network} is applied on $sa(\mathbf{E})$. Let $\mathbf{S_i}$ be the $i$-th output of of self-attention module. Then, the feed-forward network on $\mathbf{S_i}$ is computed as: \begin{equation} \mathbf{F_{i}}= \textrm{ReLU}(\mathbf{S_{i}W}^{(1)}+\mathbf{b}^{(1)})\mathbf{W}^{(2)}+\mathbf{b}^{(2)} \end{equation} Here, $\mathbf{W}^{(1)}, \mathbf{W}^{(2)} \in \mathbb{R}^{d\times d}$ and $\mathbf{b}^{(1)}, \mathbf{b}^{(2)} \in \mathbb{R}^{d\times 1}$ are learnable parameters. Layer normalization and dropout are adopted in between these layers to avoid overfitting. Two transformer layers are stacked to form a deeper network, where the point-wise feed-forward network of the first layer is used as the input as the second layer. Finally, the prediction output at position $i$ is produced by computing the relevance score $r_{l, i}$ between a POI $l$ and the position-$i$ output $\mathbf{F^*_{i}}$ of the point-wise feed-forward network of the second transformer layer. \begin{equation} r_{l,i} = \mathbf{F^*_{i}} \mathbf{E}(l)^T \end{equation} Here, $\mathbf{E}(l)$ denotes the embedding of $l$ fetched from the embedding layer. The POIs with the highest scores is returned. To train the model, the binary cross-entropy loss is used as the objective function: \begin{equation}\label{eq:loss} -\sum_{u \in U} \sum_{i=1}^{\ell} \Bigg[ \textrm{log}(\sigma(r_{s^u_i.l,i})) +\sum_{l \notin S^{u}} \textrm{log}(1-\sigma(r_{l,i}))\Bigg] \end{equation} \section{Proposed Model} As discussed earlier, while SASRec can be adapted to make next POI recommendations, a direct adaptation may produce sub-optimal results. This is because SASRec does not consider any spatial or temporal patterns, which are inherent in POI visit sequences and are critical for POI recommendations. In this section, we present our \emph{SANST} model to address this limitation via incorporating spatial and temporal pattern learning into self-attentive networks. As illustrated in Fig.~\ref{fig:architecture}, our SANST model shares the overall structure with SASRec. We detail below how to incorporate spatial and temporal pattern learning into this structure. \subsection{Spatial Pattern Learning} To enable SANST to learn the spatial patterns in POI check-in transitions, we update the embedding $\mathbf{E_i}$ of the $i$-th check-in of an input sequence to incorporate the location of the checked-in POI. This way, our SANST model can learn not only the transitions between the POIs (i.e., their IDs) but also the transitions between their locations. A straightforward approach to incorporate the POI locations is to concatenate the geo-coordinates of the POIs with the SASRec embedding $\mathbf{E_i}$. This approach, however, suffers from the data sparsity problem -- geo-coordinates of POIs are in an infinite and continuous space, while the number of POIs is usually limited (e.g., thousands). To overcome the data sparsity, we discritize the data space with a grid such that POI locations are represented by the grid cell IDs. We learn embeddings for the grid cell IDs (detailed next). Then, given a check-in $s^u_i$, we locate the grid cell in which the check-in POI $s^u_i.l$ lies. We use the grid cell ID embedding as the spatial embedding of $s^u_i$, denoted by $\mathbf{ES_i}$. We concatenate (denoted by $\oplus$) $\mathbf{ES_i}$ with the SASRec embedding $\mathbf{E_i}$ to form a new POI embedding in our SANST model, denoted by $\hat{\mathbf{E_i}}$: \begin{equation} \hat{\mathbf{E_i}} = \mathbf{E_i} \oplus \mathbf{ES_i} \end{equation} \subsubsection{Grid Partitioning and Grid Cell Encoding} We use a \emph{space-filling curve} to partition the data space and to number the grid cells. We encode the grid cell numbers into strings and use the encoded strings as the grid cell IDs. The purpose is to generate ID strings such that nearby grid cells have similar ID strings. This way, the cell IDs can preserve the spatial proximity of the corresponding cells. We adapt the \emph{GeoHash} technique to generate the strings as detailed below.\footnote{https://en.wikipedia.org/wiki/Geohash} Fig.~\ref{fig:geohash_overview} illustrates the steps of the grid partitioning and grid cell encoding scheme. A \emph{Z-curve} is used in this figure, although other space-filling curves such as \emph{Hilbert-curves} may be used as well. Suppose that an order-$n$ curve is used to partition the grid. Then, there are $2^n \times 2^n$ grid cells, and each curve value is represented by an $2n$-bit integer. We break a curve value into segments of $\gamma$ bits from the left to the right consecutively (a total of $\lceil n/\gamma \rceil$ segments). Each segment is then mapped to a character in a size-$2^\gamma$ alphabet. In the figure, we use $\gamma = 2$ and an alphabet of $2^\gamma = 4$ characters (i.e., `a', `b', `c', and `d'). As the figure shows, using this encoding, nearby grid cells obtain similar ID strings -- they share common prefixes by the definition of the curve values. The longer the common prefix is, the nearer the two cells are. In our experiments, we use $n=60, \gamma=5$, and $\lceil n/\gamma \rceil = 12$ . \begin{figure}[t] \begin{center} \includegraphics[width=0.95\columnwidth]{figs/GeoHash.pdf} \caption{Example of grid cell encoding} \label{fig:geohash_overview} \end{center} \end{figure} \subsubsection{Grid Cell ID Embedding Learning} Geohash encodes coordinates in a hierarchical way. It can create adjacent cells without a common prefix, e.g., cells 'cdb' and 'dca' in Fig.~\ref{fig:geohash_overview}. We address this problem by a natural language processing approach -- we learn an ID string embedding via learning character embeddings using a Bi-LSTM network. As Fig.~\ref{fig:architecture}c shows, each character (a randomly initialized $d_s$-dimensional vector to be learned) of an ID string (e.g., ``baca'') is fed into Bi-LSTM. The final output of Bi-LSTM in both directions are caught (i.e., $\mathbf{I}_{baca}$ and $\mathbf{R}_{baca}$), which are used as the ID string embedding (i.e., spatial embedding $\mathbf{ES_i}$) to form our POI check-in embedding $\hat{\mathbf{E_i}}$ in SANST. Since the character embeddings are trained jointly with the entire model, the characters which are adjacent to each other in the grid cells will have similar weights in there embedding vectors. Such as 'a' and 'b', 'a' and 'c'. Therefore, for the cells labeled with 'cbd' and 'dca', even though they do not share a common prefix, the adjacency relation between the characters in each layer (e.g., 'c' and 'd' in the top layer of the hierarchical hash codes) can still be captured. We envision that this spatial embedding method can not only be used in our problem but also other tasks that incorporate spatial information. \subsection{Temporal Pattern Learning} To learn the temporal patterns in POI check-in transitions, we follow~\cite{shaw2018self} to adapt the attention mechanism. We add a parameter to represent the relative position between two check-ins $s^u_i$ and $s^u_j$ in the input sequence. We define the relative position based on the time difference between $s^u_i$ and $s^u_j$ instead of the number of other check-ins between $s^u_i$ and $s^u_j$ (which was done by~\cite{shaw2018self}). This way, we model the temporal patterns explicitly and can better learn their impact. We detail our adaptation next. In self-attention, each output element $\mathbf{S_i}$ is computed as a weighted sum of linearly transformed input elements, i.e., $\mathbf{S_{i}} = \sum_{j=1}^{\ell}\alpha_{ij} (\mathbf{x_{j}}\mathbf{W^{V}})$, where $\mathbf{x_{j}}$ denotes the position-$i$ input element (e.g., a POI check-in). \cite{shaw2018self} add an edge between two input elements $\mathbf{x_{i}}$ and $\mathbf{x_{j}}$ to model their relative position in the input sequence. The impact of this edge is learned from two vectors $\mathbf{a_{ij}^{V}}$ and $\mathbf{a_{ij}^{K}}$, and the self-attention equation is updated to: \begin{equation} \mathbf{S_{i}} = \sum_{j=1}^{\ell}\alpha_{ij} (\mathbf{x_{j}}\mathbf{W^{V}}+\mathbf{a_{ij}^{V}}) \end{equation} Here, $\alpha_{ij}$ is computed as: \begin{equation} \alpha_{ij} = \text{softmax}(\frac{\mathbf{x_{i}W^{Q}}(\mathbf{x_{j}W^{K}}+\mathbf{a_{ij}^{K}})^{T}}{\sqrt{d}}) \end{equation} We adapt $\mathbf{a_{ij}^{K}}$ and $\mathbf{a_{ij}^{V}}$ as follows to learn the temporal pattern between $\mathbf{x_{i}}$ and $\mathbf{x_{j}}$ (i.e., the relative position in time): \begin{gather} \mathbf{a_{ij}^{K}} = \mathbf{w^{K}_{clip(T_{j}-T_{i},k)}} \\ \mathbf{a_{ij}^{V}} = \mathbf{w^{V}_{clip(T_{j}-T_{i},k)}} \\ clip(x,k) = \max\{-k,\min\{k,x\}\} \end{gather} Here, $T_{i}$ and $T_{j}$ represent the temporal label on input sequence at position ${i}$ and ${j}$, respectively. We compute the temporal label for the $i$-th input element of user $u$ as $T_i =(t_q - s_i^u.t)$. We then learn relative temporal representations $\mathbf{w^{K}}=(\mathbf{w^{K}_{-k}},...,\mathbf{w^{K}_{k}})$ and $\mathbf{w^{V}}=(\mathbf{w^{V}_{-k}},..., \mathbf{w^{V}_{k}})$, where $k$ is a hyperparameter that represents the size of the time context window that we examine. We illustrate these concepts in Fig.~\ref{fig:time_overview}, where each input element is a user check-in $s_i^u$. \begin{figure}[t] \begin{center} \includegraphics[width=0.95\columnwidth]{figs/relative.pdf} \caption{Temporal pattern learning} \label{fig:time_overview} \end{center} \end{figure} \subsection{Model Training} We use the same loss function as that of SASRec (i.e., Equation~\ref{eq:loss}) to train SANST. We randomly generate one negative POI for each time step in each sequence in each epoch. The model is optimized by the Adam optimizer. \section{Experiments} We perform an empirical study on the proposed model SANST and compare it with state-of-the-art sequential recommendation and next POI recommendation models. \begin{table}[t] \centering \small \setlength{\belowcaptionskip}{5pt}% \setlength{\tabcolsep}{2pt} \caption{Dataset Statistics}\ \begin{tabular}{lllll} \toprule Dataset &$\#$ users & $\#$ POIs & $\#$ check-ins & Time range\\ \midrule Gowalla & 10,162 & 24,250 & 456,988 & 02/2009-10/2010\\ Los Angeles & 3,202 & 4,368 & 101,327 & 03/2009-10/2010\\ Singapore & 2,321 & 5,596 & 194,108 & 08/2010-07/2011\\ \bottomrule \end{tabular} \label{tab:dataset} \end{table} \renewcommand{\arraystretch}{1} \begin{table*}[t] \centering \small \selectfont \caption{Summary of Results} \label{tab:performance_comparison} \begin{tabular}{clcccccc} \toprule & \multirow{2}{*}{Method} & \multicolumn{2}{c}{Gowalla} & \multicolumn{2}{c}{Los Angeles} & \multicolumn{2}{c}{Singapore} \cr \cmidrule(lr){3-4} \cmidrule(lr){5-6}\cmidrule(lr){7-8} &&hit@10&nDCG@10&hit@10&nDCG@10&hit@10&nDCG@10\cr \midrule \multirow{6}{*}{Baseline} & FPMC-LR & 0.1197 & 0.0741 & 0.2347 & 0.1587 & 0.1784 & 0.1017 \cr & POI2vec & 0.0939 & 0.0606 & 0.2370 & 0.1700 & 0.2063 & 0.1425 \cr & PRME-G & 0.1852 & 0.1083 & 0.2367 & 0.1592 & 0.1601 & 0.1049 \cr & SASRec & 0.2023 & 0.1209 & 0.3648 & 0.2337 & 0.2245 & 0.1429 \cr & SASRec+WF & 0.2096 & 0.1184 & 0.3304 & 0.2006 & 0.2137 & 0.1251 \cr & SASRec+2KDE & 0.1842 & 0.1113 & 0.3057 & 0.1912 & 0.1900 & 0.1154 \cr & LSTPM & 0.1361 & 0.0847 & 0.2366 & 0.1580 & 0.1777 & 0.1105 \cr \midrule \multirow{2}{*}{Variants}& SANS & 0.2248 & 0.1372 & 0.3891 & 0.2519 & 0.2417 & 0.1491 \cr & SANT & 0.2028 & 0.1245 & 0.3635 & 0.2264 & 0.2296 & 0.1441 \cr \midrule \textbf{Proposed} & \textbf{SANST}&{\bf0.2273}&{\bf0.1374}&{\bf0.3941}&{\bf0.2558}&{\bf0.2417}&{\bf0.1531}\cr & &(+8.44\%)&(+13.65\%)&(+8.03\%)&(+9.46\%)&(+7.66\%)&(+7.14\%) \cr \bottomrule \end{tabular} \end{table*} \subsection{Settings} We first describe our experimental setup, including datasets, baseline models, and model implementation details. \textbf{Datasets.} We evaluate the models on three real-world datasets: the \textbf{Gowalla} dataset~\cite{yuan2013time}, the \textbf{Los Angeles} dataset~\cite{cho2011friendship}, and the \textbf{Singapore} dataset~\cite{yuan2013time}. Table~\ref{tab:dataset} summarizes the dataset statistics. Following previous studies~\cite{feng2015personalized,cui2019distance2pre}, for each user's check-in sequence, we take the last POI for testing and the rest for training, and we omit users with fewer than five check-ins. \textbf{Baseline models.} We compare with six baseline models: \begin{itemize} \item \textbf{FPMC-LR}~\cite{cheng2013you}: This is a matrix factorization model that extends the factorized personalized Markov chain (FPMC) model~\cite{Rendle:2010:FPM:1772690.1772773} by factorizing the POI transition probability with users' movement constraints. \item \textbf{PRME-G}~\cite{feng2015personalized}: This is a metric embedding based approach to learn sequential patterns and individual preferences, and it incorporates spatial influence using a weight function based on spatial distance. \item \textbf{POI2Vec}~\cite{feng2017poi2vec}: This is an embedding based model. It adopts the word2vec model to compute POI and user embeddings. Recommendations are made based on the embedding similarity. \item \textbf{LSTPM}~\cite{sun2020lstpm}: This is an LSTM based model. It uses two LSTMs to capture users' long-term and short-term preferences and a geo-dilated RNN to model the non-consecutive geographical relation between POIs. \item \textbf{SASRec}~\cite{Kang2018Self}: This is the state-of-the-art sequential recommendation model as described in the Preliminaries section. \item \textbf{SASRec+WF}: We combine SASRec with spatial pattern learning by adopting a weight function~\cite{feng2015personalized} to weight the relevance score by the spatial distance to the last POI check-in. \item \textbf{SASRec+2KDE}: We combine SASRec with spatial pattern learning by adopting the \emph{two-dimensional kernel density estimation} (2KDE) model~\cite{zhang2014lore}. The 2KDE model learns a POI check-in distribution for a user based on the POI geo-coordinations. We weight the relevance score for a POI in SASRec by the probability of the POI learned by 2KDE. \end{itemize} \textbf{Model variants.} To study the contribution of the spatial and time pattern learning to the overall performance of SANST, we further compare with two model variants: \textbf{SANS} and \textbf{SANT}, which are SANST without time pattern learning and SANST without spatial pattern learning, respectively. \textbf{Implementation details.} For FPMC-LR, PRME-G, and POI2Vec, we use the code provided by~\cite{cui2019distance2pre} (we could not compare with the model proposed by~\cite{cui2019distance2pre} because they only provided implementations of their baseline models but not their proposed model). For LSTPM, we use the code provided by~\cite{sun2020lstpm}. For SASRec, we use the code provided by~\cite{Kang2018Self}, based on which our SANST model and its variants are implemented. The learning rate, regularization, POI embedding dimensionality, and batch size are set to 0.005, 0.001, 50, and 128 for all models, respectively. Other parameters of the baseline models are set to their default values that come with the original paper. For our SANST model and its variants, we use 2 transformer layers, a dropout rate of 0.3, a character embedding size $d_s$ of 20, and the Adam optimizer. We set the maximum POI check-in sequence length $\ell$ to 100, and the time context window size $k$ to be 3, 5 and 1 for Gowalla, Los Angeles and Singapore, respectively. We train the models with an Intel(R) Xeon(R) CPU @ 2.20GHz, a Tesla K80 GPU, and 12GB memory. We report two metrics: \emph{hit@10} and \emph{nDCG@10}. They measure how likely the ground truth POI is in the top-10 POIs recommended and the rank of the ground truth POI in the top-10 POIs recommended, respectively. \subsection{Results} We first report comparison results with baselines and then report the impact of model components and parameters. \textbf{Overall performance.} Table~\ref{tab:performance_comparison} summarizes the model performance. We see that our model SANST outperforms all baseline models over all three datasets consistently, in terms of both hit@10 and nDCG@10. The numbers in parentheses show the improvements gained by SANST comparing with the best baselines. We see that SANST achieves up to 8.44\% and 13.65\% improvements in hit@10 and nDCG@10 (on Gowalla dataset), respectively. These improvements are significant as confirmed by $t$-test with $p<0.05$. Among the baselines, SASRec outperforms FPMC-LR, POI2Vec PRME-G and LSTPM, which validates the effectiveness of self-attentive networks in making next item recommendations. However, adding spatial features to self-attentive networks with existing methods do not necessarily yield a better model. For example, both SASRec+WF and SASRec+2KDE produce worse results than SASRec for most cases tested. They learn spatial patterns and sequential patterns separately which may not model the correlations between the two factors accurately. Our spatial pattern learning technique differs from the existing ones in its ability to learn representations that preserve the spatial information and can be integrated with sequential pattern learning to form an integrated model. Thus, combining self-attentive networks with our spatial pattern learning technique (i.e., SANS) outperforms SASRec, while adding temporal pattern learning (i.e., SANST) further boosts our model performance. \textbf{Ablation study.} By examining the results of the two model variants SANS and SANT in Table~\ref{tab:performance_comparison}, we find that, while both spatial and temporal patterns may help next POI recommendation, spatial patterns appear to contribute more. We conjecture that this is due to the use of the same discretization (i.e., by day) across the check-in time of all users in SANT. Such a global discretization method may not reflect the impact of time for each individual user accurately. It impinges the performance gain achieved from the time patterns. Another observation is that, when both types of patterns are used (i.e., SANST), the model achieves a higher accuracy comparing with using either pattern. This indicates that the two types of patterns complement each other well. \begin{table}[t] \setlength{\tabcolsep}{2pt} \centering \small \caption{Impact of $\ell$ (NDCG@10)} \label{tab:effect_sequence_length} \begin{tabular}{lcccccc} \toprule Model & \multicolumn{2}{c}{Gowalla} & \multicolumn{2}{c}{Los Angeles} & \multicolumn{2}{c}{Singapore} \cr \cmidrule(lr){2-3} \cmidrule(lr){4-5} \cmidrule(lr){6-7} $\ell$ & SASRec & SANST & SASRec & SANST & SASRec & SANST \cr \midrule 20 & 0.1131 & 0.1177 & 0.2240 & 0.2369 & 0.1289 & 0.1299 \cr 50 & 0.1265 & 0.1301 & 0.2326 & 0.2409 & 0.1379 & 0.1431 \cr 100 & 0.1209 & 0.1374 & 0.2337 & 0.2558 & 0.1429 & 0.1531 \cr 200 & 0.1251 & 0.1391 & 0.2195 & 0.2355 & 0.1446 & 0.1506 \cr \bottomrule \end{tabular} \end{table} Next, we study the impact of hyperparameters and model structure on the performance of SANST. Due to space limit, we only report results on nDCG@10 in these experiments. Results on hit@10 show similar patterns and are omitted. \textbf{Impact of input sequence length $\ell$.} We start with the impact of the input sequence length $\ell$. As shown in Table~\ref{tab:effect_sequence_length}, SANST outperforms the best baseline SASRec across all $\ell$ values tested. Focusing on our model SANST, as $\ell$ grows from 20 to 100, its nDCG@10 increases. This can be explained by that more information is available for SANST to learn the check-in patterns. When $\ell$ grows further, e.g., to 200, the model performance drops. This is because, on average, the user check-in sequences are much shorter than 200. Forcing a large input length requires padding 0's at front which do not contribute extra information. Meanwhile, looking too far back may introduce noisy check-in patterns. Thus, a longer input sequence may not benefit the model. \textbf{Impact of character embedding dimensionality $d_s$.} We vary the character embedding dimensionality $d_s$ for our spatial pattern learning module from 10 to 30 and report the performance of SANST in Fig.~\ref{fig:ds_w}a. As the figure shows, our model is robust against the character embedding dimensionality (note the small value range in the $y$-axis. When $d_s=20$, the model has the best overall performance across the datasets. This relates to the fact that the size of the character vocabulary used to generate the location hashing strings is 32. If $d_s$ is much smaller than 32, the character embedding may not capture sufficient information from the hashing strings. Meanwhile, if $d_s$ is too large, the information captured by each dimension may become too weak due to data sparsity. Another observation is that the model performs much better on Los Angeles. This is because Los Angeles has the smallest number of POIs and average check-in sequence length per user. Its data space to be learned is smaller than those of the other two datasets. \begin{figure}[t] \centering \includegraphics[width=.45\textwidth]{figs/dk.pdf} \\ (a) Character embedding size\ \ (b) Time window size \caption{Impact of character embedding size $d_{S}$ and time context window size $k$} \label{fig:ds_w} \end{figure} \textbf{Impact of time context window size $k$.} We vary the time context window size $k$ for our temporal pattern learning module from 1 to 5 and report the results in Fig.~\ref{fig:ds_w}b. We see that SANST is also robust against the time context window size. We find the best time context window size to be strongly correlated to the time span and the average length of the check-in sequences in a dataset. In particular, Los Angeles covers a long time span (20 months) while its average check-in sequence length is the shortest. This means that its check-ins are less dense in the time dimension. Thus, it needs a larger time context window to achieve better performance, which explains for its increasing trend in nDCG@10 as $k$ increases towards 5. \textbf{Impact of model structure.} The transformer network can be stacked with multiple layers, while the attention network can have multi-heads. We show the impact of the number of transformer network layers $\tau$ and the number of heads $h$ in Table~\ref{tab:model_structure}. We see better results on Los Angeles with $\tau=2$, i.e., two transformer layers, which demonstrates that a deeper self-attention network may be helpful for learning the data patterns. However, deeper networks also have the tendency to overfit. The model performance drops when $\tau=2$ on the other two datasets. When $\tau=3$, the model is worse on all three datasets. We keep $\tau=2$ and further add more heads to the attention network. As the table shows, adding more heads does not help the model performance. This is opposed to observations in natural language processing tasks where different attention heads are more effective to capture various type of relations (e.g., positional and syntactic dependency relations)~\cite{voita-etal-2019-analyzing}. We conjecture that the relations between POIs are simpler (and without ambiguity) than those between words in natural language. Thus, a single-head attention is sufficient for our task. \begin{table}[t] \centering \small \caption{Impact of Model Structure (NDCG@10)} \label{tab:model_structure} \begin{tabular}{lccc} \toprule Structure & Gowalla & Los Angeles & Singapore \\ \midrule $\tau$=1, $h$=1 & \textbf{0.1426} & 0.2462 & \textbf{0.1539} \\ $\tau$=2, $h$=1 (default) & 0.1374 & \textbf{0.2558} & 0.1531 \\ $\tau$=3, $h$=1 & 0.1278 & 0.2416 & 0.1464 \\ \midrule $\tau$=2, $h$=2 & 0.1358 & 0.2516 & 0.1464 \\ \bottomrule \end{tabular} \end{table} \section{Conclusions} We studied the next POI recommendation problem and proposed a self-attentive network named SANST for the problem. Our SANST model takes advantage of self-attentive networks for their capability in making sequential recommendations. Our SANST model also incorporates the spatial and temporal patterns of user check-ins. As a result, SANST retains the high efficiency of self-attentive networks while enhancing their power in making recommendations for next POI visits. Experimental results on real-world datasets confirm the superiority of SANST -- it outperforms state-of-the-art sequential recommendation models and adapted baseline models that combine self-attentive networks with spatial features directly by up to 13.65\% in nDCG@10. \bibliographystyle{named}
{ "attr-fineweb-edu": 1.616211, "attr-cc_en_topic": 0, "domain": "arxiv" }
BkiUfws25V5jgCb64KOX
\section{Introduction} \label{sec:introduction} Football is a popular sport, European or World Championships, especially the finals, are among the most watched sporting events. The Euro 2016 Final was watched by more than 20 million people in France \cite{variety2016soccer}, or the Germany vs. France semifinal was watch by almost 30 million people in Germany \cite{variety2016soccer}. But what about Hungary? According to the MTVA (Media Services and Support Trust Fund), that operates the television channel M4~Sport, the first Hungarian match was watched by about 1.734 million people, the second by about 1.976 million and the third group match by about 2.318 million people\footnote{According to the Hungarian Central Statistical Office (\acrshort{ksh}), the population of Hungary was about 9.83 million in 2016 \cite{ksh22.1.1.1}.}. With these ratings, the M4~Sport, became the most watched television channel in Hungary, on those days \cite{hiradohu2016csoportgyoztes}. The whole participation of the Hungarian national football team was beyond expectations and raised interest, even among those, who generally, do not follow football matches. In the beginning, it might have been because Hungary returned to the European Championship after 44 years. Later, a good performance by the national football team, increased the interest. But, is it possible to measure/correlate this interest, with a mobile phone network? In this study, we analyzed the mobile phone network activity before, during and after the matches of the Hungarian national football team, but not directly at the location of the matches. The Call Detail Records (\acrshort{cdr}) analyzed in this paper, covers Budapest, the capital of Hungary. We present another example of social sensing using \acrshort{cdr}s, in an indirect and a direct way. Indirectly, as the mobile phone activity of the sport fans, residing in Budapest, are studied during a matches played in France. Directly, as the spontaneous festival on the streets of Budapest, after the third match, is presented from a data perspective. The rest of this paper is organized as follows. After a brief literature review in Section~\ref{sec:literature_review}, the utilized data is described in Section~\ref{sec:data}, then, in Section~\ref{sec:results} the results of this case study are introduced. Finally, in Section~\ref{sec:discussion}, the findings of the paper are summarized. \section{Literature Review} \label{sec:literature_review} Mobile phones can work as sensors, that detect the whereabouts and movement of their carrier. In this day and age, practically everyone has a mobile phone, that makes it possible to use large scale analyses. With enough data, the general mobility customs can also be studied. The home and work locations can be determined \cite{pappalardo2021evaluation}, and based on those locations, the commuting trends can be identified and validated with census data \cite{pinter2021evaluating}. Mobility indicators, such as Radius of Gyration or Entropy, are often calculated \cite{pappalardo2015returners,xu2018human} to describe and classify the subscribers' mobility customs. Furthermore, using mobility to infer about socioeconomic status is current direction of mobility analysis \cite{xu2018human,pinter2021evaluating} \acrshort{cdr} analysis is often used \cite{traag2011social,xavier2012analyzing,mamei2016estimating,marques2018understanding,pinter2019activity,rotman2020using,hiir2020impact} for large social event detection. When thousands of people are on the same place at the same time, they generate a significant `anomaly' in the data, whereas small groups usually do not stand out from the `noise'. This is especially true when the passive, transparent communication between the mobile phone device and the cell are not included in the data, but only the active communication (voice calls, text messages and data transfer) are recorded. In \cite{pinter2019activity} and \cite{rotman2020using}, mass protests are analyzed via mobile phone network data. In \cite{traag2011social}, \cite{mamei2016estimating}, \cite{xavier2012analyzing} and \cite{hiir2020impact}, the authors examined the location of stadiums, where the a football matches took place. Traag et al. \cite{traag2011social} and Hiir et al. \cite{hiir2020impact} also found that the mobile phone activity of the attendees decreased significantly, also used z-score to express the activity deviation during the social event from the average \cite{traag2011social}. Xavier et al. compared the reported number of attendees with the detected ones. These works also analyze other social events like concerts and festivals. However, in this paper, the actual matches took place in another country (France), and the local fans are examined. Mobile phone network data is also used to analyze the human mobility during COVID-19 pandemic and the effectiveness of the restrictions. Willberg et al. identified a significant decrease of the population presence in the largest cities of Finland after the lockdown compared to a usual week \cite{willberg2021escaping}. Bushman et al. analyzed the compliance to social distancing in the US using mobile phone data \cite{bushman2020effectiveness}. Gao et al. found negative correlation in stay-at-home distancing and COVID-19 increase rate \cite{gao2020association}. Still, these analyses might not be common enough. Oliver et al. asked the question: `Why is the use of mobile phone data not widespread, or a standard, in tackling epidemics?' \cite{oliver2020mobile}. This, however, is not within the scope of this paper. \section{Data} \label{sec:data} Vodafone Hungary, one of the three mobile phone operators providing services in Hungary, provided anonymized \acrshort{cdr} data for this study. The observation area was Budapest, capital of Hungary and its agglomeration, and the observation period is one month (June, 2016). In 2016 Q2, the nationwide market share of Vodafone Hungary was 25.3\% \cite{nmhh_mobile_market_report}. This data set contains \num{2291246932} records from \num{2063005} unique \acrshort{sim} cards, and does not specify the type of the activity (voice call, text message, data transfer). \begin{figure}[ht] \centering \includegraphics[width=\linewidth]{figures/sim_activity} \caption{\acrshort{sim} cards categorized by the number of activity records. The \acrshort{sim} cards with more than 1000 activity records (26.98\% of the \acrshort{sim} cards) provide the majority (91.31\%) of the activity.} \label{fig:vod201606_sim_activity} \end{figure} Figure~\ref{fig:vod201606_sim_activity}, shows the activity distribution between the activity categories of the \acrshort{sim} cards. The dominance of the last category, the \acrshort{sim} cards with more than 1000 activity records, is even more significant. This almost 27\% of the \acrshort{sim} cards produce the more the 91\% of the activity. Figure~\ref{fig:vod201606_activity_by_days}, shows the \acrshort{sim} card distribution by the number of active days. Only the 34.59\% of the \acrshort{sim} cards have activity on at least 21 different days. There are \num{241824} \acrshort{sim} cards (11.72\%), that has appearance at least two days, but the difference between the first and the last activity is not more the seven days. This may indicate the presence of tourists, that is usual in this part of the year. \begin{figure}[ht] \centering \includegraphics[width=\linewidth]{figures/sim_activity_by_days} \caption{\acrshort{sim} card distribution by the number of active days.} \label{fig:vod201606_activity_by_days} \end{figure} The received data is in a `wide' format, where all of fields are present for every record, that contains a \acrshort{sim} ID, a timestamp, cell ID, the base station (site) coordinates in \acrshort{wgs84} projection, the subscriber (age, sex) and subscription details (consumer/business and prepaid/postpaid). While the subscription details are available for every \acrshort{sim} cards, the subscriber information is missing in slightly more than 40\% of the cases, presumably because of the subscribers' preferences of personal data usability. \begin{figure}[ht] \centering \includegraphics[width=\linewidth]{figures/daily_activity} \caption{Number of daily activity records, during two weeks of June, 2016. The matches of the Hungarian national football team took place from June 14 to June 26.} \label{fig:vod201606_daily_activity} \end{figure} Figure~\ref{fig:vod201606_daily_activity}, shows number of daily activity records during the second half of the month. Weekends (brown bars) show significantly fewer activity, hence the activity during the matches compared to the weekday or weekend activity average, according to the day of the match. Although the data contains cell IDs, only the base station locations are known, where the cell antennas are located. As a base station usually serve multiple cells, these cells has been merged by the serving base stations. After the merge, 665 locations (sites) remained with known geographic locations. To estimate the covered area of these sites, the Voronoi tessellation, has been performed on the locations, that is common practice \cite{pappalardo2016analytical,csaji2013exploring,vanhoof2018comparing,candia2008uncovering,novovic2020uncovering,trasarti2015discovering} for \acrshort{cdr} processing. \section{Results} \label{sec:results} This section , Budapest downtown was analyzed spatially, and the \acrshort{cdr}s were filtered temporally, to select the match durations including two hours before and after the matches. This section discusses the results in the order of the Hungarian Euro 2016 matches. \subsection{Austria vs. Hungary} The first match was against Austria (Figure~\ref{fig:aut_hun_timeseries}) on Tuesday, June 14, 2016. Before the match, the activity level is significantly higher than the average of the weekdays, then decreases until the half-time. During the second half, the activity level went below the average, as if more and more people started to watch the match and cease their other activities. Right after the Hungarian goals, there are two peaks in the activity. Unfortunately, the data source cannot distinguish the mobile phone activities by type, so it cannot be known what kind of activities caused the peaks. It is supposed that mostly data transfer, maybe text messages, instead of phone calls. It simply does not seem to be lifelike to call someone during the match just because of a goal, but sending a line via one of the popular instant messaging services is feasible. \begin{figure}[ht] \centering \includegraphics[width=\linewidth]{figures/aut_hun_20160614_16-22} \caption{Mobile phone activity during and after the Austria--Hungary Euro 2016 match, in comparison with the average activity of the weekdays.} \label{fig:aut_hun_timeseries} \end{figure} \subsection{Iceland vs. Hungary} The second match was against Iceland on Saturday, June 18, 2016. Figure~\ref{fig:isl_hun_timeseries}, shows the mobile phone activity levels before, during and after the match. As the weekend activity is generally lower (see Figure~\ref{fig:vod201606_daily_activity}), the average of the weekdays are used as a reference. The match began at 18:00, and from that point, the activity level is significantly below the average, except the half-time break and, again, the peak after the Hungarian goal. Interestingly, the Icelandic goal does not result such a significant peak, only a very moderate one can be seen in the time series. \begin{figure}[ht] \centering \includegraphics[width=\linewidth]{figures/isl_hun_20160618_16-22} \caption{Mobile phone activity during and after the Iceland--Hungary Euro 2016 match, in comparison with the average activity of the weekends.} \label{fig:isl_hun_timeseries} \end{figure} \subsection{Hungary vs. Portugal} On Wednesday, June 22, 2016, as the third match of the group state of the 2016 UEFA European Football Championship, Hungary played draw with Portugal. Both teams scored three goals and with this result, Hungary won their group and qualified for the knockout phase. During the match, the mobile phone activity dropped below the average, as before, but the goals against Portugal resulted significant peaks, especially the first one (see Figure~\ref{fig:hun_prt_timeseries}). The Portuguese equalizer goal(s) did not cause significant mark in the activity. In the second half, the teams scored four goals in a relatively short time period, but only the Hungarian ones resulted peaks. After the match, the activity level is over the average, that may represent the spontaneous festival in Budapest downtown. According to the \acrshort{mti} (Hungarian news agency), thousands of people celebrated on the streets, starting from the fan zones, mainly from the Erzsébet square (Figure~\ref{fig:post_match_festival} a), the Margaret Island (Figure~\ref{fig:post_match_festival} b) and Erzsébet square (Figure~\ref{fig:post_match_festival} c) towards Budapest Nyugati railway station. The Grand Boulevard was completely occupied and the public transportation was disrupted along the affected lines. \begin{figure}[ht] \centering \includegraphics[width=\linewidth]{figures/hun_prt_20160622_16-22} \caption{Mobile phone activity during and after the Hungary--Portugal Euro 2016 match, in comparison with the average activity of the weekdays.} \label{fig:hun_prt_timeseries} \end{figure} Figure~\ref{fig:post_match_festival_timeseries}, shows the activities of the sites (multiple cells aggregated by the base stations) in Budapest downtown. The highlighted site covers mostly the Szabadság square (for the location, see Figure~\ref{fig:post_match_festival} a), where one of the main fan zone was set up, with big screen and so on. The activity curve actually follows the trends of the whole data set (see Figure~\ref{fig:hun_prt_timeseries}). There is high activity before the match, during half-time and, for a short period, after the match. During the match, the activity decreases with the except of four not so significant peaks around the goals. In the highlighted site, in Figure~\ref{fig:post_match_festival_timeseries}, almost 10 thousand \acrshort{sim} cards had been detected between 17:00 and 20:00. 50.26\% of the subscribers were between 20 and 50 years old, 35.8\% of them, had no age data. After the match, there is a significant increase in the activity in some other sites. These sites are (mostly) around the Grand Boulevard, where the fans marched, celebrating the advancement of the national football team, to the knockout phase. Figure~\ref{fig:post_match_festival}, shows spatial distribution of this social event, using Voronoi polygons generated around the base stations locations. The polygons are colored by the mobile phone network activity increase, compared to average of the weekday activity, at 20:20. For the comparison, the standard score \footnote{The standard score (or z-score) is defined as ${z = \frac{x-\mu}{\sigma}}$, where $\mu$ is the mean and $\sigma$ is the standard deviation.} was determined for every base station with a 5-minute temporal aggregation. The darker colors indicate the higher activity surplus in an area. For the details of determining the thresholds, see Appendix~\ref{app:zscore_thresholds}. The figure also denotes the three main fan zones in the area, route of the fans by arrows, and the affected streets are emphasized. \begin{figure}[ht] \centering \includegraphics[width=\linewidth]{figures/post_match_festival_timeseries} \caption{Site activities, in Budapest downtown, on the day of the Hungary vs. Portugal football match (June 22, 2016). The highlighted site covers mostly the Szabadság square, where one of the main fan zone was set up.} \label{fig:post_match_festival_timeseries} \end{figure} \begin{figure}[ht] \centering \includegraphics[width=\linewidth]{figures/post_match_festival} \caption{After the Hungary vs. Portugal football match, the fans, delirious with joy, filled the streets. The arrows show their route from the main fan zones to and along the Grand Boulevard. Voronoi polygons colored by the mobile phone network activity at the peak of the event, at 20:20.} \label{fig:post_match_festival} \end{figure} \subsection{Hungary vs. Belgium} On Sunday, June 26, 2016, Hungary played the fourth and last Euro 2016 match against Belgium. Figure~\ref{fig:hun_bel_timeseries}, shows the mobile phone network activity before, during and after the match. During the match, the activity level was below the weekend average. Although, after the match, the activity is slightly higher than the average, but the match ended late (on Sunday), when the activity average is very low. This activity surplus may only indicate that the fans were simply leaving the fan zones and going home. \begin{figure}[ht] \centering \includegraphics[width=\linewidth]{figures/hun_bel_20160626_16-22} \caption{Mobile phone activity during and after the Hungary--Belgium Euro 2016 match, in comparison with the average activity of the weekends.} \label{fig:hun_bel_timeseries} \end{figure} \section{Discussion} \label{sec:discussion} In this case study, we demonstrated that mobile phone network activity follows precisely the football fans' behavior, even if the matches are played in another country. So, in this analysis, people watched the matches on TV (at home) or big screens in the fan zones, but not in the stadium, where the matches were played. The time series clearly show that the activity was below the average during the matches, indicating that many people did nothing other than rooting for their team. This coincide with other studies, where the activity of cells at the stadium was analyzed. However, this study does not focused on a small location like a stadium, but a large city, where people watched the matches on screens. We managed to demonstrate that a remote football match can also have significant effect on the mobile phone network. Moreover, the joy felt after the Hungarian goals, is clearly manifested in the data, as sudden activity peaks. The spontaneous festival after the Hungary vs. Portugal match is, however, a direct application of social sensing and comparable to mass protests from a data perspective. During the event, the mobile phone network activity was significantly higher than the average in the affected areas. We also presented an analysis of a fan zone, that is comparable to other studies, where smaller locations were analyzed. The mobile phone network activity at the fan zone, shows similar trend as whole data set. The activity decreases during the match, except for the half-time. There are also peaks right after the goals, although not so significant ones. \vspace{6pt} \section*{Author Contributions} Conceptualization, G.P. and I.F.; methodology, G.P.; software, G.P.; validation, G.P.; formal analysis, G.P.; investigation, I.F.; resources, I.F.; data curation, G.P.; writing---original draft preparation, G.P.; writing---review and editing, G.P.; visualization, G.P.; supervision, I.F.; project administration, I.F.; funding acquisition, I.F. All authors have read and agreed to the published version of the manuscript. \section*{Funding} This research supported by the project 2019-1.3.1-KK-2019-00007 and by the Eötvös Loránd Research Network Secretariat under grant agreement no. ELKH KÖ-40/2020. \section*{Acknowledgments} The authors would like to thank Vodafone Hungary for providing the Call Detail Records for this study. For plotting the map, OpenStreetMap data is used, that is copyrighted by the OpenStreetMap contributors and licensed under the Open Data Commons Open Database License (ODbL). \section*{Conflicts of Interest} The authors declare no conflict of interest. The funders had no role in the design of the study; in the collection, analyses, or interpretation of data; in the writing of the manuscript, or in the decision to publish the results. \printglossary[title=Abbreviations, toctitle=Abbreviations, nogroupskip=true] \begin{appendices}
{ "attr-fineweb-edu": 1.725586, "attr-cc_en_topic": 0, "domain": "arxiv" }
BkiUdog25V5itcKZFtR1
\section{Introduction} \section{Introduction}\label{sec:intro} Quantitative analytics have been a key driving force for advancing modern professional sports, and there is no exception for professional basketball \citep{kubatko2007starting}. In professional basketball, analyses of shooting patterns offer important insights about players' attacking styles and shed light on the evolution of defensive tactics, which has aroused substantial research interests from the statistical community \citep[e.g.,][]{reich2006spatial, miller2014factorized, jiao2019bayesian, hu2020bayesiangroup}. As shot location data are naturally represented by spatial points, developments of novel methods for analyzing spatial point patterns are of fundamental importance. The literature on spatial point pattern data is voluminous \citep[see, e.g.,][]{illian2008statistical, diggle2013statistical, guan2006composite, guan2010weighted, baddeley2017local, jiao2020heterogeneity}. The most frequently adopted class of models in empirical research is nonhomogeneous Poisson processes (NHPP), or more generally, Cox processes, including log-Gaussian Cox process \citep{moller1998log}. Such parametric models impose restrictions on the functional forms of underlying process intensity, which can suffer from underfitting of data when there is a misfit between the complexity of the model and the data available. In contrast, nonparametric approaches provide more flexibility compared to parametric modeling as the underfitting can be mitigated by using models with unbounded complexity. Several important features of the shot location data need to captured in any realistic nonparametric method. First, near regions are highly likely to have similar intensities. This makes that certain spatial contiguous constraints on the intensity surface are desirable. Existing method mixture of finite mixtures (MFM) of nonhomogeneous Poisson processes \citep{geng2019bayesian} is lack of this aspect. Second, spatial contiguous constraints should not dominate the intensity surface globally \citep{hu2020bayesian, zhao2020bayesian}. This is because two spatially disconnected regions that are sufficiently similar with respect to the intensity values can still belong to the same cluster. For example, a player may shoot equally frequently at the two corners due to the symmetry of the court, which is not well accommodated by the penalized method \citep{li2019spatial}. Finally, the extent to which the spatial contiguous affects the intensity surface may differ from player to player, and needs to be learned from the data. To address these challenges, we consider a spatially constrained Bayesian nonparametric method for point processes to capture the spatial homogeneity of intensity surfaces. Our contributions are three-fold. First, we develop a novel nonparametric Bayesian method for intensity estimation of spatial point processes. Compared to existing methods, the proposed approach is capable of capturing both locally spatially contiguous clusters and globally discontinuous clusters and the number of clusters. Second, an efficient Markov chain Monte Carlo (MCMC) algorithm is designed for our model without complicated reversible jump MCMC. Lastly, we gain important insights about the shooting behaviors of NBA players based on an application to their shot location data. The rest of the paper is organized as follows. In Section \ref{sec:data}, we introduce and visualize the shot charts of several representative NBA players from the 2017–2018 regular season. In Section~\ref{sec:model}, we introduce spatial point processes for field goal attempts and develop a nonparametric Bayesian method for capturing the spatial homogeneity of intensity surface. Detailed Bayesian inference procedures, including a collapsed Gibbs sampler and a post MCMC inference method, are presented in Section~\ref{sec:bayesInf}. Extensive simulation studies are reported in Section~\ref{sec:simu}. The method is applied to $20$ key NBA players' shot location data from the 2017-2018 regular season in Section~\ref{sec:app}. Section~\ref{sec:disc} concludes with a discussion. For ease of exposition, additional results are relegated to the Supplementary Material. \section{NBA Shot Location Data}\label{sec:data} Shot chart data for NBA players from the 2017--2018 regular season were retrieved from the official website of NBA \url{stats.nba.com}. The data for each player contain information about all his shots in regular season including game date, opponent team name, game period when each shot was made (4~quarters and a fifth period representing extra time), minutes and seconds left, success indicator (0~represents missed and 1~represents made), action type (like ``Cutting dunk shot'', ``Jump shot'', etc.), shot type (2-point or 3-point shot), shot distance, and shot location coordinates. From the data, the half court is positioned on a Cartesian coordinate system centered at the center of rim, with $x$ ranging from $-250$ to 250 and $y$ ranging from $-50$ to 420, both with unit of 0.1 foot (ft), as the size of an actual NBA basketball half court is $50 \ \text{ft} \times 47 \ \text{ft}$. \begin{figure}[th] \centering \includegraphics[width = \textwidth]{plots/EDAplot.png} \caption{Shot data Display. On half court image, each point represents one shot. From left to right: Stephen Curry, Kevin Durant, James Harden, DeMar DeRozan.} \label{fig:EDA} \end{figure} \begin{table}[tbp] \centering \caption{Shot data summary. Period is for the 1st, 2nd, 3rd, 4th quarter and the extra time.} \label{table:EADsummary} \begin{tabular}{cccc} \toprule Player & Shot Count & 2PT shot percentage ($\%$) & Period percentage ($\%$)\\ \midrule Stephen Curry & 753 & 42.6 & (35.0, 20.6, 34.3, 9.7, 0.4)\\ James Harden & 1306 & 50.2 & (28.7, 22.4, 27.9, 20.8, 0.3)\\ Kevin Durant & 1040 & 66.5 & (30.8, 23.8, 30.6, 14.6, 0.3)\\ DeMar DeRozan & 1274 & 79.9 & (29.1, 28.6, 33.3, 17.3, 1.6)\\ \bottomrule \end{tabular} \end{table} We visualize and summarize the shot data of four key players, Stephen Curry, James Harden, Kevin Durant and DeMar DeRozan. Figure~\ref{fig:EDA} shows their field goal attempts' locations and Table~\ref{table:EADsummary} summarizes their other field goal attempts information. As we can see from the plots, most of the shots are made either close to the rim or right outside the arc (i.e., 3-point line). This is in line with the recent trend in the basketball development since it is more efficient for players to pursue higher success rates near the rim or go after higher rewards by making 3-point shots. \section{Model}\label{sec:model} \subsection{NHPP} Spatial point process models provide a natural framework for capturing the random behavior of event location data. Let $\mathbf{S} = \{\bm{s}_1, \bm{s}_2, \dots, \bm{s}_N\}$ with $\bm{s}_i = (x_i, y_i)$, $i=1,\ldots,N$, be the set of observed locations in a pre-defined, bounded region $\mathcal{B} \subseteq \mathcal{R}^{2}$. Let the underlying stochastic mechanism that gives rise to the observed point pattern $\mathbf{S}$ be denoted as spatial point process $\mathbf{Y}$. Process $N_{\mathbf{Y}}(A) = \sum_{i=1}^{N} \mathbbm{1}(\bm{s}_{i} \in A)$ is a counting process associated with $\mathbf{Y}$, which counts the number of points falling into area $A \subseteq \mathcal{B}$. The NHPP model assumes conditionally independent event locations given the process intensity $\lambda(\mathbf{s})$. For an NHPP, the number of events in area~$A$, $N_{\mathbf{Y}}(A)$, follows Poisson distribution with rate parameter $\lambda(A) = \int_{A} \lambda(\mathbf{s}) \mathrm{d} \mathbf{s}$. In addition, $N_{\mathbf{Y}}(A_{1})$ and $N_{\mathbf{Y}}(A_{2})$ are independent if two areas $A_1 \subseteq \mathcal{B}$ and $A_2 \subseteq \mathcal{B}$ are disjoint. Given the observed point pattern $\mathbf{S}$ on fixed region $\mathcal{B}$, the likelihood of the NHPP model is \begin{equation} \label{eq:NHPP_lik} \frac{\prod_{i=1}^{N} \lambda(\mathbf{s}_{i})} {\exp(\int_{\mathcal{B}} \lambda(\mathbf{s}) d\mathbf{s})}, \end{equation} where $\lambda(\mathbf{s}_{i})$ is the intensity function evaluated at location $\mathbf{s}_{i}$. The NHPP reduces to a homogeneous Poisson process (HPP) when $\lambda(\mathbf{s})$ is constant over the entire study region $\mathcal{B}$, and it is synonymous with \emph{complete spatial randomness} (CSR). \subsection{Nonparametric Bayesian Methods for NHPP} As the CSR assumption over the entire study region rarely holds in real-world problems, and to simplify the potentially overcomplicated problem induced by complete non-homogeneity on intensity values, \citet{teng2017bayesian} proposed to approximate the intensity function $\lambda(\bm{s})$ by a piecewise constant function. Specifically, the study region $\mathcal{B}$ is partitioned into $n$ disjoint sub-regions and the intensity over each sub-region is assumed to be constant. Let $A_{1}, A_{2}, \ldots, A_{n}$ be a partition of $\mathcal{B}$, i.e., $\cup_{i=1}^{n} A_{i} = \mathcal{B}$ and $A_{i} \cap A_{j} = \emptyset, \forall i \neq j$. For each region $A_{i}, i = 1, \ldots, n$, we have $\lambda(\mathbf{s}) = \lambda_{i}, \forall \ \mathbf{s} \in A_{i}$. Therefore, the likelihood~\eqref{eq:NHPP_lik} can be written as \begin{equation} \label{eq:NHPP_Poisson_lik} \prod_{i=1}^{n} f_{\text{poisson}}(N_{\mathbf{Y}}(A_{i}) | \lambda_{i} \mu(A_{i})), \end{equation} where $\mu(A_{i} = \int_{A_{i}} 1 d\bm{s}$ is the area of sub-region $A_i$ $f_{\text{poisson}}(\cdot | \lambda)$ is the probability mass function of the Poisson distribution with rate parameter $\lambda$. For ease of notation, we use $N(A_{i})$ for $N_{\mathbf{Y}}(A_{i})$ in the sequel. The heterogeneity in the intensity function across different sub-regions can be naturally represented through a latent clustering structure. The conventional finite mixture modeling framework \citep{mclachlan1988mixture, bouveyron2019model} assumes that the heterogeneity can be characterized by a discrete set of subpopulations or clusters, such that the points located in the sub-regions belonging to any given subpopulation tend to be produced by similar intensities. The selection of the number of clusters (or components) in finite mixture models are often recasted as statistical model selection problems which can solved using information criteria \citep{fraley2002model} or cross-validation \citep{fu2020estimating}, among others. Despite many successful applications in empirical research, such model selection procedures are fraught with in that they ignore the uncertainty in the number of clusters, which may in turn lead to increased erroneous cluster assignments. The Chinese restaurant process (CRP) \citep{pitman1995exchangeable, neal2000markov} mixture models is a class of Bayesian nonparametric approaches that offer a powerful alternative to conventional finite mixture models. Under CRP mixture models, the latent cluster indicator variables $\mathbf{Z} = (Z_1, Z_2, \ldots, Z_n)$ are distributed according to a CRP, which is defined through the following conditional distributions or a P\'{o}lya urn scheme \citep{blackwell1973ferguson} \begin{align} \label{eq:polya_urn} \Pr(Z_i = c | Z_j, j< i; \alpha) \propto \begin{cases} |c|, & c \text{ is an existing cluster label} \ $c$, \\ \alpha, & \text{otherwise}, \end{cases} \end{align} where $|c|$ refers to the size of cluster labeled $c$, and $\alpha$ is the concentration parameter of the underlying Dirichlet process (DP). Specifically, Equation~\eqref{eq:polya_urn} implies that the trivial partition $\left\{\left\{1 \right\}\right\}$ is obtained with probability~$1$ at the beginning, and one new element is either added to one of the existing blocks of the partition $\mathcal{C}_{n}$ with probability $|c|/(n+\alpha)$ or to the partition $\mathcal{C}_{n}$ as a new singleton block with probability $\alpha/(n+\alpha)$ in subsequent steps. More intuitively, we may think of customers choosing tables in a restaurant, where the first customer always chooses the first table and the $i$th customer chooses the first unoccupied table with probability $\alpha/(n+\alpha)$ and an occupied table with probability proportional to the number of customers currently sitting at that table (i.e., how popular this table is), $|c|/(n+\alpha)$. The CRP mixture models admit the uncertainty in the number of clusters and circumvents the model selection problems by allowing the number of clusters and cluster assignments to be inferred simultaneously. To facilitate statistical inference, it is often helpful to consider the full conditional distributions of $Z_{i}$, $i = 1, \ldots, n$ under the CRP, \begin{align} \label{eq:CRP_Z_full_cond} \Pr(Z_i = c | \bm{Z}_{-i}, \alpha) \propto \begin{cases} n_{c}(\bm{Z}_{-i}), & \text{at an existing cluster label} \ $c$, \\ \alpha, & \text{at a new cluster}, \end{cases} \end{align} where $\bm{Z}_{-i} = \{Z_j: j \ne i\}$, and $n_{c}(\bm{Z}_{-i}) = \sum_{i=1,i \neq j}^{n} \mathbbm{1}(Z_{i}=c)$ is the number of observations in cluster labeled~$c$ without the $i$th~observation. That is, each $Z_{i}$ is either a new label with probability proportional to $\alpha$, or an existing label~$c$ with probability proportional to the observations assigned to cluster~$c$, $n_{c}(\bm{Z}_{-i})$. The concentration parameter~$\alpha$ controls the distribution of the number of clusters, with smaller values of $\alpha$ favoring smaller number of clusters \emph{a~priori}. We formulate a Bayesian hierarchical CRP-NHPP model as \begin{equation} \begin{split} Z_{1}, \ldots, Z_{n} &\sim \text{CRP}(\alpha),\\ \lambda_1, \ldots, \lambda_K &\stackrel{i.i.d.}{\sim} \mathrm{Gamma}(a, b)\\ N(A_{i}) \mid \bm{\lambda}, Z_{1}, \ldots, Z_{n} & \sim \text{Poisson}(\lambda_{Z_{i}}\mu(A_{i})), \ \text{independently for} \ i=1,\ldots,n, \end{split} \label{eq:CRP_NHPP} \end{equation} where $K$ is the number of components of the piecewise constant function, $\bm{\lambda} = (\lambda_1, \lambda_2, \dotsm \lambda_K)$ is the vector of the unique values of the intensity function, $\text{Gamma}(a, b)$ is the gamma distribution with shape $a$ and rate $b$, $(a, b, \alpha)$ are hyperparameters for the prior distributions, and $i.i.d.$ stands for indepent and identicall distributed. The hyperparameters can be specified according to the \emph{a~priori} information available for the practitioners. An alternative model which will be used as a benchmark for comparison is the MFM of NHPP (MFM-NHPP) \citep{geng2019bayesian}. This model is built upon the MFM modeling framework \citep{miller2018mixture} to mitigate the potential \emph{inconsistency} in estimating the number of clusters caused by CRP mixture. The MFM-NHPP model is \begin{equation} \begin{split} K &\sim p_{K},\\ \lambda_{k} &\stackrel{i.i.d.}{\sim} \mathrm{Gamma}(a, b),\quad k = 1, 2, \dots, K, \\ (\pi_1,\ldots,\pi_k) \mid K = k & \sim \text{Dirichlet}_{k}(\alpha,\ldots,\alpha), \\ Z_{1}, \ldots, Z_{n} \mid \pi_1, \ldots, \pi_k &\stackrel{i.i.d.}{\sim} \text{Categorical}_k(\pi_1, \ldots, \pi_K), \\ N(A_{i}) \mid \bm{\lambda}, Z_{1}, \ldots, Z_{n} & \sim \text{Poisson}(\lambda_{Z_{i}}\mu(A_{i})), \ \text{independently for} \ i=1,\ldots,n, \end{split} \label{eq:MFM_NHPP} \end{equation} where $p_{K}$ is the prior distribution on the number of clusters, that is, a probability mass function on $\mathbb{N} = \left\{1,2,\ldots \right\}$. \citet{geng2019bayesian} adopt Poisson with mean~1 truncated to be positive for $p_K$ as recommended by \citet{miller2018mixture}, which we follow here. \subsection{Incorporating Spatial Homogeneity} Spatial events typically obey the so-called first law of geography, ``everything is related to everything else, but near things are more related than distant things'' \citep{tobler1970computer}. This means spatial smoothness, also known as spatial homogeneity. To incorporate such spatial homogeneity, we propose to modify~\eqref{eq:CRP_Z_full_cond} by adding a multiplier that encourages the customer to choose the table where many spatial neighbors are also sitting at. In particular, we consider the following full conditionals \begin{align} \label{eq:SCCRP_Z_full_cond} & \Pr(Z_i = c | \bm{Z}_{-i}, \alpha, \eta, D) \\ \propto & \begin{cases} n_{c}(\bm{Z}_{-i}) \exp \big(\eta \sum_{j \in \partial(i)}d_{ij} \mathbbm{1}(Z_j = c)\big), & c \text{ is an existing cluster labeled} \ $c$, \\ \alpha, & \text{otherwise}, \end{cases} \end{align} where $D$ comprises the information on spatial distance and neighbor relationships, $\partial (i)$ represents the set of spatial neighbors of the~$i$th customer (observation), $d_{ij}$ denotes the spatial distance between the~$i$th and the~$j$th customer (observation), and $\eta \geqslant 0$ is a smoothing parameter controlling the relative weight of spatial homogeneity. The full conditionals~\eqref{eq:SCCRP_Z_full_cond} can be specified by DP mixtures (DPM) constrained by a Markov random field (MRF) \citep{orbanz2008nonparametric}. Combining the NHPP with the MRF constrained DPM, we have a MRF-DPM-NHPP model \begin{equation} \begin{split} G &\sim \text{DP}(\alpha,G_{0}) \\ (\lambda_{1},\ldots,\lambda_{n}) &\sim M_{\eta, D}(\lambda_{1},\ldots,\lambda_{n}) \prod_{i=1}^{n} G(\lambda_{i}) \\ N(A_{i}) \mid \lambda_1, \ldots, \lambda_n &\sim \text{Poisson}(\lambda_{i}) \ \text{independently for} \ i=1,\ldots,n, \end{split} \label{eq:MRF_DPM_NHPP} \end{equation} where $DP(\alpha, G_0)$ is a DP with base measure $G_0 \equiv \text{Gamma}(a,b)$ and concentration parameter $\alpha$, $G(\lambda_{i})$ is defined by a single draw from a DP \citep{ferguson1973bayesian}, and $M_{\eta,D}(\lambda_{1},\ldots,\lambda_{n})$ is a MRF with full conditionals \[ M_{\eta,D}(\lambda_{i} | \bm{\lambda}_{-i}) = M_{\eta,D}(\lambda_{i} | \bm{\lambda}_{\partial (i)}) \propto \exp\big(\eta \sum_{j \in \partial(i)}d_{ij} \mathbbm{1}(\lambda_i = \lambda_j)\big) \]. It is worth noting that the existence of joint distribution $M_{\eta,D}(\lambda_{1},\ldots,\lambda_{n})$ is guaranteed by the Hammersley–-Clifford theorem \citep{hammersley1971markov}. The definition of neighborhood $\partial (i)$ is subject to the nature of the data and the modeler's choice. Common choices include the \emph{rook} contiguity (i.e., the regions which share a border of some length with region $i$), and the \emph{queen} contiguity (i.e., the regions which share a border of some length or even a point-length border with region $i$). The smoothing parameter $\eta$ controls the extent of spatial homogeneity, with larger values dictating larger extent of spatial homogeneity. The MRF-DPM-NHPP model~\eqref{eq:MRF_DPM_NHPP} reduces to the CRP-NHPP model~\eqref{eq:CRP_NHPP} when $\eta=0$. \section{Bayesian Inference}\label{sec:bayesInf} In this section, we present an efficient MCMC sampling algorithm for our proposed method, post MCMC inference, and model selection criteria. \subsection{A Collapsed Gibbs Sampler} We introduce latent indicator variables $\bm{Z} = (Z_1, \ldots, Z_n)$ and denote the parameters in~\eqref{eq:MRF_DPM_NHPP} as $\bm{\Theta} = \{\bm{\lambda}, \bm{Z}\}$. The posterior density of $\bm{\Theta}$ is \[ \pi(\bm{\Theta} | \textbf{S}) \propto L(\bm{\Theta} | \mathbf{S}) \pi(\bm{\Theta}) \], where $\pi(\bm{\Theta})$ is the prior density of $\bm{\Theta}$, and the likelihood $L(\bm{\Theta} | \mathbf{S})$ takes the form of~\eqref{eq:NHPP_lik}. We first derive the full conditional distribution for each parameter as follows. The full conditional probability that sub-region $A_i$ belongs to an existing component $c$, i.e., $\exists j \ne i,\, Z_j = c$, is \begin{align}\label{eq:post_zexist_MRF_DPM_NHPP} \begin{split} \Pr(Z_i = c \mid \mathbf{S}, \bm{Z}_{-i}, \bm{\lambda}) &\propto \frac{n_{c}(\bm{Z}_{-i}) \exp \big(\eta \sum_{j \in \partial(i)}d_{ij} \mathbbm{1}(Z_j = c)\big)} {n-1+\alpha} \frac{(\lambda_{c} \mu(A_{i}))^{N(A_{i})}} {\exp(\lambda_{c}\mu(A_i))}. \end{split} \end{align} The full conditional probability that $A_i$ belongs to a new component, i.e., $\forall j\ne i,\, Z_j \ne c$, is \begin{align}\label{eq:post_znew_MRF_DPM_NHPP} \begin{split} &\phantom{ =\,\, } \Pr(Z_i = c\mid \mathbf{S}, \bm{Z}_{-i}, \bm{\lambda}) \\ & \propto \frac{\alpha}{n-1+\alpha} \int \frac{ (\lambda_{c} \mu(A_{i}) )^{ N(A_{i}))}} {\exp\left(\lambda_{c}\mu(A_i)\right)} \frac{b^a}{\Gamma(a)}\lambda_{c}^{a-1} e^{-b\lambda_{c}} \mathrm{d} \lambda_{c}\\ &= \frac{\alpha}{n-1+\alpha} \frac{b^a}{\Gamma(a)} \mu(A_{i})^{N(A_i)} \int \lambda_{c}^{N(A_{i})+a-1}e^{-(b+\mu(A_i))\lambda_{c}} \mathrm{d} \lambda_{c}\\ &= \frac{\alpha b^a \Gamma(N(A_{i})+a) \mu(A_{i})^{N(A_i)}} {(n-1+\alpha) (b+\mu(A_i))^{N(A_{i})+a} \Gamma(a)}. \end{split} \end{align} Combining~\eqref{eq:post_zexist_MRF_DPM_NHPP} and~\eqref{eq:post_znew_MRF_DPM_NHPP} gives the full conditional distribution of $Z_i$ in the following Proposition. \begin{proposition} \label{thm:z_MRF_DPM_NHPP} Under the model and prior specification~\eqref{eq:MRF_DPM_NHPP}, the full conditional distribution of $Z_i$, $i = 1, \ldots, n$, is \begin{equation} \begin{split} &\phantom{ =\,\, }\Pr(Z_i = c\mid \mathbf{S}, \bm{Z}_{-i},\bm{\lambda}, \bm{\beta})\\ &\propto \begin{cases} \displaystyle{ \frac{n_{c}(\bm{Z}_{-i}) \exp \big(\eta \sum_{j \in \partial(i)}d_{ij} \mathbbm{1}(Z_j = c)\big) (\lambda_{c} \mu(A_{i}))^{N(A_{i})} } {\exp(\lambda_{c}\mu(A_i))} } & \exists j \ne i, \, Z_j = c \, \mbox{(existing label)},\\ \displaystyle{ \frac{\alpha b^a \Gamma(N(A_{i})+a) \mu(A_{i})^{N(A_i)}} {(b+\mu(A_i))^{N(A_{i})+a} \Gamma(a)} } & \forall j \ne i, \, Z_j \ne c \, \mbox{(new label)}, \end{cases} \end{split} \label{eq:post_z_MRF_DPM_NHPP} \end{equation} where $\bm{Z}_{-i}$ is $\bm{Z}$ with $z_i$ removed, and $\mu(A_i)$ is the area of region $A_i$. \end{proposition} For the full conditional distribution of $\lambda_{k}$, only data points in the $k$th component should be considered for simplicity. The full conditional density of $\lambda_{k}$, $k = 1, \ldots, K$, is \begin{align}\label{eq:post_lambda_MRF_DPM_NHPP} \begin{split} q(\lambda_{k} \mid \mathbf{S}, \bm{Z}, \bm{\lambda}_{-k}) &\propto \frac{\prod_{\ell:\bm{s}_\ell \in A_j, Z_j = k}\lambda(\bm{s}_\ell)} {\exp(\int_{\bigcup_{j:Z_j = k}A_j}\lambda(\bm{s}) \mathrm{d} \bm{s})} \lambda_{k}^{a-1}\exp\left(-b\lambda_{k}\right)\\ &= \frac{\prod_{\ell:\bm{s}_\ell \in A_j, Z_j = k} \lambda_{k}} {\exp\left(\int_{\bigcup_{j:Z_j = k}A_j} \lambda_{k} \mathrm{d} \bm{s}\right)} \lambda_{k}^{a-1}\exp\left(-b\lambda_{k}\right)\\ &\propto \lambda_{k}^{N_k+a-1} \exp\left(-\left(b + \sum_{j:Z_j = k} \mu(A_j)\right)\lambda_{k}\right), \end{split} \end{align} which is the kernel of $\mbox{Gamma}\big(N_k+a, b+\sum_{j:Z_j = k}\mu(A_j) \big)$, where $N_k = \sum_{\ell: \bm{s}_{\ell} \in A_{j}, Z_{j} = k} 1$ is the number of data points in the sub-regions belonging to the $k$th component. Algorithm~\ref{alg:MRF-DPM-NHPP} summarizes the steps of a Gibbs sampling algorithm using the full conditional distributions from~\eqref{eq:post_z_MRF_DPM_NHPP}--\eqref{eq:post_lambda_MRF_DPM_NHPP}. \begin{algorithm}[tbp] \caption{Collapsed Gibbs sampler for MRF-DPM-NHPP.} \label{alg:MRF-DPM-NHPP} \begin{algorithmic}[1] \INPUT \hspace{0pt}\newline \indent Data: point locations $\bm{s}_{i}$, $i = 1, \ldots, N$; sub-regions and their neighbors $\{A_i, \partial (i): i = 1, \ldots, n\}$. \newline \indent Prior hyperparameters : $a, b, \alpha$. \newline \indent Tuning parameter: $\eta$. \newline \indent Burn-in MCMC sample size: $B$, post-burn-in MCMC sample size: $L$. \newline \indent Initial values: $K$, $Z_1, \ldots, Z_n$, $\bm{\lambda} = (\lambda_1, \ldots, \lambda_{K})$, iter = 1. \newline \WHILE{iter $\leqslant B+L$} \STATE Update $\lambda_{k} | \cdot$, $k=1,\ldots,K$ as $$ \lambda_{k} | \cdot \sim \text{Gamma}\left(N_k+a, b+\sum_{j:Z_j = k}\mu(A_j) \right) $$ \indent where $N_k = \sum_{\ell: \bm{s}_{\ell} \in A_{j}, Z_{j} = k} 1$ is the number of points belonging to the $k$th component. \STATE Update $Z_i | \cdot$, $i=1,\ldots,n$ following Proposition~\ref{thm:z_MRF_DPM_NHPP}. \STATE iter = iter + 1. \ENDWHILE \newline \OUTPUT Posterior samples $Z_{1}^{(l)},\ldots,Z_{n}^{(l)}$, $\bm{\lambda}^{(l)}$, $l=B+1,\ldots,B+L$. \end{algorithmic} \end{algorithm} Convergence check for the auxillary variables $(Z_1, \ldots, Z_n)$ can be done with the help of the Rand Index (RI) \citep{rand1971objective}. The auxillary variables themselves are nominal labels which cannot be compared from iteration to iteration. The RI is the proportion of concordant pairs between two clustering results with value of $1$ indicating the two results are exactly the same. The trajectory of the RI for successive MCMC iterations provides a visual check for convergence. Further, RI values closer to~1 indicate good agreement in the clustering in the MCMC samples. \subsection{Post MCMC Inference}\label{sec:post_mcmc} We carry out posterior inference on the group memberships using Dahl's method \citep{dahl2006model}, which proceeds as follows. \begin{enumerate} \item Define membership matrices $\mathcal{H}^{(l)} =(\mathcal{H}^{(l)}(i,j))_{i,j \in \left\{1,\ldots,n\right\} } = (\mathbbm{1}(Z_{i}^{(l)} = Z_{j}^{(l)}))_{n \times n}$, where $l = 1, \ldots, L$ indexes the number of retained MCMC draws after burn-in, and $\mathbbm{1}(\cdot)$ is the indicator function. \item Calculate the average membership matrix $\bar{\mathcal{H}} = \frac{1}{L} \sum_{l=1}^{L} \mathcal{H}^{(l)}$ where the summation is element-wise. \item Identify the most \emph{representative} posterior sample as the one that is closest to $\bar{\mathcal{H}}$ with respect to the element-wise Euclidean distance $\sum_{i=1}^{n} \sum_{j=1}^{n} (\mathcal{H}^{(l)}(i,j) - \bar{\mathcal{H}}(i,j))^{2}$ among the retained $l = 1,\ldots,L$ posterior samples. \end{enumerate} Therefore, the posterior estimates of cluster memberships $Z_1,\ldots,Z_n$ and model parameters $\bm{\Theta}$ can be based on the draws identified by Dahl's method. \subsection{Selection of Smoothing Parameter}\label{sec:eta_selection} We recast the choice of smoothing parameter $\eta \geqslant 0$ as a model selection problem. In particular, we consider the deviance information criterion (DIC; \citet{spiegelhalter2002bayesian}), logarithm of the Pseudo-marginal likelihood LPML; \citet{gelfand1994bayesian}) and Bayesian information criterion (BIC; \citet{schwarz1978estimating}) as candidates. The DIC for spatial point process can be derived from the standard DIC in a straightforward manner as \begin{equation*} \begin{split} \mbox{Dev}(\bm{\Theta}) &= -2 \left(\sum_{i = 1}^N \log\lambda(\bm{s}_i) - \int_{\mathcal{B}}\lambda(\bm{s})\mathrm{d} \bm{s}\right),\\ \mbox{DIC} &= 2\bar{\mbox{Dev}}(\bm{\Theta}) - \mbox{Dev}(\hat{\bm{\Theta}}), \end{split} \end{equation*} where $\bar{\mbox{Dev}}(\bm{\Theta})$ is the average deviance evaluated using each posterior sample of $\Theta$, and $\mbox{Dev}(\hat{\bm{\Theta}})$ is the deviance calculated using the point estimation of parameter using Dahl's method. The LPML for spatial point process can be approximated using the MCMC samples \citep{hu2019new} \begin{equation*} \begin{split} \widehat{\mbox{LPML}} &= \sum_{i = 1}^N \log\tilde{\lambda}(\bm{s}_i) - \int_{\mathcal{B}} \bar{\lambda}(\bm{s})\mathrm{d} \bm{s},\\ \tilde{\lambda}(\bm{s})_i &= \left(\frac{1}{M} \sum_{t = 1}^M \lambda(\bm{s}_i\mid \bm{\Theta}_t)^{-1}\right)^{-1},\\ \bar{\lambda}(\bm{s}) &= \frac{1}{M}\sum_{t = 1}^M \lambda(\bm{s}\mid \bm{\Theta}_t), \end{split} \end{equation*} where $\bm{\Theta}_t$ denotes the $t$-th posterior sample of parameters with a total length of $M$. The BIC is derived naturally from its general definition \begin{equation*} \begin{split} \mbox{BIC}(\bm{\Theta}) &= -2 \log L(\bm{\Theta}) + \hat{K} \log N,\\ \log L(\bm{\Theta}) & = \sum_{i = 1}^N \log\lambda(\bm{s}_i) - \int_{\mathcal{B}}\lambda(\bm{s})\mathrm{d} \bm{s}, \end{split} \end{equation*} where $\hat{K}$ denotes the estimated number of components of the piecewise constant intensity function. \section{Simulation Studies}\label{sec:simu} In this section, we report simulation studies to examine the performance of the MRF-DPM-NHPP model and the proposed Gibbs sampling algorithm. In each setting, we compare the results to that of MFM-NHPP To to show that the MRF-DPM-NHPP model indeed leads to improvements. \subsection{Design}\label{sec:simu_setup} Consider a study region $\mathcal{B} = [0, 20] \times [0, 20]$ partitioned into $n = 400$ squares of unit area, $\{A_i\}_{i = 1}^n$. The data generating model was set to be $\mbox{NHPP}(\lambda(\bm{s}))$ with a piecewise constant intensity $\lambda(\bm{s})$ over~$\mathcal{B}$. There settings were considered for $\lambda(\bm{s})$; see Table~\ref{tb:simulation_settings}. The ``ground-truth'' intensity surfaces of the three settings are displayed in the leftmost column of Figure~\ref{fig:est_lambda_MRF_all}. The first two settings with the different numbers of clusters are similar with the simulation setups in \citet{geng2019bayesian}. The third setting contains both spatially contiguous and discontinuous clusters. The point patterns were generated using the \texttt{rpoispp()} function from package \texttt{spatstat} \citep{baddeley2005spatstat}. For each setting, we generated 100 replicates. \begin{table}[tbp] \centering \caption{Simulation settings for the piecewise constant intensity function.} \label{tb:simulation_settings} \begin{tabular}{lccc} \toprule & $\bm{\lambda}$ & Number of components in $\bm{\lambda}$ & Number of grid boxes \\ \midrule Setting 1 & $(0.2, 4, 12)$ & 3 & $(90, 211, 99)$ \\ Setting 2 & $(0.2, 1, 4, 8, 16)$ & 5 & $(80,80,80,80,80)$ \\ Setting 3 & $(0.2, 4, 10, 20)$ & 4 & $(90, 145, 66, 99)$\\ \bottomrule \end{tabular} \end{table} The prior distributions were specified as in~\eqref{eq:MRF_DPM_NHPP}, with hyperparameters $a = b = \alpha = 1$. The smoothing parameter $\eta \geqslant 0$ took values on an equally-spaced grid $\eta = \left\{0,0.5,\ldots,7.5,8 \right\}$, of which the optimal value is chosen via the model selection criteria introduced in Section~\ref{sec:eta_selection}. The neighboring structure was defined based on rook contiguity, and we treat all neighbors equally by letting $d_{ij} = 1, \forall j \in \partial i$. Each MCMC chain was run for a total of $4000$ iterations with random starting values, where the first $2000$ draws were discarded as burn-in. The remaining $2000$ draws were thinned by~10 and stored for posterior inference. We used Dahl's method \citep{dahl2006model} to identify the most representative draw from the retained posterior draws as the posterior point estimate. \subsection{Results}\label{sec:simu_results} We evaluate the results of simulation studies on the following aspects, (i) probability of choosing the correct number of clusters, (ii) clustering accuracy quantified by the RI, and (iii) estimation accuracy of the intensity surface. Table~\ref{tb:Khat_meanRI_simulations} (left block) shows the proportion of times the true number of components is identified under different model selection criteria for each simulation setting. Obviously, $\eta=0$ never recovered the true number of clusters, suggesting that taking spatial contiguity information into account is crucial. For MRF-DPM-NHPP, BIC appears to be better than DIC and LPML as the BIC-selected \emph{optimal} $\eta$ recovered the true number of clusters considerably more frequently ($>70\%$). Although MFM-NHPP seems to be very competitive in terms of identifying the true number of components under setting~1, MRF-DPM-NHPP with smoothing parameter $\eta$ selected by BIC offers substantially better performance under setting~2 and setting~3. A further investigation revealed that setting $\eta = 0$ always produced overly large numbers of redundant clusters, while DIC and LPML failed more gracefully with wrong numbers of clusters that often fall into the approximate range (A histogram of~$\hat{K}$ is available in the Supplementary Material). \begin{table}[tbp] \centering \caption{Proportion of times the true number of cluster is identified, and average RI across 100 replicates for each simulation setting, under MFM-NHPP, and MRF-DPM-NHPP with $\eta=0$, optimal $\eta$ selected by BIC, DIC and LPML.} \label{tb:Khat_meanRI_simulations} \setlength\tabcolsep{5pt} \begin{tabular}{l ccccc ccccc} \toprule & \multicolumn{5}{c}{Accuracy of $\hat{K}$} & \multicolumn{5}{c}{Average RI} \\ \cmidrule(lr){2-6}\cmidrule(lr){7-11} & \multicolumn{4}{c}{MRF-DPM-NHPP} & \multicolumn{1}{c}{MFM-NHPP} & \multicolumn{4}{c}{MRF-DPM-NHPP} & \multicolumn{1}{c}{MFM-NHPP} \\ \cmidrule(lr){2-5}\cmidrule(lr){7-10} & $\eta=0$ & BIC & DIC & LPML & & $\eta=0$ & BIC & DIC & LPML & \\ \midrule Setting 1 & 0.00 & 0.75 & 0.21 & 0.22 & $\bm{0.95}$ & 0.619 & $\bm{0.974}$ & 0.890 & 0.891 & 0.905 \\ Setting 2 & 0.00 & $\bm{0.82}$ & 0.61 & 0.63 & 0.20 & 0.803 & $\bm{0.991}$ & 0.982 & 0.982 & 0.775 \\ Setting 3 & 0.00 & $\bm{0.71}$ & 0.10 & 0.11 & 0.56 & 0.739 & $\bm{0.992}$ & 0.939 & 0.942 & 0.870 \\ \bottomrule \end{tabular} \end{table} To assess the clustering performance, we examine the average RI over the $100$ replicates. Because the ``ground-truth'' class labels are known in the simulation studies, the RIs were calculated by comparing the MCMC iterations with the truth as a measure of clustering accuracy. As shown in Table~\ref{tb:Khat_meanRI_simulations} (right block), MRF-DPM-NHPP with smoothing parameter $\eta$ selected by BIC yields the highest clustering accuracy. Although Despite being more capable of identifying the true number of clusters, the clustering accuracy of MFM-NHPP is worse than that of MRF-DPM-NHPP with BIC under setting 1, which suggests that MFM-NHPP might happen to get the number of clusters right by allocating the regions into wrong clusters. For the remainder of this paper, we focus on the results that correspond to optimal $\eta$ selected by BIC. \begin{figure}[tbp] \centering \includegraphics[trim={4.5cm 0cm 4.2cm 0cm}, clip=true, width=\textwidth]{plots/est_lambda_MRF_all.pdf} \caption{Simulation configurations for intensity surfaces, with fitted intensity surfaces. Element-wise median and quantiles are calculated out of 100 replicates.} \label{fig:est_lambda_MRF_all} \end{figure} \begin{figure}[tbp] \centering \includegraphics[trim={3.5cm 0cm 3.2cm 0cm}, clip=true, width=\textwidth]{plots/est_lambda_relative_bias.pdf} \caption{Absolute value of relative bias of element-wise posterior mean estimates for intensity surfaces.} \label{fig:est_lambda_relative_bias} \end{figure} We next summarize accuracy in estimating the intensity surfaces. Figure~\ref{fig:est_lambda_MRF_all} displays the averages of the median, 2.5th percentile, and 97.5th percentile of the estimated intensity surface obtained with the optimal~$\eta$ selected by BIC from the 100 replicates, in comparison with the true surfaces, for the three settings. The median surface agrees with true surface well in all three settings. The average surfaces of the 2.5th and 97.5th percentiles of the 100 replicates have higher uncertainties occasionally at the boundaries where the true intensities jump, but in general are not far from the true surfaces. Figure~\ref{fig:est_lambda_relative_bias} shows the absolute value of relative bias of element-wise posterior mean estimates under the MFM-DPM-NHPP model and the MFM-NHPP model. The proposed method leads to substantially smaller bias than the competing method, especially for sub-regions with low true underlying intensity values and/or sub-regions at the boundaries. The advantage of the proposed method comes from leveraging the spatial contiguity in the presence of spatial homogeneity. The overlaid traceplots of RI (available in Supplementary Material) indicate that the chains converge very fast, and stablize after $2000$ iterations, regardless of the settings. This observation justifies our choice of the MCMC setting. In summary, the simulation studies confirm the advantages of the MRF-DPM-NHPP model and the validity of the proposed Gibbs sampling algorithm. The results also suggest that BIC is better than DIC and LPML in selecting the smoothing parameter $\eta$ in the studied settings. Compared to the benchmark MFM-NHPP model, the MRF-DPM-NHPP model is superior in clustering accuracy and has an advantage on identifying the true number of components under more complex settings. \section{Professional Basketball Data Analysis}\label{sec:app} We applied the MRF-DPM-NHPP model to study the shot data for NBA players in the 2017-2018 NBA regular season described in Section~\ref{sec:data}. In particular, we focus on $20$ all-star level players that are representative of their positions (Table ~\ref{tb:player_info}). The study region is a rectangle covering the first 75\% of the half court ($50 \ \text{ft} \times 35 \ \text{ft}$) as the shots made outside this region are often not part of the regular tactics. This rectangle was divided into $50 \times 35 = 1750$ equally-sized grid boxes of $1\text{ft} \times 1\text{ft}$. For each player, we run an MCMC with $\eta \in \{0,0.5,\ldots,6.5,7\}$ for $4000$ iterations, where the first $2000$ were discarded as burn-in and the remainder was thinned by~10. \begin{table}[tbp] \centering \caption{Basic information (name and the preferred position) of players and the number of clusters given by MRF-DPM-NHPP with the smoothing parameter selected by BIC, and by MFM-NHPP. Player positions: point guard (PG), shooting guard (SG), small forward (SF), power forward (PF), center (C).} \label{tb:player_info} \begin{tabular}{c c rr r} \toprule \multicolumn{1}{c}{} & \multicolumn{1}{c}{} & \multicolumn{2}{r}{MRF-DPM-NHPP} & \multicolumn{1}{r}{MFM-NHPP} \\ \cmidrule(lr){3-4} Player & Position & $\hat{K}_{\text{BIC}}$ & $\hat{\eta}_{\text{BIC}}$ & $\hat{K}$ \\ \midrule Joel Embiid & C & 12 & 2.5 & 9 \\ Dwight Howard & C & 6 & 2.5 & 11 \\ DeAndre Jordan & C & 6 & 4.0 & 4 \\ Karl-Anthony Towns & C & 8 & 2.5 & 9 \\ \hline LaMarcus Aldridge & PF & 7 & 2.5 & 17 \\ Giannis Antetokounmpo & PF & 6 & 3.0 & 10 \\ Blake Griffin & PF & 8 & 2.5 & 10 \\ Kristaps Porziņģis & PF & 5 & 2.5 & 9 \\ \hline Stephen Curry & PG & 5 & 3.0 & 3 \\ Kyrie Irving & PG & 5 & 3.0 & 9 \\ Damian Lillard & PG & 6 & 3.0 & 6 \\ Chris Paul & PG & 8 & 2.5 & 5 \\ \hline Jimmy Butler & SF & 5 & 3.0 & 9 \\ Kevin Durant & SF & 9 & 3.0 & 13 \\ Paul George & SF & 6 & 3.5 & 8 \\ LeBron James & SF & 6 & 3.0 & 8 \\ \hline DeMar DeRozan & SG & 8 & 3.0 & 10 \\ James Harden & SG & 8 & 3.0 & 11 \\ Klay Thompson & SG & 6 & 5.0 & 11 \\ Russell Westbrook & SG & 5 & 3.5 & 10 \\ \bottomrule \end{tabular} \end{table} Table~\ref{tb:player_info} summarizes the optimal $\eta$ selected by BIC and the resulting number of clusters. None of the selected $\hat{\eta}$ lies on the boundary, which assures the validity of candidate values of $\eta$. For comparison, the number of clusters from the MFM-NHPP model under the same MCMC setting is also included, and we note that MFM-NHPP leads to higher numbers of clusters for most of the players than that of MFM-DPM-NHPP. \begin{figure}[htp] \centering \includegraphics[width=\textwidth]{plots/real_data_mrf_5by4.pdf} \caption{Estimated shooting intensity surfaces of selected players based on MRF-DPM-NHPP.} \label{fig:real_data_MRF_results} \end{figure} \begin{figure}[htp] \centering \includegraphics[width=\textwidth]{plots/real_data_mfm_5by4.pdf} \caption{Estimated shooting intensity surfaces of selected players based on MFM-NHPP.} \label{fig:real_data_MFM_results} \end{figure} Figure~\ref{fig:real_data_MRF_results} and Figure~\ref{fig:real_data_MFM_results} show the estimated shooting intensity surfaces of selected players under MRF-DPM-NHPP and MFM-NHPP, respectively. Compared to the results of MFM-NHPP, it is clear that the MRF-DPM-NHPP model is capable of capturing distant regions that share similar shooting intensities while preserving the spatial contiguity, which greatly facilitates the interpretability. Taking Paul George as an example, the estimated shooting intensity surface yielded by MFM-NHPP appears to be too scattered to highlight his preferred shooting regions; the results from the MRF-DPM-NHPP model, however, shows much clearer pattern. More interesting observations are seen from the estimated shooting intensity surfaces, and we summarize these observations by the preferred positions of selected players. Among those players with preferred position as center, DeAndre Jordan and Dwight Howard rarely make shots outside the low post, while the latter seems to have made more shots from the regions between short corner and the restricted area. On the contrary, Joel Embiid and Karl-Anthony Towns are more versatile as attackers in terms of their shot locations --- Joel Embiid can attack from low post, high post, top of the key as well as the \emph{point} (i.e., right outside the middle of the arc); Karl-Anthony Towns' shots are mainly initiated either from the low block or outside the arc (right corner and from point to the wing). The selected power-forward (PF) players show fairly different shooting styles. The shot locations of Kristaps Porziņģis are very similar to those of Joel Embiid, and Kristaps Porziņģis seems to be less confined to shooting from low post regions compared to Joel Embiid. Both Giannis Antetokounmpo and LaMarcus Aldridge all make substantial amounts of mid-range shots and seldomly make three-point shots, but it is worth highlighting their differences as Giannis Antetokounmpo seems to be more inclined to make shots from the right while LaMarcus Aldridge's mid-range shots are more spread. Interestingly, the former champion of slam dunk contest, Blake Griffin has higher intensity of shooting outside the arc (in particular, from the right corner, and the regions between the wing and the point). The selected small-forward (SF) players show versatile shot locations but they differ substantially in their three-point shot locations and the intensity of making shots around restricted area. Speaking about the three-point shots, Kevin Durant prefers shooting around left and right wings, both Paul George and Jimmy Butler prefer shooting around the right corner but the former is clearly more comfortable with launching long-range shots, while LeBron James prefers shooting around the left wing. Compared to the other two SF players, LeBron James have higher intensity of making shots around the restricted area. The difference in the shooting patterns among backcourt (PG and SG) players is substantial. James Harden, Stephen Curry, Damian Lillard and Kyrie Irving all launch considerable amounts of shots within the restricted area and outside the arc, while James Harden makes shots in almost all regions from right wing to left wing right outside the arc, Stephen Curry and Kyrie Irving make more shots around left wing rather than right wing, Damian Lillard makes more shots around right wing rather than left wing. Compared to the former three players, Chris Paul, Russell Westbrook, DeMar DeRozan and Klay Thompson make more mid-range shots, but from different angles. Specifically, Russell Westbrook makes shots almost everywhere in the middle, Chris Paul's shots are mainly located in a sector-shaped area in the middle, Demar DeRozan's shots are more spread to the corners, while Klay Thompson's shots are almost evenly distributed across the entire study region. Admittedly, the above analysis is far from being exhaustive. We believe, however, that basketball professionals may leverage the proposed method to better understand the shooting patterns of the players and, therefore, design highly targeted offense and defense tactics. \section{Discussion}\label{sec:disc} The NBA shot location data appear to be modeled by the spatially constrained nonparametric Bayesian model, MRF-DPM-NHPP, reasonably well incorporating local spatial homogeneity. Building upon a combination of Dirichlet process and Markov random field, the proposed method relies on a smoothing parameter $\eta$ to effectively control the relative contribution of local spatial homogeneity in estimating the globally heterogeneous intensity surface. Statistical inferences are facilitated by a Gibbs sampling algorithm. Selection of the smoothing parameter $\eta$ is casted as a model selection problem which is handled using standard model selection criteria. Simulation studies show the accuracy of the proposed algorithm and the competitiveness of the model relative to the benchmark MFM-NHPP model \citep{geng2019bayesian} under several settings in which spatial contiguity is present in the intensity surface. In application to the shot locations of NBA players, the model effectively captures spatial contiguity in shooting intensity surfaces, and provide important insights on their shooting patterns which cannot be obtained from the MFM-NHPP model. There are several possible directions for further investigation. More sophisticated definition of neighborhood (e.g., higher-order neighborhood, incorporating covariates) than the rook contiguity, which was used in this study and found to be sufficient here, may be useful for more complex data structure. BIC was found to perform well for the purpose of selecting smoothing parameter $\eta$, but it is of substantial interest to develop a fully automated procedure that enables the smoothing parameter to be inferred along with the intensity values and the group membership indicators through a single MCMC run. The NBA players shot pattern modeling admits a natural partition for the region of interest. In general settings, however, it is worth investigating how to effectively partition the space such that the piecewise constant assumption is more plausible. As the number of parameters is proportional to the number of grid boxes, developments of more scalable inference algorithms (e.g., variational inference) are critical for finer grid. Finally, building a group learning model with pooled data from multiple players merits future research from both methodological and applied perspectives. \section*{Acknowledgements} The authors would like to thank Dr.~Yishu Xue for sharing the R code of data visualization. \bibliographystyle{abbrvnat}
{ "attr-fineweb-edu": 1.822266, "attr-cc_en_topic": 0, "domain": "arxiv" }
BkiUfqw5qhLB0AM-fXE3
\section{Introduction} In the field of tournament timetabling, the traveling tournament problem (TTP) is a well-known benchmark problem established by Easton, Nemhauser, and Trick~\cite{Easton2001}. The present paper considers the unconstrained traveling tournament problem (UTTP), which is a variant of the TTP\@. In the following, some terminology and the TTP are introduced. The UTTP is then defined at the end of this section. Given a set~$T$ of $n$ teams, where $n \geq 4$ and is even, a game is specified by an ordered pair of teams. Each team in~$T$ has its home venue. A double round-robin tournament is a set of games in which every team plays every other team once at its home venue and once as an away game (i.e., a game held at the home venue of the opponent). Consequently, $2(n-1)$ slots are necessary to complete a double round-robin tournament. Each team stays at its home venue before a tournament and then travels to play its games at the chosen venues. After a tournament, each team returns to its home venue if the last game is played as an away game. When a team plays two consecutive away games, the team goes directly from the venue of the first opponent to the venue of another opponent without returning to its home venue. For any pair of teams $i, j \in T$, $d_{ij} \geq 0$ denotes the distance between the home venues of $i$ and~$j$. Throughout the present paper, we assume that triangle inequality ($d_{ij} + d_{jk} \geq d_{ik}$), symmetry ($d_{ij} = d_{ji}$), and $d_{ii} = 0$ hold for any teams~$i, j, k \in T$. Denote the distance matrix $(d_{ij})$ by $D$. Given an integer parameter $u \geq 2$, the traveling tournament problem~\cite{Easton2001} is defined as follows. \medskip \noindent {\bf Traveling Tournament Problem (TTP$(u)$)}\\ {\bf Input:\/} A set of teams $T$ and a distance matrix~$D=(d_{ij})$. \\ {\bf Output:\/} A double round-robin schedule of $n$~teams such that \noindent C1. No team plays more than $u$ consecutive away games, \noindent C2. No team plays more than $u$ consecutive home games, \noindent C3. Game $i$ at~$j$ immediately followed by game $j$ at~$i$ is prohibited, \noindent C4. The total distance traveled by the teams is minimized. \medskip \noindent Constraints C1 and~C2 are referred to as the {\em atmost\/} constraints, and Constraint~C3 is referred to as the {\em no-repeater\/} constraint. \medskip Various studies on the TTP have been conducted in recent years (see \cite{Kendall2010,Rasmussen2006-2,Trick2011} for detail), and most of these studies considered TTP(3)~\cite{Trick_web}, which was recently proved to be NP-hard by Thielen and Westphal~\cite{NPhard}. Almost all of the best upper bounds of TTP instances are obtained using metaheuristic algorithms. On the other hand, little research on approximation algorithms has been conducted for the TTP\@. Miyashiro, Matsui, and Imahori~\cite{MMI} proposed a $(2+O(1/n))$-approximation algorithm for TTP(3). Yamaguchi, Imahori, Miyashiro, and Matsui~\cite{YMIM} proposed an approximation algorithm for TTP$(u)$, where $3 \le u \ll n$. Westphal and Noparlik~\cite{Westphal2010} proposed\footnote{ Westphal and Noparlik's paper~\cite{Westphal2010} and the conference version of the present paper~\cite{IMM} appeared in the same conference (PATAT, 2010). } a 5.875-approximation algorithm for TTP$(u)$, where $3 \le u$. For TTP(3), the approximation ratio of~\cite{YMIM} is the best among them. In addition, Thielen and Westphal~\cite{Thielen2010} proposed a $(1.5+O(1/n))$-approximation algorithm for TTP(2). The TTP is a simplification of an actual sports scheduling problem. Some further simplified variants of the TTP have been studied~\cite{Trick_web}. The circular distance TTP and the constant distance TTP are the problems which have specific distance matrices. For the constant distance TTP, Fujiwara, Imahori, Matsui, and Miyashiro~\cite{Fujiwara2007} proposed approximation algorithms. The unconstrained traveling tournament problem (UTTP) is another variant of the TTP, in which Constraints C1 through~C3 are eliminated. In other words, the UTTP is equivalent to TTP($n-1$) without the no-repeater constraint. On some actual sports scheduling problems, the atmost constraints ($u=3$ in particular) and the no-repeater constraint are considered. However, these constraints are not necessarily imposed, and the UTTP is a suitable simplified model for some practical scheduling problems. Bhattacharyya~\cite{NPC} recently showed NP-hardness of the UTTP\@. Although the UTTP is simpler than the TTP, no approximation algorithm has yet been proposed for the UTTP\@. The method proposed in~\cite{YMIM} cannot be applied to the UTTP because the condition $u \ll n$ is necessary. The method in~\cite{MMI}, proposed for TTP(3), can be applied to the UTTP with a few modifications. However, this leads to a $((2/3)n+ O(1))$-approximation algorithm for the UTTP, which is not a constant approximation ratio with regard to~$n$. In the present paper, we propose a 2.75-approximation algorithm for the UTTP\@. In addition, the solution obtained by the algorithm meets both the no-repeater and mirrored constraints, which are sometimes required in practice. This property indicates that our algorithm also works for TTP($n-1$), which eliminates the atmost constraints but considers the no-repeater constraint. \section{Algorithm} In this section, we propose an approximation algorithm for the UTTP\@. A key concept of the algorithm is the use of the circle method and a shortest Hamilton cycle. The classical schedule obtained by the circle method satisfies the property that, for all teams but one, the orders of opponents are very similar to a mutual cyclic order. Roughly speaking, the proposed algorithm constructs a short Hamilton cycle passing all venues, and finds a permutation of teams such that the above cyclic order corresponds to the Hamilton cycle. Let $G=(V,E)$ be a complete undirected graph with the vertex set~$V$ and edge set $E$, where $|V|=n$. We assume that there exists a bijection between the vertex set $V$ and the set of teams $T$. We put the length of edge $\{v,v'\} \in E$, denoted by $d_{vv'}$, to the distance between the home venues of the corresponding teams $t,t' \in T$. First, we assign aliases $0,1,\ldots,n-1$ to teams in $T$ as follows. \begin{enumerate} \item For each $v \in V$, compute $\sum_{v' \in V \setminus\{v\}} d_{vv'}$. \item Let $v^*$ be a vertex that attains $\min_{\, v \in V} \sum_{v' \in V \setminus\{v\}} d_{vv'}$, and designate the team corresponding to $v^*$ as team $n-1$. \item Using Christofides' 1.5-approximation algorithm~\cite{Christofides} for the traveling salesman problem with triangle inequality and symmetry, construct a Hamilton cycle on the complete graph induced by $V \setminus \{v^*\}$. For the obtained cycle $(v_{0},v_{1},\ldots,v_{n-2})$, denote the corresponding teams by $(0,1,\ldots ,n-2)$. \end{enumerate} \noindent In the rest of this paper, we define that the set of teams $T=\{0,1,2,\ldots,n-1\}$ and the vertex set $V=\{v_{0}, v_{1},\ldots, v_{n-2}, v^*\}$. We identify the vertex $v_{n-1}$ with~$v_0$ (not $v^*$) and the vertex~$v_{-1}$ with $v_{n-2}$ (not $v^*$). Next, we construct a single round-robin schedule. In the following, a ``schedule without HA-assignment'' refers to a ``round-robin schedule without the concepts of home game, away game, and venue.'' Denote the set of $n-1$ slots by $S= \{ 0, 1, \ldots, n-2 \}$. A single round-robin schedule without HA-assignment is a matrix~$K$ of which $(t,s) \in T \times S$ element, say $K(t,s)$, denotes the opponent of team~$t$ in slot~$s$. Let $K^*$ be a matrix defined by \[ K^*(t,s)= \left\{ \begin{array}{ll} s-t \; (\mod n-1) & (t \neq n-1 \mbox{ and } s-t \neq t \ (\mod n-1)), \\ n-1 & (t \neq n-1 \mbox{ and } s-t = t \ (\mod n-1)), \\ s/2 & (t = n-1 \mbox{ and } s \mbox{ is even}), \\ (s+n-1)/2 & (t = n-1 \mbox{ and } s \mbox{ is odd}). \\ \end{array} \right. \] \begin{lemma}{\rm{\cite{YMIM}}} \label{lem1} The matrix $K^*$ is a single round-robin schedule without HA-assignment. In addition, $K^*$ is essentially equivalent to the classical schedule obtained by the circle method. \end{lemma} Then, by the mirroring procedure, we form $K^*$ into a double round-robin schedule without HA-assignment. More precisely, construct a matrix~$(K^* | K^*)$ whose rows are index by teams and columns are index by a sequence of slots $(0,1,\ldots,n-2,n-1,n,\ldots,2n-3)$. So as to complete a double round-robin schedule, ``home'' and ``away'' are assigned to games of $(K^* | K^*)$ as follows: \begin{itemize} \item for team $t \in \{0, 1, \ldots, n/2-1 \}$, let the games in slots $2t, 2t+1,\ldots, n+2t-2$ be home games, and let the other games be away games. \item for team $t \in \{n/2, n/2+1, \ldots, n-2 \}$, let the games in slots $2t-n+2, 2t-n+3, \ldots, 2t$ be away games, and let the other games be home games. \item for team $n-1$, let the games in slots $0,1,\ldots,n-2$ be away games, and let the other games be home games. \end{itemize} The obtained double round-robin schedule is denoted by $K^*_{\mathrm{DRR}}$. Figure~\ref{KDRR} shows the schedule $K^*_{\mathrm{DRR}}$ of 10 teams. \begin{figure}[htb] \begin{center} {\small \noindent \begin{tabular}{r|c@{ \,\,}c@{ \,\,}c@{ \,\,}c@{ \,\,}c@{ \,\,}c@{ \,\,}c@{ \,\,}c@{ \,\,}c@{ \,\,}c@{ \,\,}c@{ \,\,}c@{ \,\,}c@{ \,\,}c@{ \,\,}c@{ \,\,}c@{ \,\,}c@{ \,\,}c} slots \\ teams \,\,\,& 0 & 1 & 2 & 3 & 4 & 5 & 6 & 7 & 8 & 9 & 10&11 &12 &13 &14 &15 &16 &17 \\ \hline 0 & 9H& 1H& 2H& 3H& 4H& 5H& 6H& 7H& 8H& 9A& 1A& 2A& 3A& 4A& 5A& 6A& 7A& 8A \\ 1 & 8A& 0A& 9H& 2H& 3H& 4H& 5H& 6H& 7H& 8H& 0H& 9A& 2A& 3A& 4A& 5A& 6A& 7A \\ 2 & 7A& 8A& 0A& 1A& 9H& 3H& 4H& 5H& 6H& 7H& 8H& 0H& 1H& 9A& 3A& 4A& 5A& 6A \\ 3 & 6A& 7A& 8A& 0A& 1A& 2A& 9H& 4H& 5H& 6H& 7H& 8H& 0H& 1H& 2H& 9A& 4A& 5A \\ 4 & 5A& 6A& 7A& 8A& 0A& 1A& 2A& 3A& 9H& 5H& 6H& 7H& 8H& 0H& 1H& 2H& 3H& 9A \\ 5 & 4H& 9H& 6A& 7A& 8A& 0A& 1A& 2A& 3A& 4A& 9A& 6H& 7H& 8H& 0H& 1H& 2H& 3H \\ 6 & 3H& 4H& 5H& 9H& 7A& 8A& 0A& 1A& 2A& 3A& 4A& 5A& 9A& 7H& 8H& 0H& 1H& 2H \\ 7 & 2H& 3H& 4H& 5H& 6H& 9H& 8A& 0A& 1A& 2A& 3A& 4A& 5A& 6A& 9A& 8H& 0H& 1H \\ 8 & 1H& 2H& 3H& 4H& 5H& 6H& 7H& 9H& 0A& 1A& 2A& 3A& 4A& 5A& 6A& 7A& 9A& 0H \\ 9 & 0A& 5A& 1A& 6A& 2A& 7A& 3A& 8A& 4A& 0H& 5H& 1H& 6H& 2H& 7H& 3H& 8H& 4H \\ \end{tabular} } Each number corresponds to the opponent and away (home) game is denoted by A (H). \caption{The schedule $K^*_{\mathrm{DRR}}$ with 10 teams. } \label{KDRR} \end{center} \end{figure} \begin{lemma} \label{newlem} The double round-robin schedule $K^*_{\mathrm{DRR}}$ is feasible. {\rm \noindent {\bf Proof.\/} $(K^* | K^*)$ is a consistent double round-robin schedule without HA-assignment, which satisfies the mirrored constraint. We check the feasibility of HA-assignment to games. Teams $i$ and $j$ ($i < j < n-1$) have a game at slot~$i+j$. By the rule to assign home and away to games, team~$i$ plays a home game and team~$j$ plays an away game at slot~$i+j$. Teams $i$ and $j$ ($i < j = n-1$) have a game at slot~$2i$, and the rule assigns consistent home/away to the teams. Another game between teams~$i$ and $j$ is held at the opposite venue. \QED } \end{lemma} In addition, for each $m \in \{0,1,\ldots ,2n-3\}$, we construct a double round-robin schedule by rotating slots of $K^*_{\mathrm{DRR}}$ through $m$ cyclically. It means that games of $K^*_{\mathrm{DRR}}(m) \ (m \in \{0,1,\ldots ,2n-3\})$ at slot~$s$ are equal to games of $K^*_{\mathrm{DRR}}$ at slot~$s+m \ (\mod 2n-2)$. Obviously, all of the schedules $K^*_{\mathrm{DRR}}(m) \ (m \in \{0,1,\ldots ,2n-3\})$ meet both the no-repeater and mirrored constraints. Finally, output a best solution among~$K^*_{\mathrm{DRR}}(m)$ $ (m \in \{0,1,\ldots ,2n-3\})$. Here, we estimate the time complexity of the algorithm. Christofides' algorithm requires $O(n^3)$ time to construct a Hamilton cycle on the complete graph induced by $V \setminus \{v^*\}$. For the constructed Hamilton cycle, there are $2(n-1)$ possibilities to assign teams. For each assignment of teams, we consider $2n-2$ possibilities of $m \in \{0,1,\ldots ,2n-3\}$. Each double round-robin schedule can be evaluated in $O(n)$ time on average. Thus, the time complexity of the algorithm is bounded by $O(n^3)$. In the next section, we prove that the proposed algorithm guarantees an approximation ratio~2.75. \section{Approximation Ratio} In this section, we describe the proof of the approximation ratio of the proposed algorithm. Designate the length of a shortest Hamilton cycle on~$G$ as $\tau$. \begin{lemma}\label{lem2} The following propositions hold for $G$. {\rm (1)} The length of any edge is bounded by $\tau /2$. {\rm (2)} The length of any Hamilton cycle on $G$ is bounded by $n\tau /2$. {\rm (3)} $\displaystyle \sum_{v \in V} \sum_{v' \in V \setminus \{v\}} d_{vv'} \leq n^2 \tau /4$. {\rm (4)} $\displaystyle \sum_{v \in V \setminus \{v^*\}}d_{vv^*} \leq n \tau /4$. {\rm \noindent {\bf Proof.\/} (1) For the edges $\{i,j\}$ and~$\{j,i\}$, the sum of their lengths is at most the length of a shortest Hamilton cycle. Thus, the length of the edge~$\{i,j\}$ is bounded by $\tau /2$ with symmetry. \noindent (2) This immediately follows from Property~(1). \noindent (3) Given a shortest Hamilton cycle $H=(u_0,u_1,\ldots,u_{n-1})$ on $G$, let \[ h_{u_i, u_j} = \left\{ \begin{array}{ll} d_{u_i, u_{i+1}} + d_{u_{i+1}, u_{i+2}} + \cdots + d_{u_{j-1}, u_j} & (j - i \ (\mod n) \le n/2), \\ d_{u_i, u_{i-1}} + d_{u_{i-1}, u_{i-2}} + \cdots + d_{u_{j+1}, u_j} & (j - i \ (\mod n) > n/2). \\ \end{array} \right. \] Then, we have: \begin{eqnarray*} \sum_{v \in V} \sum_{v' \in V \setminus \{v\}} d_{vv'} &=& \sum_{i=0}^{n-1} \sum_{k=1}^{n-1} d_{u_i, u_{i+k \, ({\rm mod} \, n)}} \\ &\leq& \sum_{k=1}^{n-1} \sum_{i=0}^{n-1} h_{u_i, u_{i+k \, ({\rm mod} \, n)}} \\ &=& \sum_{k=1}^{n/2-1} \sum_{i=0}^{n-1} \left( d_{u_i, u_{i+1}} + d_{u_{i+1}, u_{i+2}} + \cdots + d_{u_{i+k-1}, u_{i+k}} \right) \\ & & + \sum_{k=n/2 + 1}^{n-1} \sum_{i=0}^{n-1} \left( d_{u_i, u_{i-1}} + d_{u_{i-1}, u_{i-2}} + \cdots + d_{u_{i-n+k+1}, u_{i-n+k}} \right)\\ & & + \sum_{i=0}^{n-1} \left( d_{u_i, u_{i+1}} + d_{u_{i+1}, u_{i+2}} + \cdots + d_{u_{i+n/2-1}, u_{i+n/2}} \right)\\ &=& 2\left( 1+2+\cdots+(\frac{n}{2}-1) \right)\tau + \frac{n\tau}{2} = \frac{n^2 \tau}{4}. \end{eqnarray*} \noindent (4) Since $v^*$ is a vertex that attains $\min_{\, v \in V} \sum_{v' \in V \setminus\{v\}} d_{vv'}$, the inequality obtained in (3) directly implies the desired one. \QED } \end{lemma} Now we discuss the average of the traveling distances of $K^*_{\mathrm{DRR}}(m) \ (m \in \{0,1,\ldots,2n-3\})$. The traveling distance of a schedule is subject to the following constraint, say the {\em athome\/} constraint: each team stays at its home venue before a tournament and returns to its home venue after a tournament. For simplicity of the analysis of the approximation ratio, we temporary replace the athome constraint with the following assumption. \medskip \noindent {\bf Assumption A.\@}\/ If a team plays away games at both the first and last slots, then the team moves from the venue of the last opponent to that of the first opponent, instead of the moves before the first slot and after the last slot. \medskip \noindent We discuss a traveling distance of each team under Assumption~A\@. Application of Assumption~A guarantees that a route of each team in $K^*_{\mathrm{DRR}}(m) \ (m \in \{0,1,\ldots,2n-3\})$ is a Hamilton cycle on~$G$ (see Figure~\ref{map}), and the traveling distance of~$K^*_{\mathrm{DRR}}(m)$ is invariant with respect to $m \in \{0,1,\ldots,2n-3\}$. Thus, we only need to consider $K^*_{\mathrm{DRR}}$. This assumption makes the analysis of the approximation ratio much easier. \begin{figure} \includegraphics[width=0.75\textwidth]{map.eps} \caption{Effect of Assumption A.} \label{map} \end{figure} Let the length of the cycle $(v_{0},v_{1},\ldots,v_{n-2})$ obtained by Christofides' method in the proposed algorithm be~$\tau'$. Note that $\tau' \leq (3/2) \tau$, where $\tau$ denotes the length of a shortest Hamilton cycle on~$G$. Analyzing the structure of $K^*_{\mathrm{DRR}}$ reveals the following lemma. \begin{lemma}\label{lem3} Under Assumption A, the traveling distance of team $t$ in~$K^*_{\mathrm{DRR}}$ is bounded by \[ \left\{ \begin{array}{ll} \tau' + d_{v_t, v^*} + d_{v^*,v_{t+1}} - d_{v_t, v_{t+1}} & \quad (t \in \{0, 1,\ldots, n/2-1 \}), \\ \tau' + d_{v_{t-1}, v^*} + d_{v^*,v_t} - d_{v_{t-1}, v_t} & \quad (t \in \{n/2, n/2+1, \ldots, n-2\}), \\ n\tau /2 & \quad (t = n-1). \end{array} \right. \] {\rm \noindent {\bf Proof.\/} When $t \in \{0, 1, 2,\ldots, n/2-1 \}$, team $t$ moves along a Hamilton cycle $(v_t,v^*,v_{t+1},\ldots,v_{n-2},v_0,v_1,v_2,\ldots,v_{t-1})$. Consequently, the length of the tour is $\tau' + d_{v_t, v^*} + d_{v^*,v_{t+1}} - d_{v_t, v_{t+1}}$. When $t \in \{n/2, n/2+1, \ldots, n-2\}$, a tour of team $t$ is a Hamilton cycle $(v_t,v_{t+1},\ldots,v_{n-2},v_0,v_1,v_2,\ldots,v_{t-1},v^*)$, and thus the length is $\tau' + d_{v_{t-1}, v^*} + d_{v^*,v_t} - d_{v_{t-1}, v_t}$. Since a tour of team $n-1$ is Hamiltonian, Lemma~\ref{lem2}(2) implies the desired result. \QED } \end{lemma} The above lemma implies an upper bound of the traveling distance of~$K^*_{\mathrm{DRR}}$. \begin{lemma}\label{A-Distance} Under Assumption~A, the traveling distance of~$K^*_{\mathrm{DRR}}$ is bounded by $(n-2)\tau' + 2\sum_{v \in V \setminus \{v^*\}}d_{vv^*}+ (3/2)\tau + n\tau/2$. {\rm \noindent {\bf Proof.\/} Consider the sum total of upper bounds obtained in Lemma~\ref{lem3} \[ (n-1)\tau' + L + n\tau/2 \] where \begin{eqnarray*} L &\define& \sum_{t \in \{0, 1,\ldots, n/2-1 \}} \left( d_{v_t, v^*} + d_{v^*,v_{t+1}} - d_{v_t, v_{t+1}} \right) \\ && + \sum_{t \in \{ n/2, n/2+1, \ldots, n-2 \}} \left( d_{v_{t-1}, v^*} + d_{v^*,v_t} - d_{v_{t-1}, v_t} \right). \end{eqnarray*} \noindent It is easy to see that \begin{eqnarray*} L &=& \left( \sum_{t \in \{0, 1,\ldots, n/2-1 \}} d_{v_t, v^*} \right) + \left( \sum_{t \in \{1, 2,\ldots, n/2 \}} d_{v_t, v^*} \right) \\ && - \left( \sum_{t \in \{0, 1,\ldots, n/2-1 \}} d_{v_t, v_{t+1}} \right) + \left( \sum_{t \in \{ n/2 -1, n/2, \ldots, n-3 \}} d_{v_t, v^*} \right) \\ && + \left( \sum_{t \in \{ n/2, n/2+1, \ldots, n-2 \}} d_{v_t, v^*} \right) - \left( \sum_{t \in \{ n/2-1, n/2, \ldots, n-3 \}} d_{v_t, v_{t+1}} \right) \\ &\leq& 2\sum_{v \in V \setminus \{v^*\}}d_{vv^*} - \sum_{t \in \{0, 1,\ldots, n-2 \}} d_{v_t, v_{t+1}} + d_{v_{n/2-1}, v^*} + d_{v_{n/2}, v^*} + d_{v_{n-2}, v_0} \\ &\leq& 2\sum_{v \in V \setminus \{v^*\}}d_{vv^*} - \tau' + (3/2)\tau \end{eqnarray*} where the last inequality follows from Lemma~\ref{lem2}(1). From the above, the lemma holds. \QED } \end{lemma} Here we drop Assumption~A and restore the athome constraint, and consider the increase of the traveling distance in the following lemma. \begin{lemma} For each team~$t$, let $\ell_{\rm A} (t)$ be the traveling distance of~$t$ in~$K^*_{\mathrm{DRR}}$ under Assumption~A\@. Then, with the athome constraint the average of the traveling distances of team~$t$ among $K^*_{\mathrm{DRR}}(m) \ (m \in \{0,1,\ldots,2n-3\})$ is bounded by $\ell_{\rm A} (t) + \sum_{v' \in V \setminus \{v\} }d_{vv'}/(n-1)$, where $v$ is the home venue of $t$. {\rm \noindent {\bf Proof.\/} For a choice $m \in \{0,1,\ldots,2n-3\}$, every team $t'$ different from $t$ plays away game with~$t$ at first slot just once. Thus, the average length of the moves of team~$t$ before the first slot is bounded by $\sum_{v' \in V \setminus \{v\}}d_{vv'}/(2n-2)$. Similarly, the average length of the moves of team~$t$ after the last slot is bounded by $\sum_{v' \in V \setminus \{v\}}d_{vv'}/(2n-2)$. Thus, the average of the traveling distances of team~$t$ is bounded by $\ell_{\rm A} (t) + \sum_{v' \in V \setminus \{v\}}d_{vv'}/(n-1)$. \QED } \end{lemma} Summarizing the above lemmas, we have the following theorem. \begin{theorem}\label{thB} The average of the total traveling distances of schedules~$K^*_{\mathrm{DRR}}(m)$ $(m \in \{0,1,\ldots,2n-3\})$ is bounded by \[ (n-2)\tau' + 2\sum_{v \in V \setminus \{v^* \}}d_{vv^*}+ (3/2)\tau + n\tau/2 + \sum_{v \in V} \sum_{v' \in V \setminus \{v\}} d_{vv'}/(n-1). \] \end{theorem} Lastly we show the approximation ratio of the proposed algorithm. \begin{theorem}\label{thm1} The proposed algorithm is a\/ $2.75$-approximation algorithm for the UTTP\@.\\ {\rm \noindent {\bf Proof.\/} Let $z^*$ be the average of the total traveling distances of schedules $K^*_{\mathrm{DRR}}(m)$ $(m \in \{0,1,\ldots , 2n-3\})$. From Theorem~\ref{thB} and Lemma~\ref{lem2}(3)(4), we have: \begin{eqnarray*} z^* &\leq& (n-2)\tau' + 2\sum_{v \in V \setminus \{v^*\}}d_{vv^*}+ (3/2)\tau + n\tau/2 + \sum_{v \in V} \sum_{v' \in V \setminus \{v\}} d_{vv'}/(n-1) \\ &\leq& (n-2)(3/2)\tau + 2 n\tau /4 + (3/2)\tau + n\tau /2 + (n^2 \tau /4)/(n-1) \\ &=& (3/2)n \tau -3\tau +(1/2) n\tau + (3/2)\tau + (1/2)n\tau + (1/4)n\tau + (1/4)n\tau/(n-1) \\ &=& (11/4) n\tau -(3/2)\tau + (1/4)n\tau/(n-1) \leq (11/4) n\tau. \end{eqnarray*} The proposed algorithm output a best of $K^*_{\mathrm{DRR}}(m)$ $(m \in \{0,1,\ldots , 2n-3\})$, and thus the traveling distance of the output is at most~$z^*$. Since $n\tau$ is a lower bound of the distance of any double round-robin schedule, this concludes the proof. \QED } \end{theorem} Let us consider a case that we have a shortest Hamilton cycle $H$ on $G$. In this situation, the following corollary holds. \begin{corollary}\label{thm3} If a shortest Hamilton cycle $H$ on $G$ is given, there exists a\/ $2.25$-approximation algorithm for the UTTP\@.\\ {\rm \noindent {\bf Proof.\/} We replace a cycle obtained by Christofides' method in the proposed algorithm with a cycle obtained from $H$ by skipping vertex $v^*$. Theorem~\ref{thB} implies that the average of total traveling distances of schedules, say $z^{**}$, obtained by the proposed algorithm is bounded by \begin{eqnarray*} z^{**} &\leq& (n-2)\tau + 2\sum_{v \in V \setminus \{v^*\}}d_{vv^*}+ (3/2)\tau + n\tau/2 + \sum_{v \in V} \sum_{v' \in V \setminus \{v \}} d_{vv'}/(n-1) \\ &\leq& n\tau -2\tau + 2 n\tau /4 + (3/2)\tau + n\tau /2 + (1/4)n\tau + (1/4)n\tau/(n-1) \\ &=& (9/4)n\tau -\tau /2 + (1/4)n\tau/(n-1) \leq (9/4)n\tau. \end{eqnarray*} Thus, the approximation ratio is bounded by 2.25 in this case. \QED } \end{corollary} \section{Computational Results} In this section, we describe the results of computational experiments using the proposed approximation algorithm. For the experiments, we took the distance matrices of NL and galaxy instances from the website~\cite{Trick_web}, because they are the most popular instances and one having the largest distance matrix (up to 40~teams), respectively. We ran the proposed algorithm for the UTTP version of these instances; to find a short Hamilton cycle, we use Concorde TSP solver~\cite{Concorde}. It took less than one second to obtain a shortest Hamilton cycle even for the largest case ($n=40$). To evaluate the quality of obtained solutions, we also tried to find optimal solutions of UTTP instances with integer programming. Computations using integer programming were performed on the following PC: Intel Xeon 3.33GHz$*2$, 24GB RAM, Windows~7 64bit, and Gurobi Optimizer 4.5.1~\cite{Gurobi} with 16 threads as an integer programming solver. For both NL and galaxy instances: for $n=4,6,8$ optimal solutions were obtained; for $n=10$, after 500,000 seconds of computations, branch-and-bound procedures were not terminated; for $n=12$ and larger instances, using integer programming it was difficult to find solutions better than those obtained by the proposed algorithm. Tables~\ref{NL} and~\ref{galaxy} show the results of experiments for NL and galaxy instances, respectively. The first columns of tables denote the number of teams,~$n$. The second ones are the total traveling distance obtained by the proposed algorithm. The third ones are the value of $n$ times the distance of a shortest Hamilton cycle, as a simple lower bound. The fourth ones are the percentages of the gap between the second and third columns. \begin{table} \caption{Results for the UTTP version of NL instances} \label{NL} \begin{tabular}{crrrr} \hline\noalign{\smallskip} $n$ & approx. &$n*\mbox{TSP}$ & gap (\%)${}^\dagger$& best UB \\ \noalign{\smallskip}\hline\noalign{\smallskip} 4 & 8276 & 8044 & 2.9 & 8276${}^\ddagger$ \\ 6 & 20547 & 17826 & 15.3 & 19900${}^\ddagger$ \\ 8 & 33190 & 27840 & 19.2 & 30700${}^\ddagger$ \\ 10 & 47930 & 38340 & 25.0 & 45605${}^\star$ \\ 12 & 81712 & 67200 & 21.6 & \\ 14 & 128358 & 103978 & 23.4 & \\ 16 & 156828 & 119088 & 31.7 & \\ \noalign{\smallskip}\hline \noalign{\smallskip} \multicolumn{5}{l}{${}^\dagger$gap is obtained by $(\frac{\mathrm{approx.}}{n*\mathrm{TSP}}-1)*100.0$} \\ \multicolumn{5}{l}{${}^\ddagger$optimal}\\ \multicolumn{5}{l}{${}^\star$best incumbent solution after 500,000 seconds} \end{tabular} \end{table} \begin{table} \caption{Results for the UTTP version of galaxy instances} \label{galaxy} \begin{tabular}{crrrr} \hline\noalign{\smallskip} $n$ & approx. &$n*\mbox{TSP}$ & gap (\%)${}^\dagger$& best UB \\ \noalign{\smallskip}\hline\noalign{\smallskip} 4 & 416 & 412 & 1.0 & 416${}^\ddagger$ \\ 6 & 1197 & 1068 & 12.1 & 1178${}^\ddagger$ \\ 8 & 2076 & 1672 & 24.2 & 1890${}^\ddagger$ \\ 10 & 3676 & 3020 & 21.7 & 3570${}^\star$ \\ 12 & 5514 & 4524 & 21.9 & \\ 14 & 7611 & 6216 & 22.4 & \\ 16 & 9295 & 7408 & 25.5 & \\ 18 & 12320 & 10026 & 22.9 & \\ 20 & 14739 & 11880 & 24.1 & \\ 22 & 19525 & 16522 & 18.2 & \\ 24 & 25026 & 21216 & 18.0 & \\ 26 & 32250 & 27846 & 15.8 & \\ 28 & 41843 & 36708 & 14.0 & \\ 30 & 52073 & 46410 & 12.2 & \\ 32 & 62093 & 55104 & 12.7 & \\ 34 & 77392 & 69326 & 11.6 & \\ 36 & 88721 & 78624 & 12.8 & \\ 38 & 103988 & 92568 & 12.3 & \\ 40 & 120895 & 107800 & 12.1 & \\ \noalign{\smallskip}\hline \noalign{\smallskip} \multicolumn{5}{l}{${}^\dagger$gap is obtained by $(\frac{\mathrm{approx.}}{n*\mathrm{TSP}}-1)*100.0$} \\ \multicolumn{5}{l}{${}^\ddagger$optimal}\\ \multicolumn{5}{l}{${}^\star$best incumbent solution after 500,000 seconds} \end{tabular} \end{table} Like most theoretical approximation algorithms, the obtained gaps are much better than the theoretical approximation ratio 2.75 (175\% gap). For the NL instances and the galaxy instances of up to 20~teams, the gap is around~25\%. For the galaxy instances of more than 20~teams, the gap is less than 20\%. Note that the gaps shown in the tables are from the ratio of the obtained distance to a lower bound, but not to optimal distance. Therefore the gaps between the obtained distance and the optimal value are still better than the gaps shown in the tables. \section{Conclusion} This paper proposed an approximation algorithm for the unconstrained traveling tournament problem, which is a variant of the traveling tournament problem. The approximation ratio of the proposed algorithm is 2.75, and the algorithm yields a solution satisfying the no-repeater and mirrored constraints. If a shortest Hamilton cycle on the home venues of the teams is available, the approximation ratio is improved to 2.25. Computational experiments showed that the algorithm generates solutions of good quality; the gap between the obtained solution and a simple lower bound is around~25\% for small instances (up to 20~teams) and is less than 20\% for larger instances.
{ "attr-fineweb-edu": 1.62793, "attr-cc_en_topic": 0, "domain": "arxiv" }
BkiUc6c5qsFAfvFfOxOF
\section{Introduction} In this paper we consider billiards in convex bodies and estimate the minimal length of a closed billiard trajectory. This kind of estimates is rather useful in different practical applications, see further references on this subject in~\cite{bb2009}. In~\cite{aao2012} Shiri Artstein-Avidan and Yaron Ostrover presented a unified symplectic approach to handle billiards in a convex body $K\subset V$ (here $V$ is a real vector space), whose trajectory length (and therefore the reflection rule) is given by a norm with unit ball $T^\circ$ (polar to a body $T\subset V^*$ containing the origin); throughout this paper we use the \emph{possibly non-standard} notation $\|\cdot\|_T$ for this norm with $T$ lying in the dual space. We emphasize that in this work the norm need not be symmetric, that is need not satisfy $\|q\| = \|-q\|$. Usually the term ``Minkowski billiard'' was used, but Minkowski norms are usually assumed to be symmetric, and we do not restrict ourselves to this particular case. The idea of~\cite{aao2012} is to interpret a billiard trajectory in $K$ with norm $\|\cdot \|_T$ as a characteristic on the boundary of the convex body $K\times T\subset V\times V^*$. The space $V\times V^*$ is the cotangent bundle of $V$ and carries a natural symplectic structure, and the surface $\partial (K\times T)$, in a sense, carries a contact structure, although some effort has to be made to handle it because it is not smooth at $\partial K\times \partial T$. The symplectic approach was rather useful and gave certain results about the number $\xi_T(K)$, that is the minimal $\|\cdot\|_T$-length of a closed billiard trajectory in $K$. In particular, in~\cite{aao2012} this number was shown to be equal to the Hofer--Zehnder capacity $c_{HZ}(K\times T)$, and it was proved that the number $\xi_T(K)$ is monotone in $T$ and $K$ under inclusions, and satisfies a certain Brunn--Minkowski type inequality. In the next paper~\cite{aaok2013} the inequality \begin{equation} \label{equation:symm-estimate} \xi_{K^\circ}(K) \ge 4 \end{equation} for centrally symmetric convex bodies was established with rather elementary techniques and it was noticed that, assuming the Viterbo conjecture for convex bodies $X\subset \mathbb R^{2n}$ \[ \vol(X) \ge \frac{c_{HZ}(X)^n}{n!}, \] the estimate (\ref{equation:symm-estimate}) would imply the famous Mahler conjecture~\cite{ma1939} \[ \vol K\cdot \vol K^\circ \ge \frac{4^n}{n!}. \] Mahler's conjecture is known so far in a weaker form with $\frac{\pi^n}{n!}$ on the right hand side, this is a result due to Greg Kuperberg~\cite{ku2008}. More detailed information on this conjecture is given in the blog post~\cite{tao2007} of Terence Tao and the paper~\cite{aaok2013}. For the Viterbo conjecture and its possible generalizations, we recommend the paper~\cite{apb2012} and the references therein. In this paper we invoke a more elementary and efficient approach, developed by K\'aroly Bezdek and Daniel Bezdek in~\cite{bb2009} for the Euclidean norm. It turns out that this approach remains valid without change for possibly asymmetric norms\footnote{These ideas for the Euclidean norm in the plane first appeared in~\cite{bc1989}; it was already mentioned there that more arbitrary distances (norms) can be considered similarly.}; it allows to give elementary proofs of most results of~\cite{aao2012}, worry less about the non-smoothness issues, and establish the inequality \[ \xi_{K^\circ}(K) \ge 2 + 2/n \] for possibly non-symmetric convex bodies $K$ containing the origin. The latter inequality is related to the non-symmetric Mahler conjecture, see the discussion in Section~\ref{section:mahler} below. \subsection*{Acknowledgments.} The authors thank Yaron Ostrover for numerous remarks and corrections and the unknown referee for a huge list of corrections that helped us improve the text. \section{Bezdeks' approach to billiards} Let us show how the results of~\cite{aao2012} can be approached using the elementary technique of~\cite{bb2009}. First, we consider an $n$-dimensional real vector space $V$, a convex body $K\subset V$, and define \[ \mathcal P_m(K) = \{(q_1,\ldots, q_m) : \{q_1,\ldots, q_m\} \ \text{does not fit into}\ \alpha K+t\ \text{with}\ \alpha\in (0,1),\ t\in V \}. \] Observe that ``does not fit into $\alpha K+t$, with $\alpha\in (0,1)$, $t\in V$'' is equivalent to ``does not fit into the interior of $K+t$ with $t\in V$''. \begin{center} \includegraphics{pic/bezdeks-billiards-figures-3} \refstepcounter{fig} Fig. \arabic{fig}. An element of $\mathcal P_3(K)$. \end{center} Then we consider a norm on $V$ such that the unit ball $T\subset V^*$ of its dual is smooth. We denote this norm by $\|\cdot \|_T$ following~\cite{aao2012}. Note that this norm need not be reversible in what follows, that is $\|q\|_T$ need not be equal to $\|-q\|_T$. We define the length of the closed polygonal line \[ \ell_T (q_1,\ldots, q_m) = \sum_{i=1}^m \|q_{i+1} - q_i\|_T, \] where indices are always modulo $m$. So the renovated result of~\cite{bb2009} reads: \begin{theorem} \label{theorem:bezdeks} For smooth convex bodies $K\subset V$ and $T\subset V^*$, the length of the shortest closed billiard trajectory in $K$ with norm $\|\cdot \|_T$ equals \[ \xi_T(K) = \min_{m\ge 1} \min_{P\in \mathcal P_m(K)} \ell_T(P). \] Moreover, the minimum is attained at $m\le n + 1$. \end{theorem} \begin{remark} The right hand side of the above formula is well defined without any assumption on the smoothness of $K$ and $T$. In what follows we use it as the definition of $\xi_T(K)$ even when neither $K$ nor $T$ are smooth. It makes sense to call the minimizer in this theorem a \emph{shortest generalized billiard trajectory}, which coincides with a shortest closed billiard trajectory in the case of smooth $K$ and $T$, as we will see from the proof of Theorem~\ref{theorem:bezdeks}. A shortest generalized billiard trajectory has the following geometrical meaning. Let $p$ be a non-smooth point of $\partial K$, we consider a trajectory $\ell$ through the point $p$ as a trajectory satisfying the reflection rule for \emph{some} normal to $K$ at $p$, that is we can take an arbitrary support hyperplane to $K$ at $p$ as if it were a tangent plane (Figure~\ref{fig:reflection rule}). The shortest generalized billiard trajectory in an obtuse triangle is shown in~Figure~\ref{fig:obtuse triangle}. It is a well known open problem whether there is a legal (not passing through any vertex) closed billiard trajectory in every obtuse triangle. \end{remark} \begin{center} \begin{tabular}{p{7.5cm}p{7.5cm}} \includegraphics{pic/bezdeks-billiards-figures-23} & \includegraphics{pic/bezdeks-billiards-figures-24} \\ \refstepcounter{fig} Fig. \arabic{fig}. \label{fig:reflection rule} The reflection rule at a non-smooth point & \refstepcounter{fig} Fig. \arabic{fig}. \label{fig:obtuse triangle} The shortest generalized billiard trajectory in an obtuse triangle. \end{tabular} \end{center} \begin{proof}[Proof of Theorem~\ref{theorem:bezdeks}] The proof in~\cite[Lemma~2.4]{bb2009} is given for the Euclidean norm; the same argument works in this more general case. We reproduce the steps here. First, let us recall the reflection rule (see~\cite{aaok2013}, for example): For a billiard trajectory $\{q_1, \ldots, q_m\}$ we have in $V^*$ \begin{equation} \label{equation:reflection} p_{i+1} - p_i = - \lambda_i n_K(q_i),\quad \lambda_i > 0. \end{equation} This reflection rule is obtained by using the Lagrange multiplier method to optimize the expression $\|q_{i+1} - q_i\|_T + \|q_i - q_{i-1}\|_T$ varying $q_i$ under the assumption that $q_i\in \partial K$. There arise the momenta $p_i$ that are obtained from the velocities $$ v_i = \frac{q_i - q_{i-1}}{\|q_i - q_{i-1}\|_T} $$ by taking the differential $p = d\|v\|_T$ (recall that the differential is in the dual space). From this definition it follows that $p_i\in \partial T$, and if we want to go back and determine the velocity $v_i$ we just take \[ v_i = d\|p_i\|_{T^\circ}, \] resulting in $v_i\in \partial T^\circ$. Here we need the smoothness of $T$ to define velocities knowing momenta and the smoothness of $K$ to define the normals to $K$. The normal $n_K$ at a boundary point of the convex body $K$ is also considered as a linear functional in $V^*$ of unit norm, having maximum on $K$ precisely at this point. After summation over $i$ in (\ref{equation:reflection}) we obtain $$ \sum_i \lambda_i n_K(q_i) = 0, $$ that is the normals at the bounce points $q_i$ surround the origin in $V^*$. This means that the set $\{q_1, \ldots, q_m\}$ cannot be covered by a smaller positive homothet of $K$. Indeed, assume that a homothet $\alpha K + t$ with $\alpha\in(0,1)$ covers all the points $\{q_i\}$, therefore the translate $K+t$ of $K$ contains $q_i$'s in its interior; here we assume that the origin of $V$ is contained in $K$ without loss of generality. Let $n_i$ be the normal (linear form) such that \[ \max_{q\in K} \langle n_i, q\rangle = \langle n_i, q_i\rangle. \] By the assumption that $\inte (K+t) \ni q_i$, \[ \langle n_i, t\rangle + \max_{q\in K} \langle n_i, q\rangle = \max_{q\in K} \langle n_i, q+t\rangle > \langle n_i, q_i\rangle = \max_{q\in K} \langle n_i, q\rangle, \] hence $\langle n_i, t\rangle > 0$, and summing such inequalities, we obtain \begin{equation} \label{equation:no-translate} \left\langle \sum_i \lambda_i n_i, t\right\rangle = \langle 0, t\rangle > 0, \end{equation} which is a contradiction. We conclude that a shortest closed billiard trajectory $Q_{min} = \{q'_1, \ldots, q'_{m'}\}$ must be an element of some $\mathcal P_{m'}(K)$. Now we go in the opposite direction and consider a polygonal line $Q=\{q_1,\ldots, q_m\}\in\mathcal P_m(K)$ on which the minimum is attained, including the minimum with respect to varying $m$. The previous paragraph shows that $\ell_T(Q) \le \ell_T(Q_{min})$. Applying the Helly theorem, we readily see that we can replace $Q$ by a subset with at most $m\le n+1$ points keeping the property of not fitting into a smaller homothet of $K$. In order to finish the proof, we must show that $Q$ is a generalized billiard trajectory on $K$. We can find a translate $K+t$ that contains $Q$; such a translate must exist because otherwise we could take a smaller homothet of $Q$, still not fitting into the interior of $K$; so $Q$ would not be the length minimizer in $\mathcal P_m(K)$. By~\cite[Lemma~2.2]{bb2009}, the assumption that $Q$ does not fit into a smaller homothet of $K$ is certified, possibly after omitting the $q_i$ lying in the interior of $K+t$, by considering a set of halfspaces $H^+_i\supseteq K+t$, with respective complementary halfspaces $H^-_i$ supporting $K+t$ such that $q_i\in H^-_i\cap K$, and the intersection $\cap_{i=1}^m H^+_i$ is \emph{nearly bounded} (that is lies between two parallel hyperplanes). This actually means that the outer normals $n_i$ to $K+t$ at the $q_i$ can be non-negatively combined to zero. From here on we assume without loss of generality that $t=0$ and write $K$ instead of $K+t$. We then observe that varying the $q_i$ inside their respective $H^-_i$ (and allowing to get outside $K$) we never obtain a configuration that can be put into a smaller homothet of $K$, because a smaller homothet of $K$ has to miss some $H^-_i$. This is established by the same argument with normals surrounding the origin resulting in (\ref{equation:no-translate}). Now let us try to minimize the length $\ell_T (q_1,\ldots, q_m)$ over \[ \mathcal H = \{(q_1, \ldots, q_m) : \forall i\ q_i\in H^-_i\}. \] We have shown that $\mathcal H \subseteq \mathcal P_m(K)$ and therefore $Q$ is also a length minimizer in $\mathcal H$. Now we conclude from minimizing the length that every $q_i$ must either be a ``fake'' vertex where $Q$ actually does not change its direction, or a vertex where $Q$ reflects from $H^-_i$ according to (\ref{equation:reflection}); the latter is readily obtained with the Lagrange multiplier method from the minimal length assumption. The ``fake'' vertices may be again omitted keeping the property $Q\in \mathcal P_m(K)$ with $m\le n+1$, since the triangle inequality holds for asymmetric norms as usual if we keep the order of the points. The reflection points $q_i$ are on $\partial K$, and the normals to $K$ at $q_i$ must equal the normals to the respective $H^+_i$. So we conclude that $Q$ is a billiard trajectory of $K$ obeying (\ref{equation:reflection}) and $\ell_T(Q) \ge \ell_T(Q_{min})$. Since the opposite inequality is established in the first half of the proof, the proof is complete. \end{proof} \section{Derivation of classical and of one new result} \subsection{Monotonicity of $\xi_T(K)$} Let us show how the results of~\cite{aao2012} on the function $\xi_T(K)$ follow easily from Theorem~\ref{theorem:bezdeks}. First, the monotonicity \begin{equation} \label{equation:monotonicity} \xi_T(K) \le \xi_T(L)\ \text{when }\ K\subseteq L \end{equation} follows easily because $\mathcal P_m(K)\supseteq \mathcal P_m(L)$ and the minimum can only get smaller on a larger set. \subsection{Symmetry} To prove the Brunn--Minkowski type inequality, like in~\cite{aao2012}, we need the following equality: \begin{equation} \label{equation:symmetry} \xi_T(K) = \xi_K(T). \end{equation} This is obvious in the symplectic approach; the idea~\cite{tabachnikov2005geometry} is essentially that closed billiard trajectories correspond to critical points of the action functional \[ \sum_{i=1}^m \langle p_{i+1}, q_{i+1} - q_i\rangle = \sum_{i=1}^m \langle p_{i} - p_{i+1}, q_i\rangle \] with constraints $q_1,\ldots, q_m\in \partial K$ and $p_1,\ldots, p_m\in \partial T$, and the value of this functional at a critical point equals \[ \sum_{i=1}^m \|q_{i+1}- q_i\|_T = \sum_{i=1}^m \|p_{i} - p_{i+1}\|_K. \] This argument uses the smoothness of $K$ and $T$ in an essential way, but the monotonicity property allows to approximate any convex body by smooth bodies from below and from above, and then to pass to the limit. \subsection{Brunn--Minkowski-type inequality} Having noted all this, we observe that for the Minkowski sum $S+T$ in $V^*$ we have in $V$: \[ \|\cdot \|_{S+T} = \|\cdot \|_S + \|\cdot \|_T. \] Then it follows that \[ \xi_{S+T}(K) \ge \xi_S(K) + \xi_T(K) \] because the minimum of the sum of functions is no less that the sum of the minima. After applying~(\ref{equation:symmetry}) this reads: \begin{equation} \label{equation:bm} \xi_T(K+L) \ge \xi_T(K) + \xi_T(L). \end{equation} \subsection{Estimates on $\xi_T(K)$} We can even prove something new with this technique, or the technique of~\cite{aao2012}. \begin{definition} Following~\cite{bb2009}, we call $K$ \emph{$2$-periodic with respect to $T$} if one of its shortest generalized billiard trajectories bounces on $\partial K$ only twice. \end{definition} We recall the main result of~\cite{aaok2013}: \begin{theorem}[Artstein-Avidan, Karasev, Ostrover, 2013] \label{theorem:aaok} If $K$ and $T$ are centrally symmetric and polar to each other $(T=K^\circ)$ then $\xi_T(K) = 4$. $K$ is $2$-periodic with respect to $T$ and every segment $[-q, q]$, for any $q\in\partial K$, is a shortest generalized billiard trajectory. \end{theorem} \begin{remark} There may be other minimal trajectories that are not $2$-bouncing if $K$ is not strictly convex. This can be seen already for the square $K=[-1,1]^2$. \end{remark} Having developed the appropriate technique, we give: \begin{proof}[The short new proof of Theorem~\ref{theorem:aaok}] Let us show that $\xi_T(K) \ge 4$. From Theorem~\ref{theorem:bezdeks} we conclude that it is sufficient to show that any closed polygonal line of length (in the given norm) less than $4$ can be covered by an open unit ball. This is done with the well-known folklore argument that follows. Assume a closed polygonal line $P$ has length less than $4$. Take points $x,y\in P$ that partition $P$ into two parts of equal lengths, each part will have length less than $2$. For any $z\in P$, lying in either of the parts, we compare the straight line segments and the segments of $P$ and deduce \[ \|z-x\| + \|z-y\| < 2 \] from the triangle inequality. Let $o$ be the midpoint of the segment $[xy]$. From the triangle inequality we also have \[ \|z - o\| \le \frac{1}{2}\left(\|z-x\| + \|z-y\|\right) <1. \] So we have proved that $P$ is covered by an open ball (a translate of the interior of $K$) with radius $1$ centered at $o$. By Theorem~\ref{theorem:bezdeks} this is not a billiard trajectory in $K$. So $\xi_T(K) \ge 4$ and actually the equality holds since every segment $[q,-q]$ with $q\in\partial K$ is a billiard trajectory of length $4$. \end{proof} \begin{center} \includegraphics{pic/bezdeks-billiards-figures-10} \refstepcounter{fig} Fig. \arabic{fig}. \label{fig:covering by ball} Explanation of the proof of Theorem~\ref{theorem:aaok}. \end{center} \begin{remark} Let $K$ be strictly convex. If the length of $P$ were $4$ then in the above argument the equality $\|z-x\| + \|z-y\| = 2$ will hold at most once on either half of $P$. So a translate of $K$ covers $K$ and $P$ has at most $2$ bounces. Actually, one bounce is impossible, so a $2$-bouncing trajectory is the only case of equality, and this trajectory must be the segment $[q,-q]$ for some $q\in\partial K$. If $K$ is not strictly convex then other minimal trajectories also exist. \end{remark} \begin{remark} If $K$ is a square in the plane, which is not smooth and not strictly convex, then there are plenty of minimal trajectories. Here a minimal trajectory is understood as something providing the minimum to the right hand side of the defining equation in Theorem~\ref{theorem:bezdeks}. Any segment connecting the two opposite sides of $K$ is such, and some of the quadrangles with vertices on the four sides of $K$ are also such. \end{remark} As another simple exercise, we establish the following result: \begin{theorem} \label{theorem:2bounce} Let $K$ be $2$-periodic with respect to $T$ and let $T$ be centrally symmetric. Then $K+\lambda T^\circ$ is also $2$-periodic with respect to $T$ for any $\lambda$. \end{theorem} \begin{proof} Consider one of the shortest closed billiard trajectories in $K$ bouncing at $q_1$ and $q_2$. From Theorem~\ref{theorem:aaok} we also know that $\xi_T(T^\circ) = 4$ and we can find a pair $\{-q, q\}\in \partial T^\circ$ that gives a shortest closed billiard trajectory in $T^\circ$ with length $4$ and such that $q$ is proportional to $q_2-q_1$. The minimality assumption for $\{q_1, q_2\}$ implies that the normals $-p$ and $p$ to $K$ at $q_1$ and $q_2$ are the same as the normals to $T^\circ$ at $-q$ and $q$ respectively. Then the pair of points $\{q_1-\lambda q, q_2+\lambda q\}$ is in the boundary of $K+\lambda T^\circ$ and the normals to $K+\lambda T^\circ$ at these points are again $-p$ and $p$. Now it follows that $\{q_1-\lambda q, q_2+\lambda q\}$ is a closed billiard trajectory in $K+\lambda T^\circ$ of length $\xi_T(K) + \lambda \xi_T(T^\circ)$. From~(\ref{equation:bm}) it follows that this trajectory is minimal. \end{proof} \section{Attempt toward the non-symmetric Mahler's conjecture} \label{section:mahler} In~\cite{aaok2013} Mahler's conjecture $\vol K\cdot \vol K^\circ \ge \frac{4^n}{n!}$ for centrally symmetric convex $n$-dimensional $K$ was reduced, assuming the Viterbo conjecture on symplectic capacities, to proving that \[ \xi_{K^\circ} (K) \ge 4, \] which is true, see Theorem~\ref{theorem:aaok} in the previous section. Dropping the assumption of the central symmetry, the corresponding version of Mahler's conjecture becomes (see~\cite{apbtz2013}): \[ \vol K \cdot \vol K^\circ \ge \frac{(n+1)^{n+1}}{(n!)^2} \] for convex bodies $K\subset\mathbb R^n$ containing the origin in the interior. Again, assuming Viterbo's conjecture, in order to deduce from it the non-symmetric Mahler conjecture, one would have to prove: \begin{equation} \label{equation:incorrect-bound} \xi_{K^\circ}(K) \ge \left(\frac{(n+1)^{n+1}}{n!}\right)^{1/n}, \end{equation} the right hand side being asymptotically $e$ by Stirling's formula. In fact, already for $n=2$ it is easy to check by hand, or look at Theorem~\ref{theorem:billiard-nonsymm} below, that the sharp estimate is \[ \xi_{K^\circ}(K) \ge 3, \] while (\ref{equation:incorrect-bound}) gives the number \[ \left(\frac{3^3}{2}\right)^{1/2}, \] which is greater than $3$. For higher dimensions, there also remains a gap between the actual lower bound for the billiard trajectory length and the bound needed to establish the non-symmetric Mahler conjecture, assuming the Viterbo conjecture. Namely, we are going to prove: \begin{theorem} \label{theorem:billiard-nonsymm} If $K\subset \mathbb R^n$ is a convex body containing the origin in its interior then \[ \xi_{K^\circ} (K) \ge 2 + 2/n, \] and the bound is sharp. \end{theorem} This theorem shows that the non-symmetric Mahler conjecture is out of reach of the billiard approach of~\cite{aaok2013}. \begin{proof} We invoke Theorem~\ref{theorem:bezdeks} and consider a closed polygonal line $P$ not fitting into a smaller homothet of $K$. By the same theorem we can assume that $P$ has vertices $q_1,\ldots, q_m$ with $m\le n + 1$. \begin{center} \begin{tabular}{p{7.5cm}p{7.5cm}} \includegraphics{pic/bezdeks-billiards-figures-14}& \includegraphics{pic/bezdeks-billiards-figures-15} \includegraphics{pic/bezdeks-billiards-figures-16}\\ \refstepcounter{fig} Fig. \arabic{fig}. \label{fig:length of the segment} Measuring the length of a directed segment. & \refstepcounter{fig} Fig. \arabic{fig}. \label{fig:erasing} Replacing $K$ with $L$. \end{tabular} \end{center} Observe that the norm $\|w\|_{K^\circ}$ of a vector $w\in V$ has a very simple meaning: Let $v\in \partial K$ be the vector positively proportional to $w$ and take \[ \|w\|_{K^\circ} = \frac{|w|}{|v|} \] using the standard Euclidean norm $|\cdot|$ (Figure~\ref{fig:length of the segment}). Now to measure the length of $P$ we take $v_1,\ldots, v_m\in\partial K$ to be positively proportional to $q_2-q_1, \ldots, q_1-q_m$ respectively; it follows that the origin can be expressed as a positive combination of the vectors $\{v_i\}_{i=1}^m$. If we replace $K$ by the body $L = \conv \{v_i\}_{i=1}^m$ of possibly smaller dimension, then it is easy to see that $L$ still contains the origin and \[ \ell_{K^\circ}(P) = \ell_{L^\circ}(P), \] since $v_i$'s are still on the boundary of $L$ (Figure~\ref{fig:erasing}). Moreover, $P$ cannot fit into a smaller homothet of $L$, since it does not fit into a smaller homothet of the larger body $K\supseteq L$. In this argument $\dim L$ may become less than $\dim K$; in this case we use induction on dimension, since we have the monotonicity of the estimate $2+2/(n-1) > 2 + 2/n$. The other case $\dim K = \dim L = n$ is only possible when $m=n+1$. We can therefore assume from the start that $L$ is a simplex. Now we are in the following situation, changing the indexing of vertices slightly. $L$ is a simplex with vertices $v_0, \ldots, v_n$ and their respective opposite facets $F_0,\ldots, F_n$, and $P$ is a closed polygonal line with vertices $q_0,\ldots, q_n$. From the first step of our construction, the following relations hold: \begin{equation} \label{equation:polyline} q_{i+1}-q_i = t_i v_i, \quad t_i > 0. \end{equation} Also, we can assume that $q_0,\ldots, q_n$ lie on the boundary of $L$, otherwise we can translate $P$ and inflate $L$, keeping the condition that $P$ does not fit into a smaller homothet of $L$, having eventually $P\subseteq L$ (Figure~\ref{fig:homothety of simplex}). By this the quantity $\ell_{L^\circ}(P)$ may only become smaller, and either all the vertices of $P$ will be on $\partial L$ or the dimension will drop and we use induction. \begin{center} \begin{tabular}{cc} \includegraphics{pic/bezdeks-billiards-figures-17} & \includegraphics{pic/bezdeks-billiards-figures-18}\\ \refstepcounter{fig} Fig. \arabic{fig}. \label{fig:homothety of simplex} The inflation of $L$. & \refstepcounter{fig} Fig. \arabic{fig}. \label{fig:billiard in a triangle} A billiard trajectory in the triangle. \end{tabular} \end{center} So either we use induction and drop the dimension of $L$, or we use (\ref{equation:polyline}) to conclude that the segment $[q_i, q_{i+1}]$ has direction $v_i$, the vector from the origin to a vertex of $L$. The latter implies that, if we look at $L$ along the line of sight $v_i$ then we see the facet $F_i$ and (strictly) do not see the other facets. Therefore the segment $[q_i, q_{i+1}]$ must start at $F_i$ and point into the interior of $L$, its endpoint $q_{i+1}$ must lie on some other $F_j$ ($j\neq i$), and if we extend this segment to a half-line beyond $q_{i+1}$ it must leave $L$ at $q_{i+1}$. Assuming $q_i\neq q_{i+1}$ (otherwise we have less points and the dimension drops) we obtain, in particular, that the point $q_i$ can only lie in the relative interior of its respective $F_i$. We see that $q_i$ is the projection of $q_{i+1}$ onto $F_i$ parallel to $v_i$. If we apply these projections cyclically starting from $q_i\in F_i$ and ending at the same point then we obtain a map that takes $F_i$ into its relative interior and that is linear on $F_i$. Such a map has a unique fixed point. So it follows that having chosen $L$ with a cyclic order on its facets we can reconstruct the considered polygonal line $P$ uniquely. Another way to show the uniqueness is to observe that the condition (\ref{equation:polyline}) implies $\sum_{i=0}^n t_i v_i = 0$ and therefore determines the $t_i$ uniquely up to a positive multiple. Hence the polygonal line $P$ is determined uniquely up to translation and a positive homothety, and the additional property $q_i\in F_i$ fixes it completely. Now we are going to consider everything in barycentric coordinates. Let $(m_0,\ldots, m_n)$ be the barycentric coordinate of the origin in $L$. Then it is not hard to express the $q_i$ in terms of the $v_i$. We are going to index everything cyclically modulo $n+1$ and we put \[ M = \sum_{0\le k < l \le n} m_k m_l. \] From the Schur concavity of the elementary symmetric functions it follows that $M$ takes its maximum value at $m_0=\dots = m_n = \frac{1}{n+1}$ and therefore $M\le \frac{n}{2n+2}$. We have already shown the uniqueness of the $q_i$ after the choice of the order of the projections along the $v_i$ to facets. It remains to guess the expression for $q_i$ and prove that it gives the solution. Our guess is \[ q_i=\frac{\sum\limits_{j\ne i} \sum\limits_{k=i}^{j-1} m_jm_k v_j}{M}, \] where the inner summation goes cyclically from $i$ to $j-1$, so it is allowed that $j-1<i$. First, it is easy to observe that the sum of all coefficients in the numerator equals $M$, because every monomial $m_km_l$ is used precisely once. Therefore we have $q_i\in F_i$. Then we express the vector $q_{i+1}-q_i$: \[ q_{i+1}-q_i=\frac{\sum\limits_{j\ne i+1} \sum\limits_{k=i+1}^{j-1} m_jm_k v_j -\sum\limits_{j\ne i} \sum\limits_{k=i}^{j-1} m_jm_k v_j}{M} = \frac{\sum\limits_{j\ne i}m_im_j v_i-\sum\limits_{j\ne i} m_jm_i v_j}{M}. \] Since $\sum m_j v_j=0$, we obtain $\sum\limits_{j\ne i} m_jm_i v_j=-m_i^2 v_i$. And from $\sum_j m_j=1$ we get $\sum\limits_{j\ne i}m_im_j v_i+m_i^2 v_i=m_i v_i$. Finally, \[ q_{i+1}-q_i=\frac{m_i v_i}{M} \text{ and } t_i=\frac{m_i}{M}. \] Now we can bound the sum of $t_i$ from below: \[ \sum\limits_i t_i=\sum\limits_{i}\frac{m_i}{M}=\frac{1}{M}\geq \frac{2n+2}{n}. \] This means that the length of $P$ in the norm with unit ball $L$ is at least $2 +2/n$, and with all $m_i$ equal this bound is actually attained. Since it is possible to approximate $L$ by a smooth body, whose polar is also smooth, keeping the trajectory and its length the same, we conclude that the bound is sharp even in the class of smooth bodies $K$ with smooth polars. \end{proof} \begin{remark} A more rigorous analysis of the trajectory $q_1\dots q_{n+1}$ (Figure~\ref{fig:billiard in a triangle}) shows that a trajectory in the simplex passing through every facet is locally minimal if and only if its segments are parallel to the segments $ov_i$ in some order. One curious thing follows from the proof of the theorem. If we fix a simplex $L$ with the origin inside then there are $(n-1)!$ cyclic orders on the vertices, and therefore $(n-1)!$ trajectories inscribed in it with edges parallel to the respective vectors $v_i$. These (billiard) trajectories are evidently different, but all corresponding edges in all the trajectories have the same length. One consequence of this observation is that if we consider a trajectory $q_0\dots q_n$ and draw the hyperplane $h_i$ through the midpoint of every segment $q_iq_{i+1}$, parallel to the facet $F_i$ of $L$, then all these hyperplanes $h_i$ intersect in a single point. \end{remark} \begin{center} \includegraphics{pic/bezdeks-billiards-figures-19} \refstepcounter{fig} Fig. \arabic{fig}. The two trajectories in the two-dimensional triangle. \end{center} The proof of Theorem~\ref{theorem:billiard-nonsymm} also reveals the following formula: Let $\ell_i$ be the length of the Cevian\footnote{\emph{Cevians} of a simplex $L$ are $n+1$ segments connecting the vertices $v_i$ with their respective opposite facets $F_i$ and all having a common point in the interior of $L$.} of $L$ passing through the vertex $v_i$ and the origin. Then for any closed polygonal line $P=(q_0,\ldots, q_n)$ with $q_i\in F_i$ and $q_{i+1}-q_i = t_i v_i$ with $t_i>0$ we have \[ \sum \frac{|q_{i+1}-q_{i}|}{\ell_i}=2. \] Indeed, $\frac{|v_i|}{\ell_i}=\sum \limits_{j\ne i} m_j$, since the $m_i$ are the barycentric coordinates of the origin. So we obtain \[ \sum \frac{|q_{i+1}-q_{i}|}{\ell_i} =\frac{\sum \limits_i m_i\left(\sum \limits_{j\ne i} m_j\right)}{M}=\frac{2M}{M}=2. \]
{ "attr-fineweb-edu": 1.600586, "attr-cc_en_topic": 0, "domain": "arxiv" }
BkiUdUw4uzlh0_z-eEjo
\section{Introduction} A new sports league is being established in a city, and the league organizers need to allocate the available players to the participating teams. How can they accomplish this task in a satisfactory way, so that all parties involved can look forward to the upcoming season instead of grumble about the allocation? A principal consideration in such allocation tasks is fairness, and the problem of fairly dividing resources (in this case, players) among interested recipients (in this case, teams) has long been studied under the name of \emph{fair division} \citep{BramsTa96,Moulin03}. One of the most prominent fairness notions is \emph{envy-freeness}, which means that no team should envy another team based on the sets of players that they receive.\footnote{Due to the setting that we study, throughout this paper we will use the terms \emph{team} and \emph{player} in place of the standard fair division terms \emph{agent} and \emph{item}, respectively.} Even though an envy-free allocation may not exist (e.g., if there is one highly-coveted superstar), an intuitive relaxation called \emph{envy-freeness up to one player (EF1)}---that is, any envy that one team has toward another team can be eliminated upon the removal of some player in the envied team---can always be fulfilled \citep{LiptonMaMo04}. Another relevant criterion is \emph{balancedness}, which requires the players to be distributed as equally among the teams as possible.\footnote{One could view balancedness as EF1 with respect to the number of allocated players.} Balancedness can be especially desirable when allocating players to sports teams, as each team may need to have a fixed number of players due to the rules of the sport. Assuming that teams have additive and nonnegative values for players, an allocation that is both EF1 and balanced always exists and can be found efficiently via a simple round-robin algorithm.\footnote{See, for example, \citep[p.~7]{CaragiannisKuMo19}.} While EF1 provides a strong fairness guarantee with respect to the teams' preferences, it overlooks the fact that the players may have preferences over the teams as well, for example, depending on their familiarity with the team managers or the proximity of their residence to the training grounds. Clearly, ignoring the preferences of the players may lead to a suboptimal allocation. As an extreme case, if every team is indifferent between all players, then swapping a pair of players keeps the teams as happy as before and may make both of the swapped players much happier. In addition to our sports league example, two-sided preferences also occur in the allocation of employees to different branches of a restaurant chain or volunteers to community service clubs. Moreover, the player preferences could signify the \emph{suitability} of the teams for the players---for instance, the ``players'' could represent tasks (such as household chores or papers to be reviewed) and the ``teams'' have varying levels of ability to perform the tasks. Can we find an allocation that is fair to the teams and at the same time takes into account the preferences of the players? \subsection{Our Results} Following a large portion of work in fair division, we assume that the teams have additive valuations over the players. Some of our results allow these values to be either positive or negative; this corresponds to the allocation of \emph{indivisible goods and chores} \citep{AzizCaIg22}. For consistency of terminology, we will use the terms \emph{nonnegative-value players} and \emph{nonpositive-value players} instead of \emph{goods} and \emph{chores}, respectively. In \Cref{sec:swap-stability}, we focus on \emph{swap stability}, the requirement that no swap between two players makes at least one of the four involved parties better off and none of them worse off. First, we observe that even with nonnegative-value players, starting with an arbitrary EF1 allocation and letting players make beneficial swaps may result in an allocation that violates EF1. Despite this fact, for teams with arbitrary (positive or negative) values over players, we present a polynomial-time algorithm that produces a balanced and swap stable allocation satisfying \emph{EF[1,1]}, a relaxation of EF1 where one player may be removed from each of the envying team and the envied team.\footnote{When both positive and negative values are allowed, EF1 permits one player to be removed from either the envying team or the envied team (but not both) \citep{AzizCaIg22}.} Since EF[1,1] reduces to EF1 for nonnegative-value players as well as for nonpositive-value players, we obtain the same result for EF1 in each of these cases. We then note two ways in which the arbitrary-value result cannot be improved: EF[1,1] cannot be strengthened to EF1, and we cannot simultaneously attain \emph{individual stability}---the condition that no deviation of a player to another team makes the player better off and neither of the involved teams worse off. Nevertheless, we show that if we give up balancedness, both of these improvements become possible: an allocation that satisfies EF1, swap stability, and individual stability exists and can be found efficiently. Our results in this section are summarized in \Cref{table:summary}. \renewcommand{\arraystretch}{1.2} \begin{table*}[!t] \centering \begin{tabular}{| c | c |} \hline \textbf{Properties} & \textbf{Existence guarantee} \\ \hline \hline EF[1,1] + balanced + swap stable & Yes (\Cref{thm:balanced}) \\ \hline EF1 + balanced & No (\Cref{prop:balanced-EF1}) \\ \hline balanced + individually stable & No (\Cref{prop:balanced-IS}) \\ \hline EF1 + swap stable + individually stable & Yes (\Cref{thm:individual-stable}) \\ \hline \end{tabular} \caption{Summary of our results in \Cref{sec:swap-stability} on whether each combination of properties can always be satisfied simultaneously. Each positive result comes with a polynomial-time algorithm that computes an allocation satisfying the corresponding combination of properties.} \label{table:summary} \end{table*} Next, in \Cref{sec:PO}, we consider the notion of \emph{Pareto optimality (PO)}---no allocation can make a party (i.e., either player or team) better off without making another party worse off---which is stronger than both swap stability and individual stability. We prove that deciding whether an allocation is PO or not is coNP-complete even for two teams with identical valuations, nonnegative-value players, and a balanced allocation. On the other hand, for two teams with arbitrary valuations, we show that an extension of the \emph{generalized adjusted winner procedure} of \citet{AzizCaIg22} computes an EF1 and PO allocation in polynomial time. For any number of teams and nonnegative-value players, we observe that an EF1 and PO allocation always exists. Moreover, we demonstrate that such an allocation can be found efficiently in two special cases: (i)~the teams have binary valuations over the players, and (ii) there are three teams with identical valuations, and each player has a favorite team and is indifferent between the other two teams. We also provide a pseudopolynomial-time algorithm when there are a constant number of teams. \subsection{Related Work} Even though fair division has given rise to a sizable literature, the vast majority of the literature assumes one-sided preferences---in our terminology, the teams have preferences over the players, but not vice versa. A small number of recent papers have combined fairness concepts with two-sided preferences. \citet{FreemanMiSh21} considered many-to-many matching and proposed the notion of \emph{double envy-freeness up to one match (DEF1)}, which requires EF1 to hold for both sides simultaneously. Note that in our many-to-one setting, DEF1 is meaningless on the player side because it is always trivially satisfied. \citet{GollapudiKoPl20} studied many-to-many matching in a dynamic setting; their positive results primarily hold for symmetric binary valuations, which are much more restrictive than the valuations that we allow. \citet{PatroBiGa20} investigated fairness in two-sided platforms between producers and customers, but assumed that producers are indifferent between customers. A notion of envy that has been examined in many-to-one matching is \emph{justified envy} \citep{WuRo18,Yokoi20}. Assuming that not all players have to be matched, a player~$p_i$ is said to have justified envy toward another player~$p_j$ assigned to team~$k$ provided that $k$ prefers $p_i$ to $p_j$ and either $p_i$ is unassigned or $p_i$ prefers team~$k$ to her assigned team. In that line of work, a matching is called \emph{envy-free} if no player has justified envy toward another player. This notion of envy is fundamentally different from the one that we study, which concerns the envy between teams. Finally, while most fair division papers assume that the items (in our terminology, players) are goods and some assume that they are chores, a recent line of work has relaxed these assumptions by allowing items to be either goods or chores, with this evaluation possibly varying across agents (in our terminology, teams) \citep{AleksandrovWa20,AzizRe20,BercziBeBo20,KulkarniMeTa21,AzizCaIg22}. \section{Preliminaries} For each positive integer $z$, let $[z] \coloneqq \{1,\dots,z\}$. Let $T = [n]$ be the set of teams and $P = \{p_1,\dots,p_m\}$ the set of players; we sometimes refer to either a team or a player as a \emph{party}. Each player $p\in P$ has a weak transitive preference $\succsim_p$ over the teams; denote by $\succ_p$ and $\sim_p$ the strict and equivalence part of $\succsim_p$, respectively. The \emph{rank} of a team $i$ for a player $p$ is defined as $1$ plus the number of teams $j$ such that $j\succ_p i$. Each team $i\in T$ has a valuation function (or utility function) $v_i\colon 2^P\to\mathbb{R}$ over subsets of players. We assume that the valuations are additive, i.e., $v_i(P') = \sum_{p\in P'}v_i(\{p\})$ for all $i\in T$ and $P'\subseteq P$. For convenience, we write $v_i(p)$ instead of $v_i(\{p\})$. An \emph{instance} consists of the teams and players, as well as the valuations and preferences of both sides. Sometimes we will consider the setting of \emph{nonnegative-value players} (resp., \emph{nonpositive-value players}), which means that $v_i(p) \ge 0$ (resp., $v_i(p) \le 0$) for all $i\in T$ and $p\in P$. An \emph{allocation} $A=(A_1,A_2,\ldots,A_n)$ is an ordered partition of $P$ into $n$ parts, where the part~$A_i$ is assigned to team~$i$. We will investigate several fairness and stability notions for allocations. A basic fairness consideration on the team side is (almost) envy-freeness. \begin{definition} An allocation $A$ is said to satisfy \begin{itemize} \item \emph{EF1} if for any distinct $i,j\in T$, it holds that $v_i(A_i\setminus X)\ge v_i(A_j\setminus Y)$ for some $X\subseteq A_i$ and $Y\subseteq A_j$ with $|X\cup Y|\le 1$; \item \emph{EF[1,1]} if for any distinct $i,j\in T$, it holds that $v_i(A_i\setminus X)\ge v_i(A_j\setminus Y)$ for some $X\subseteq A_i$ and $Y\subseteq A_j$ with $|X|,|Y|\le 1$. \end{itemize} \end{definition} EF1 was first studied for nonnegative-value players by \citet{LiptonMaMo04} and subsequently extended to arbitrary-value players by \citet{AzizCaIg22},\footnote{The name EF1 was coined by \citet{Budish11}.} while EF[1,1] was recently introduced by \citet{ShoshanSeHa22}. It follows from the definition that EF1 implies EF[1,1]. Moreover, if all players yield nonnegative value, there is no reason to remove a player from $A_i$, so EF1 and EF[1,1] coincide in this case; an analogous statement holds for nonpositive-value players with $A_i$ replaced by $A_j$. Our next criterion is balancedness, which stipulates that the players are distributed as equally among the teams as possible. \begin{definition} An allocation $A$ is said to be \emph{balanced} if $\big||A_i|-|A_j|\big|\le 1$ for all $i,j\in T$. \end{definition} Observe that if there exists a constant $c\ne 0$ such that $v_i(p) = c$ for all $i\in T$ and $p\in P$, then both EF1 and EF[1,1] coincide with balancedness. We now define stability concepts, several of which take into account the preferences of both sides. \begin{definition} An allocation $A$ is said to be \begin{itemize} \item \emph{swap stable} if for any distinct $i,j\in T$, $p\in A_i$, and $q\in A_j$, it is not the case that swapping $p$ and $q$ makes at least one of the four involved parties better off and none of them worse off (we refer to such a swap as a \emph{beneficial swap});\footnote{We say that a party is \emph{better off} (resp., \emph{worse off}) if it receives a better (resp., worse) outcome with respect to its valuation function (for a team) or preference (for a player).} \item \emph{individually stable} if there is no player $p$ such that $p$ is better off deviating to another team and this deviation makes neither of the teams involved worse off (we refer to such a deviation as a \emph{beneficial deviation}). \end{itemize} \end{definition} \begin{definition} An allocation $A$ is said to be \emph{Pareto dominated} by another allocation $A'$ if no party is worse off in $A'$ than in $A$ and at least one party is better off; in this case, $A'$ is a \emph{Pareto improvement} of $A$. An allocation $A$ is \emph{Pareto optimal (PO)} if it is not Pareto dominated by any other allocation. We define \emph{team-Pareto dominated}, \emph{team-Pareto optimal (team-PO)}, \emph{player-Pareto dominated}, and \emph{player-Pareto optimal (player-PO)} similarly, with ``party'' replaced by ``team'' and ``player'', respectively. \end{definition} Although PO clearly implies both swap stability and individual stability, it implies neither team-PO nor player-PO, as the following two propositions show. \begin{proposition} PO does not necessarily imply team-PO. \end{proposition} \begin{proof} Consider the following instance with $n = m = 2$: \begin{itemize} \item $v_1(p_1) = v_1(p_2) = v_2(p_1) = 1$ and $v_2(p_2) = 0$; \item $1\succ_{p_1} 2$ and $2\succ_{p_2} 1$. \end{itemize} The allocation $A = \big(\{p_1\}, \{p_2\}\big)$ is team-Pareto dominated by the allocation $A'=\big(\{p_1,p_2\},\emptyset\big)$, so $A$ is not team-PO. However, $A$ is PO since each player is already assigned to her unique favorite team. \end{proof} \begin{proposition} PO does not necessarily imply player-PO. \end{proposition} \begin{proof} Consider the following instance with $n = m = 2$: \begin{itemize} \item $v_1(p_1) = v_1(p_2) = v_2(p_1) = v_2(p_2) = 1$; \item $1\succ_{p_1} 2$ and $1\succ_{p_2} 2$. \end{itemize} The allocation $A = \big(\{p_1\}, \{p_2\}\big)$ is player-Pareto dominated by the allocation $A'=\big(\{p_1,p_2\},\emptyset\big)$, so $A$ is not player-PO. However, one can check that $A$ is PO. \end{proof} \section{Swap Stability} \label{sec:swap-stability} In this section, we focus on swap stability. A natural idea for obtaining an EF1 and swap stable allocation is to start with an arbitrary EF1 allocation and let players swap as long as a beneficial swap exists. Note that determining whether beneficial swaps exist (and, if so, finding such a swap) can be done in polynomial time, since we can simply check all pairs of players. However, as can be seen in the following example, this approach does not always result in an EF1 allocation, even for nonnegative-value players. \begin{example}\label{ex:swapBreakEF1} Consider the following instance with $n = 3$ and $m = 6$: \begin{itemize} \item $v_i(p_j) = 0$ for $i\in[2]$ and $j\in [6]$; \item $v_3(p_1) = v_3(p_2) = 1$ and $v_3(p_3) = v_3(p_4) = v_3(p_5) = v_3(p_6) = 0$; \item each player has a unique favorite team and is indifferent between the other two teams: $p_1$ and $p_2$ prefer team~$1$, $p_4$ and $p_5$ prefer team~$2$, and $p_3$ and $p_6$ prefer team~$3$. \end{itemize} The allocation $A = \big(\{p_1,p_4\},\{p_2,p_5\},\{p_3,p_6\}\big)$ is EF1. The swap between $p_2$ and $p_4$ is the unique beneficial swap; let $A'$ be the allocation after this swap. The allocation $A'$ is swap stable, but it is not EF1 because team~$3$ envies team~$1$ by more than one player. \end{example} In spite of this example, we show that an EF[1,1] and swap stable allocation that is moreover balanced always exists and can be found efficiently. \begin{theorem}\label{thm:balanced} For any instance, a balanced allocation that satisfies EF[1,1] and swap stability exists and can be computed in polynomial time. \end{theorem} Since EF[1,1] reduces to EF1 for nonnegative-value players as well as for nonpositive-value players, \Cref{thm:balanced} implies the following corollary. \begin{corollary} For any nonnegative-value player instance, a balanced allocation that satisfies EF1 and swap stability exists and can be computed in polynomial time. The same holds for any nonpositive-value player instance. \end{corollary} At a high level, our algorithm proceeds in a round-robin manner, but instead of assigning a player to a team in each turn, it only assigns a player's \emph{value} to the team; this ensures that more possibilities are available in later turns. Then, among the allocations that satisfy the determined values for teams, it computes an allocation that minimizes the sum of the players' ranks for the teams. Formally, the algorithm is described as Algorithm~\ref{alg:swap-stable-rr}. For each positive integer $q$, we denote by $f(q)$ the unique integer in $[n]$ such that $f(q)\equiv q\pmod{n}$. Also, for a matching $\mu$ with $(q,p)\in \mu$, we define the notation $\mu_q$ and $\mu_p$ so that $\mu_q = p$ and $\mu_p = q$. Note that each $q\in Q$ corresponds to a copy of team~$f(q)$. \begin{algorithm}[htb] \caption{EF[1,1], swap stable, and balanced algorithm}\label{alg:swap-stable-rr} Construct a complete bipartite graph $G=(Q,P; E)$ with weight function $w\colon E\to\mathbb{R}$ where $Q=[m]$ and $w(q,p)=v_{f(q)}(p)$ for each $(q,p)\in Q\times P$\; Compute a perfect matching $\mu\subseteq Q\times P$ such that the weight of the edge adjacent to vertex $1\in Q$ is as large as possible, and subject to this condition, the weight of the edge adjacent to vertex $2\in Q$ is as large as possible, and so on until vertex $m\in Q$\;\label{line:perfect-matching} Let $E^*=\{(q,p)\in Q\times P \mid w(q,p)=w(q,\mu_q)\}$\; Compute a perfect matching $\mu^*$ in $G^*=(Q,P; E^*)$ such that the sum over all players $p\in P$ of the rank of team~$f(\mu^*_p)$ for player~$p$ is minimized\;\label{line:matching-special} Return the allocation $A$ such that $p$ is allocated to team~$f(q)$ for each $(q,p)\in \mu^*$\; \end{algorithm} It is clear that the allocation produced by \Cref{alg:swap-stable-rr} is balanced. To establish \Cref{thm:balanced}, we prove the remaining properties of the algorithm, including its polynomial running time, in the following three lemmas. \begin{lemma} The output allocation $A$ of Algorithm~\ref{alg:swap-stable-rr} is EF[1,1]. \end{lemma} \begin{proof} Fix distinct $i,j\in T$. It suffices to show that $v_i(A_i\setminus X)\ge v_i(A_j\setminus Y)$ for some $X\subseteq A_i$ and $Y\subseteq A_j$ with $|X|,|Y|\le 1$. The statement holds trivially if $m\le n$ since each team receives at most one player, so assume that $m > n$. We consider three cases. First, suppose that $|A_i|=|A_j|$. Let $k \coloneqq|A_i| \ge 1$. Then, we have \begin{align*} v_i(A_i\setminus\{\mu^*_{n(k-1)+i}\}) &= \sum_{\ell=1}^{k-1} v_i(\mu^*_{n(\ell-1)+i})\\ &= \sum_{\ell=1}^{k-1} v_i(\mu_{n(\ell-1)+i})\\ &\ge \sum_{\ell=1}^{k-1} v_i(\mu_{n\ell+j}) = \sum_{\ell=2}^{k} v_i(\mu_{n(\ell-1)+j}) = \sum_{\ell=2}^{k} v_i(\mu^*_{n(\ell-1)+j}) = v_i(A_j\setminus\{\mu^*_{j}\}), \end{align*} where $v_i(\mu_{n(\ell-1)+i}) \ge v_i(\mu_{n\ell + j})$ holds because otherwise the weight of the edge in $\mu$ adjacent to vertex $n(\ell-1)+i \in Q$ can be increased without decreasing the weights of the edges adjacent to vertices $1,2,\dots,n(\ell-1)+i-1$, contradicting the definition of $\mu$. Next, suppose that $|A_i| > |A_j|$; in particular, it must be that $i < j$. Let $|A_i|=k$ and $|A_j|=k-1$. Applying a similar argument, we have \begin{align*} v_i(A_i\setminus\{\mu^*_{n(k-1)+i}\}) &= \sum_{\ell=1}^{k-1} v_i(\mu^*_{n(\ell-1)+i})\\ &= \sum_{\ell=1}^{k-1} v_i(\mu_{n(\ell-1)+i}) \ge \sum_{\ell=1}^{k-1} v_i(\mu_{n(\ell-1)+j}) = \sum_{\ell=1}^{k-1} v_i(\mu^*_{n(\ell-1)+j}) = v_i(A_j). \end{align*} Finally, suppose that $|A_i| < |A_j|$; in particular, it must be that $i > j$. Let $|A_i|=k-1$ and $|A_j|=k$. Applying a similar argument once more, we have \begin{align*} v_i(A_i) &= \sum_{\ell=1}^{k-1} v_i(\mu^*_{n(\ell-1)+i})\\ &= \sum_{\ell=1}^{k-1} v_i(\mu_{n(\ell-1)+i})\\ &\ge \sum_{\ell=1}^{k-1} v_i(\mu_{n\ell+j}) = \sum_{\ell=2}^{k} v_i(\mu_{n(\ell-1)+j}) = \sum_{\ell=2}^{k} v_i(\mu^*_{n(\ell-1)+j}) = v_i(A_j\setminus\{\mu^*_{j}\}). \end{align*} Hence, in all three cases, the allocation $A$ is EF[1,1]. \end{proof} \begin{lemma} The output allocation $A$ of Algorithm~\ref{alg:swap-stable-rr} is swap stable. \end{lemma} \begin{proof} Let us consider a swap between players $\mu^*_{q}$ and $\mu^*_{r}$, where $q,r\in Q$ with $q<r$. Suppose that this swap is \emph{possibly} a beneficial swap, i.e., $v_{f(q)}(\mu^*_q)\le v_{f(q)}(\mu^*_r)$, $v_{f(r)}(\mu^*_r)\le v_{f(r)}(\mu^*_q)$, $f(q)\precsim_{\mu^*_q}f(r)$, and $f(r)\precsim_{\mu^*_r}f(q)$. We will show that this swap cannot make any of the involved parties better off. Denote by $\mu^{**}$ the matching that results from this swap. If $v_{f(q)}(\mu^*_q) < v_{f(q)}(\mu^*_r)$, the matching $\mu$ can be improved by using $\mu^{**}$ instead, a contradiction. So $v_{f(q)}(\mu^*_q) = v_{f(q)}(\mu^*_r)$. Similarly, if $v_{f(r)}(\mu^*_r) < v_{f(r)}(\mu^*_q)$, then because $v_{f(q)}(\mu^*_q) = v_{f(q)}(\mu^*_r)$, the matching $\mu$ can again be improved by using $\mu^{**}$ instead, a contradiction. So $v_{f(r)}(\mu^*_r) = v_{f(r)}(\mu^*_q)$. Hence, the matching $\mu^{**}$ after the swap remains a feasible perfect matching in $G^*$. As $\mu^*$ minimizes the sum of the players' rank for teams among the perfect matchings in $G^*$, we get $f(q)\sim_{\mu^*_q}f(r)$ and $f(r)\sim_{\mu^*_r}f(q)$. Therefore, the swap is not a beneficial swap, and the allocation $A$ is swap stable. \end{proof} \begin{lemma} \Cref{alg:swap-stable-rr} can be implemented to run in polynomial time. \end{lemma} \begin{proof} We first focus on computing the matching $\mu$ in \Cref{line:perfect-matching}. The weight $w(1,\mu_1)$ can be found by simply taking the largest weight of an edge adjacent to vertex $1\in Q$ in $G$. Given $w(1,\mu_1), \dots, w(i-1,\mu_{i-1})$, to determine $w(i,\mu_i)$, we delete all edges $(q,p)$ with $1\le q \le i-1$ such that $w(q,p) \ne w(q,\mu_q)$ from $G$, change the weight of all edges $(q,p)$ with $i+1\le q\le m$ to $0$, and compute a maximum-weight perfect matching in the resulting graph. Note that this matching can be found in time $O(m^3)$ \citep{Tomizawa71}. Once we have $w(1,\mu_1),\dots,w(m,\mu_m)$, we can construct~$G^*$ in \Cref{line:matching-special} by keeping only the edges $(q,p)$ in~$G$ such that $w(q,p) = w(q,\mu_q)$. Finally, to compute $\mu^*$, we reassign the weight of each edge $(q,p)$ in $G^*$ to be the rank of player~$p$ for team~$f(q)$ and find a minimum-weight perfect matching in $G^*$. \end{proof} We now observe two ways in which \Cref{thm:balanced} cannot be improved. First, the condition EF[1,1] cannot be strengthened to EF1.\footnote{This observation was also made by \citet{ShoshanSeHa22}.} \begin{proposition} \label{prop:balanced-EF1} Even for two teams with identical valuations, there does not necessarily exist a balanced EF1 allocation. \end{proposition} \begin{proof} Consider an instance with $n = m = 2$ such that both teams have value $1$ for $p_1$ and $-1$ for $p_2$. Clearly, no balanced allocation is EF1. \end{proof} Second, we cannot add individual stability to the list of guarantees. \begin{proposition} \label{prop:balanced-IS} Even for two teams and nonnegative-value players, there does not necessarily exist a balanced and individually stable allocation. \end{proposition} \begin{proof} Consider an instance with $n = m = 2$ such that team~$1$ has value~$1$ for each player, team~$2$ has value~$0$ for each player, and both players strictly prefer team~$1$ to team~$2$. The only individually stable allocation assigns both players to team~$1$, but this allocation is not balanced. \end{proof} Nevertheless, if we give up balancedness, we can attain EF1, swap stability, and individual stability simultaneously. To this end, we combine \Cref{alg:swap-stable-rr} with the \emph{double round-robin algorithm} introduced by \citet{AzizCaIg22}. In the first phase, the players who yield nonnegative value to at least one team are allocated by \Cref{alg:swap-stable-rr} in the forward order of the teams, while in the second phase, the remaining players are allocated by \Cref{alg:swap-stable-rr} in the backward order of the teams. Intuitively, EF1 is guaranteed because, for each pair of teams $i$ and $j$ with $i<j$, $i$ does not envy $j$ in the first phase whereas $j$ does not envy $i$ in the second phase. Moreover, we add a sufficient number of dummy players, who yield value $0$ to every team and are indifferent between all teams, in order to guarantee individual stability. This leads to each team receiving at least one dummy player, and a beneficial deviation in the resulting situation can be captured by a beneficial swap between the deviating player and a dummy player. The algorithm is formally described as \Cref{alg:SS-EF1}. \begin{algorithm}[htb] \caption{EF1, swap stable, and individually stable algorithm}\label{alg:SS-EF1} Partition $P$ into $P^+ \coloneqq\{p\in P \mid \max_{i\in T}v_i(p)\ge 0\}$ and $P^- \coloneqq\{p\in P \mid \max_{i\in T}v_i(p)< 0\}$\; Let $\widehat{P}^+$ consist of $P^+$ together with $(n-1)|P^+|+n$ dummy players, where each dummy player yields value $0$ to every team and is indifferent between all teams\; Let $\widehat{P}^-$ consist of $P^-$ together with $(n-1)|P^-|+n$ dummy players, where each dummy player yields value $0$ to every team and is indifferent between all teams\; Let $A^+$ be the allocation obtained by executing \Cref{alg:swap-stable-rr} on $\widehat{P}^+$ with the teams in the forward order $1,2,\dots,n$\label{line:SS+}\; Let $A^-$ be the allocation obtained by executing \Cref{alg:swap-stable-rr} on $\widehat{P}^-$ with the teams in the backward order $n,n-1,\dots,1$\label{line:SS-}\; Return the allocation $A$ which is the union of $A^+$ and $A^-$ with the dummy players removed\; \end{algorithm} \begin{theorem} \label{thm:individual-stable} For any instance, \Cref{alg:SS-EF1} returns an EF1, swap stable, and individually stable allocation in polynomial time. \end{theorem} \begin{proof} We show that \Cref{alg:SS-EF1} has the desired properties. Since \Cref{alg:swap-stable-rr} runs in polynomial time, so does \Cref{alg:SS-EF1}. Note that $|\widehat{P}^+|=n(|P^+|+1)$ and $|\widehat{P}^-|=n(|P^-|+1)$, so every team has at least one dummy player in each of $A^+$ and $A^-$. Also, as \Cref{alg:swap-stable-rr} outputs a swap stable allocation, each of the allocations $A^+$ and $A^-$ is swap stable. In addition, each player $p\in P^+$ is allocated to a team that values her nonnegatively, i.e., $v_i(p)\ge 0$ for all $i\in T$ and $p\in A_i^+$. Indeed, if $v_j(p)\ge 0>v_i(p)$ for some $i,j\in T$ and $p\in A_i^+$, the swap between $p$ and a dummy player in $A_j^+$ would lead to a better matching than $\mu$ in \Cref{alg:swap-stable-rr}, a contradiction. We now prove that the allocation $A$ returned by \Cref{alg:SS-EF1} is EF1, swap stable, and individually stable. \paragraph{EF1} Consider any pair of teams $i$ and $j$ where $i<j$. First, consider $i$'s envy for $j$. In the first phase, $i$ has priority over $j$ and both teams receive the same number of players, so $i$ does not envy $j$ with respect to $A^+$ (i.e., $v_i(A^+_i)\ge v_i(A^+_j)$). Also, as $A^-$ is EF[1,1], there exist $X\subseteq A^-_i$ and $Y\subseteq A^-_j$ such that $|X|, |Y|\le 1$ and $v_i(A^-_i\setminus X)\ge v_i(A^-_j\setminus Y)$. Hence, we obtain \begin{align*} v_i(A_i\setminus X) &= v_i(A^+_i)+v_i(A^-_i\setminus X) \ge v_i(A^+_j)+v_i(A^-_j\setminus Y) \ge v_i(A^+_j)+v_i(A^-_j) =v_i(A_j); \end{align*} here, the second equality holds because any player in $Y$ yields negative value to every team. Thus, $i$ does not envy $j$ by more than one player. Next, consider $j$'s envy for $i$. In the second phase, $j$ has priority over $i$ and both teams receive the same number of players, so $j$ does not envy $i$ with respect to $A^-$ (i.e., $v_j(A^-_j)\ge v_j(A^-_i)$). Also, as $A^+$ is EF[1,1], there exist $X'\subseteq A^+_j$ and $Y'\subseteq A^+_i$ such that $|X'|, |Y'|\le 1$ and $v_j(A^+_j\setminus X')\ge v_j(A^+_i\setminus Y')$. Note that $v_j(X')\ge 0$ since $j$ only receives players with nonnegative value in the first phase. Hence, we obtain \begin{align*} v_j(A_j) &=v_j(A^+_j)+v_j(A^-_j) \ge v_j(A^+_j\setminus X')+v_j(A^-_j) \ge v_j(A^+_i\setminus Y')+v_j(A^-_i) =v_j(A_i\setminus Y'). \end{align*} Thus, $j$ does not envy $i$ by more than one player. \paragraph{Swap stability} Suppose to the contrary that the swap between some $p\in A_i$ and $q\in A_j$ is a beneficial swap in $A$. Since each of $A^+$ and $A^-$ is swap stable, it cannot be that $p,q\in P^+$ or $p,q\in P^-$. Thus, without loss of generality, we may assume that $p\in P^+$ and $q\in P^-$. We have $v_i(p)\ge 0$ but $v_i(q)<0$, which means that the swap is not beneficial for team~$i$, a contradiction. \paragraph{Individual stability} Suppose to the contrary that there is a beneficial deviation of player $p$ from team~$i$ to team~$j$ in $A$. If $p\in P^+$ (resp., $p\in P^-$), the swap between $p$ and a dummy player in $A_j^+$ (resp., $A_j^-$) would be a beneficial swap in $A^+$ (resp., in $A^-$), contradicting the swap stability of $A^+$ (resp., $A^-$). \end{proof} In certain applications, there may be constraints on the allocations that are feasible. For example, the players in a sport could be categorized as defenders, midfielders, and strikers, and each team needs a specific number of players from each category in order to form a proper team. Such \emph{cardinality constraints} have been studied in fair division with one-sided preferences \citep{BiswasBa18,DrorFeSe21,HummelHe22,ShoshanSeHa22}. In particular, \citet{ShoshanSeHa22} showed that for two teams, there exists a polynomial-time algorithm that computes an EF[1,1] allocation that is balanced with respect to each category of players and moreover \emph{feasible-team-PO} (i.e., team-PO within the set of all allocations satisfying the cardinality constraints). By using their allocation as a starting point, we can obtain an outcome that is additionally \emph{feasible-swap stable} (i.e., swap stable when considering only swaps that result in another feasible allocation). \begin{theorem} \label{thm:balanced-category} Consider an instance with two teams where the players are divided into categories. There exists a polynomial-time algorithm that computes an allocation that is balanced with respect to each category, EF[1,1], and feasible-swap stable. \end{theorem} \begin{proof} First, use the polynomial-time algorithm of \citet{ShoshanSeHa22} to compute an allocation that is balanced with respect to each category, EF[1,1], and feasible-team-PO. Then, let players make beneficial swaps until no such swap exists. Because the initial allocation is feasible-team-PO, for every beneficial swap, each team must be indifferent between the two swapped players. Hence, EF[1,1] is maintained, and one of the players is better off while the other player is not worse off. Since determining whether a beneficial swap exists can be done by checking all pairs of players and each player's happiness can improve at most $n$ times, our algorithm terminates with a swap stable allocation in polynomial time. \end{proof} \Cref{prop:balanced-IS} shows that we cannot add individual stability to \Cref{thm:balanced-category}, even when there is only a single category. An interesting question is whether the theorem can be extended to three or more teams. \section{Pareto Optimality} \label{sec:PO} In this section, we turn our attention to Pareto optimality, which is a stronger requirement than both swap stability and individual stability. Firstly, while it is easy to check whether an allocation is swap stable or individually stable by checking for all (polynomial number of) possible beneficial swaps or deviations, the same is not true for PO. \begin{theorem} \label{thm:PO-hardness} Deciding whether an allocation is PO or not is coNP-complete, even for two teams with identical valuations, nonnegative-value players, and a balanced allocation. \end{theorem} \begin{proof} Checking that an allocation is Pareto dominated by another given allocation can be done in polynomial time, so the problem is in coNP. To prove coNP-hardness, we reduce from \textsc{Subset Sum}. An instance of \textsc{Subset Sum} consists of positive integers $b_1,\dots,b_r$ and $s$; it is a Yes-instance if and only if the sum of some subset of the $b_i$'s is exactly $s$. Given an instance $(b_1,\dots,b_r;s)$ of \textsc{Subset Sum}, we create two teams with identical valuations for $2r$ players; the values are $b_1,\dots,b_r,s,0,0,\dots,0$, where $0$ occurs $r-1$ times. Player~$p_{r+1}$ (with value $s$) prefers team~$1$ to team~$2$, while all other players prefer team~$2$ to team~$1$. Consider a balanced allocation in which the first $r$ players are in team~$1$ while the other $r$ players are in team~$2$. We claim that this allocation admits a Pareto improvement if and only if $(b_1,\dots,b_r;s)$ is a Yes-instance. Indeed, if $\sum_{i\in I}b_i = s$ for some $I\subseteq [r]$, then exchanging players $p_i$ for $i\in I$ with $p_{r+1}$ yields a Pareto improvement: both teams are indifferent while all players involved are better off. For the converse direction, note that an exchange that yields a Pareto improvement cannot involve $p_{r+2},\dots,p_{2r}$, so such an exchange must involve $p_{r+1}$ along with a subset $P'\subseteq\{p_1,\dots,p_r\}$. Since the teams have identical valuations, this exchange can be a Pareto improvement only when the $b_i$'s corresponding to $P'$ sum up to exactly $s$. \end{proof} Note that even though the same decision problem is also coNP-complete for two teams with one-sided preferences \citep[Thm.~1]{AzizBiLa19}, it becomes trivial for any number of teams with identical valuations and one-sided preferences, because every allocation is PO in that case. In light of \Cref{thm:PO-hardness}, we cannot hope to reach a PO allocation in polynomial time by starting with an arbitrary allocation and iteratively finding Pareto improvements. However, a PO allocation can be efficiently computed by simply assigning each player to a team with the highest value for her, breaking ties in favor of a team that the player prefers most. Can we then attain PO along with fairness for the teams? The next example shows that round-robin-based algorithms such as \Cref{alg:swap-stable-rr,alg:SS-EF1} do not work, even for two teams with identical valuations and nonnegative-value players. \begin{example} Consider the following instance with $n = 2$ and $m = 8$: \begin{itemize} \item For $i\in [2]$, $v_i(p_1) = v_i(p_2) = 4$, $v_i(p_3) = v_i(p_4) = 3$, $v_i(p_5) = v_i(p_6) = 2$, and $v_i(p_7) = v_i(p_8) = 1$; \item $1\succ_{p_j}2$ for $j\in\{1,2,7,8\}$ and $2\succ_{p_j}1$ for $j\in\{3,4,5,6\}$. \end{itemize} Given this instance, \Cref{alg:swap-stable-rr,alg:SS-EF1} return an allocation $A$ that assigns to each team exactly one player from each of the sets $\{p_1,p_2\}$, $\{p_3,p_4\}$, $\{p_5,p_6\}$, and $\{p_7,p_8\}$. However, $A$ is Pareto dominated by the allocation $A' = (\{p_1,p_2,p_7,p_8\}, \{p_3,p_4,p_5,p_6\})$. \end{example} Nevertheless, we show that for two teams and arbitrary-value players, we can find an EF1 and PO allocation by extending the \emph{generalized adjusted winner procedure} of \citet{AzizCaIg22}. The algorithm is shown as \Cref{alg:gaw}; it operates in the same way as Aziz et al.'s algorithm except that we employ a tie-breaking rule among players with the same ratio between the teams' values in \Cref{line:tiebreak}. \begin{algorithm}[htb] \caption{EF1 and PO algorithm for two teams}\label{alg:gaw} Assign each player with zero value for both teams to a team that she prefers (breaking ties arbitrarily), and assume from now on that $P$ is the set of remaining players\; Let $P^*_1 = \{ p \in P \mid v_1(p) \geq 0, v_2(p)\leq 0 \}$ and $P^*_2 = \{ p \in P \mid v_1(p) \leq 0, v_2(p)\geq 0 \}$\; Let $P^+ = \{ p \in P \mid v_1(p) > 0, v_2(p)> 0 \}$ and $P^- = \{ p \in P \mid v_1(p) < 0, v_2(p) < 0 \}$\; Assume without loss of generality that the players in $P^+\cup P^-$ are $p_1,p_2,\dots,p_r$, and relabel them so that $|v_{1}(p_1)|/|v_{2}(p_1)|\le |v_{1}(p_2)|/|v_{2}(p_2)|\le \dots \le |v_{1}(p_r)|/|v_{2}(p_r)|$.\label{line:tiebreak} Moreover, for players with the same ratio, place them in the following order: (1) those in~$P^+$ who strictly prefer team~$2$ and those in~$P^-$ who strictly prefer team~$1$; (2) those in~$P^+ \cup P^-$ who like both teams equally; (3) those in~$P^+$ who strictly prefer team~$1$ and those in~$P^-$ who strictly prefer team~$2$\; Let $(A_1,A_2)\leftarrow (P^+\cup P^*_1,P^-\cup P^*_2)$\label{line:initial-alloc}\; \For{$i\leftarrow 1,2,\dots,r$}{ \lIf{team~$2$ does not envy team~$1$ by more than one player}{\textbf{break}} \lIf{$p_i\in P^+$}{Move player $p_i$ from team~$1$ to team~$2$ (i.e., $A_1\leftarrow A_1\setminus \{p_i\}$ and $A_2\leftarrow A_2 \cup \{p_i\}$)} \lElse{Move player $p_i$ from team~$2$ to team~$1$ (i.e., $A_1\leftarrow A_1\cup \{p_i\}$ and $A_2\leftarrow A_2 \setminus \{p_i\}$)} } \Return $(A_1,A_2)$\; \end{algorithm} \begin{theorem} \label{thm:gaw} Given any instance with two teams, \Cref{alg:gaw} outputs an allocation that is EF1, PO, and team-PO in time $O(m^2)$. \end{theorem} \begin{proof} Since \Cref{alg:gaw} is a version of the generalized adjusted winner procedure with specific tie-breaking, EF1, team-PO, and the running time follows from the work of \citet{AzizCaIg22}. In particular, the tie-breaking in \Cref{line:tiebreak} takes time $O(m)$ and does not add to the overall running time. It remains to show that the output allocation is PO. First, players with zero value for both teams are already assigned to a team that they prefer, and such players do not affect the utility of either team no matter which team they are assigned to, so we may safely ignore them. We claim that at any point from \Cref{line:initial-alloc} onward, the allocation~$A$ in the algorithm is PO. Suppose to the contrary that there exists a Pareto improvement $A'=(A'_1,A'_2)$ of $A$. Since this intermediate allocation $A$ is team-PO \citep{AzizCaIg22}, we have \begin{align} v_1(A_1)=v_1(A'_1) \quad \text{and} \quad v_2(A_2)=v_2(A'_2).\label{eq:teamPO} \end{align} We can assume that players in $P^*_1$ and $P^*_2$ stay in team~$1$ and $2$, respectively, because transferring such a player makes neither team better off and at least one team worse off, contradicting team-PO. Thus, in the following argument, we assume that only players in $P^+\cup P^-$ are exchanged. In addition, we observe from the proof of \citet{AzizCaIg22} that, for some value $\alpha$, $|v_1(p)|/|v_2(p)|=\alpha$ for all $p\in (A_1\cap A'_2)\cup (A_2\cap A'_1)$ (i.e., the exchanged players between $A$ and $A'$). We derive a contradiction by splitting the argument according to $p_i$, the last player we moved in the for-loop (see \Cref{fig:adjusted_winner}). If we did not move any player, we can apply the argument in Case~1. \begin{figure}[htbp] \begin{center} \begin{tikzpicture}[scale=0.8] \draw[thick,fill=blue!5] (-0.1,0) rectangle (3.9,2); \draw[thick,fill=red!5] (4,0) rectangle (16,2); \draw[thick,fill=blue!5] (16.1,0) rectangle (20.1,2); \draw (8,0) -- (8,2); \draw (12,0) -- (12,2); \draw[above,blue!50!black] (1.9,2) node {$|v_1(p)|/|v_2(p)|<\alpha$}; \draw[above, red!50!black] (10,2) node {$|v_1(p)|/|v_2(p)|=\alpha$}; \draw[above,blue!50!black] (18.1,2) node {$|v_1(p)|/|v_2(p)|>\alpha$}; \draw (6,1.5) node[font=\small] {$p\in P^+\text{ and }2\succ_p 1$}; \draw (6,0.5) node[font=\small] {$p\in P^-\text{ and }1\succ_p 2$}; \draw (10,1.5) node[font=\small] {$p\in P^+\text{ and }1\sim_p 2$}; \draw (10,0.5) node[font=\small] {$p\in P^-\text{ and }1\sim_p 2$}; \draw (14,1.5) node[font=\small] {$p\in P^+\text{ and }1\succ_p 2$}; \draw (14,0.5) node[font=\small] {$p\in P^-\text{ and }2\succ_p 1$}; \draw[<->,below,thick] (-0.1,-.3) -- (7.95,-.3); \draw[<->,below,thick] (8.05,-.3) -- (11.95,-.3); \draw[<->,below,thick] (12.05,-.3) -- (20.1,-.3); \node at (4,-.7) {Case 1}; \node at (10,-.7) {Case 2}; \node at (16,-.7) {Case 3}; \end{tikzpicture} \caption{The order of players in $P^+\cup P^-$.}\label{fig:adjusted_winner} \end{center} \end{figure} \paragraph{Case 1} Suppose that one of the following holds: (i) $|v_1(p_i)|/|v_2(p_i)| < \alpha$; (ii) $|v_1(p_i)|/|v_2(p_i)| = \alpha$ and $p_i \in P^+$ with $2 \succ_{p_i} 1$; or (iii) $|v_1(p_i)|/|v_2(p_i)| = \alpha$ and $p_i \in P^-$ with $1 \succ_{p_i} 2$. We see that every player $p$ in $A_1 \cap P^-$ (resp.,~$A_2 \cap P^+$) with the ratio $|v_1(p)|/|v_2(p)|=\alpha$, if exists, strictly prefers team $1$ (resp.,~team $2$), so such a player cannot be exchanged for a Pareto improvement. Thus, $A_1\cap A'_2$ (resp., $A_2\cap A'_1$) consists only of players in $P^+$ (resp., $P^-$). Because $(A_1\cap A'_2)\cup (A_2\cap A'_1)$ is nonempty, the utility of team~$1$ is lower in $A'$ than in $A$, which implies that $A'$ cannot be a Pareto improvement. \paragraph{Case 2} Suppose that $|v_1(p_i)|/|v_2(p_i)| = \alpha$, $p_i \in P^+\cup P^-$, and $1 \sim_{p_i} 2$. Every player $p$ in $A_1 \cap (P^+\cup P^-)$ with the ratio $\alpha$ weakly prefers team~$1$, while every $p$ in $A_2 \cap (P^+\cup P^-)$ with the ratio $\alpha$ weakly prefers team~$2$. Note that any player in $(A_1\cap A'_2) \cup (A_2\cap A'_1)$ likes both teams equally, because players in $P^+\cup P^-$ with ratio~$\alpha$ and a strict preference are already allocated to their preferred team. This together with the team-PO of $A$ implies that no player is better off in $A'$ than in $A$. Thus, $A'$ is not a Pareto improvement. \paragraph{Case 3} Suppose that one of the following holds: (i) $|v_1(p_i)|/|v_2(p_i)| > \alpha$; (ii) $|v_1(p_i)|/|v_2(p_i)| = \alpha$ and $p_i \in P^+$ with $1 \succ_{p_i} 2$; or (iii) $|v_1(p_i)|/|v_2(p_i)| = \alpha$ and $p_i \in P^-$ with $2 \succ_{p_i} 1$. We see that every player $p$ in $A_1 \cap P^+$ (resp.,~$A_2 \cap P^-$) with the ratio $\alpha$, if exists, strictly prefers team $1$ (resp.,~team $2$), so such a player cannot be exchanged for a Pareto improvement. Thus, $A_1\cap A'_2$ (resp., $A_2\cap A'_1$) consists only of players in $P^-$ (resp., $P^+$). Because $(A_1\cap A'_2)\cup (A_2\cap A'_1)$ is nonempty, the utility of team~$2$ is lower in $A'$ than in $A$, which implies that $A'$ cannot be a Pareto improvement.\\ In each of the three cases, we arrive at a contradiction. Therefore, we conclude that $A$ is PO. \end{proof} Although EF1, PO, and team-PO can be guaranteed simultaneously in the case of two teams, EF1 and player-PO are already incompatible in this case. \begin{proposition} Even for two teams with identical valuations and nonnegative-value players, there does not necessarily exist an EF1 and player-PO allocation. \end{proposition} \begin{proof} Consider an instance with $n = 2$ and $m = 4$ such that each team has value $1$ for each player and every player prefers team~$1$ to team~$2$. The only player-PO allocation assigns all players to team~$1$, but this allocation is not EF1. \end{proof} Note also that since PO is a strengthening of individual stability, \Cref{prop:balanced-IS} implies that we cannot guarantee PO and balancedness simultaneously. We now move on to the general setting where the number of agents can be arbitrary. Unfortunately, even for nonpositive-value players and one-sided preferences, it is unknown whether EF1 and PO can always be satisfied together \citep{EbadianPeSh22,GargMuQi22}. We therefore restrict our attention to nonnegative-value players in the remainder of this section. By building upon a well-known result of \citet{CaragiannisKuMo19}, we can establish the existence of an EF1 and PO allocation. For any allocation~$A$, its \emph{Nash welfare} is defined as the product $\prod_{i\in T}v_i(A_i)$. An allocation is said to be a \emph{maximum Nash welfare (MNW) allocation} if it maximizes the Nash welfare among all allocations.\footnote{If the maximum possible Nash welfare is $0$, an MNW allocation should yield nonzero utility to the largest possible number of teams and, subject to that, maximizes the product of utilities of these teams.} \begin{theorem}\label{thm:MNW} For any instance with nonnegative-value players, there exists an EF1 and PO allocation. \end{theorem} \begin{proof} Let $\mathcal{W}$ be the set of all MNW allocations, and let $A$ be an allocation that is PO within $\mathcal{W}$---such an allocation must exist because otherwise there would be an infinite sequence of Pareto improvements in~$\mathcal{W}$. It is known that every MNW allocation is EF1 \citep{CaragiannisKuMo19}, so $A$ is EF1. We claim that $A$ is PO within the set of all allocations. Suppose to the contrary that there is a Pareto improvement $A'$ of~$A$. Since $v_i(A_i') \ge v_i(A_i)$ for all $i\in T$, $A'$ must also be an MNW allocation. However, this contradicts the assumption that $A$ is PO within $\mathcal{W}$. \end{proof} Given \Cref{thm:MNW}, a natural question is whether there exists a polynomial-time algorithm that computes an allocation guaranteed by the theorem. However, this question is open even for one-sided preferences.\footnote{\citet{BarmanKrVa18} gave a pseudopolynomial-time algorithm for this problem.} We demonstrate next that, in two special cases, such an algorithm exists. The first case is when the teams have \emph{binary valuations}, meaning that each team has value either $0$ or $1$ for each player. In this case, it turns out that \Cref{alg:SS-EF1} computes an EF1 and PO allocation in polynomial time. \begin{theorem} For any instance with binary valuations, \Cref{alg:SS-EF1} computes an EF1 and PO allocation in polynomial time. \end{theorem} \begin{proof} Since EF1 and polynomial-time computability were already shown in the proof of \Cref{thm:individual-stable}, it is sufficient to establish PO. Let $A$ be the outcome of \Cref{alg:SS-EF1}, and suppose to the contrary that there is a Pareto improvement $A'$ of $A$. For each player~$p$, we denote by $A(p)$ and $A'(p)$ the team that $p$ is allocated to in $A$ and $A'$, respectively. Note that $A'(p)\succsim_p A(p)$ for all $p\in P$ and $v_i(A'_i)\ge v_i(A_i)$ for all $i\in T$. We claim that $v_{A(p)}(p)\ge v_{A'(p)}(p)$ for each player $p$. Indeed, if this is not the case, then $v_{A(p)}(p)=0$ and $v_{A'(p)}(p)=1$ and moreover $A'(p)\succsim_p A(p)$. However, a similar proof as that for individual stability in \Cref{thm:individual-stable} shows that such a deviation by $p$ from $A(p)$ to $A'(p)$, which hurts neither $p$ nor $A(p)$ and strictly helps $A'(p)$, cannot exist, thereby proving the claim. Now, since $A'$ is a Pareto improvement of $A$, we have \[\sum_{p\in P}v_{A(p)}(p)=\sum_{i\in T} v_i(A_i)\le \sum_{i\in T}v_i(A'_i)=\sum_{p\in P}v_{A'(p)}(p).\] Since $v_{A(p)}(p)\ge v_{A'(p)}(p)$ for all $p\in P$, we must have $v_{A(p)}(p) = v_{A'(p)}(p)$ for all $p\in P$ and $v_i(A_i) = v_i(A_i')$ for all $i\in T$. Thus, we can construct a better matching than $\mu^*$ in \Cref{alg:swap-stable-rr} on $\widehat{P}^+$ (\Cref{line:SS+} of \Cref{alg:SS-EF1}) by a round-robin sequence in which each team $i$ picks players in $A'_i$ as early as possible, because the Pareto improvement makes no player worse off and at least one player strictly better off. However, this contradicts the definition of $\mu^*$. \end{proof} Next, we focus on the case where there are three teams with identical valuations, and each player prefers one team and is indifferent between the other two teams. For $i\in[3]$, denote by $S_i$ the type of players who prefer team~$i$. \begin{algorithm}[htb] \caption{EF1 and PO algorithm for three teams with the conditions in \Cref{thm:three-teams}}\label{alg:three-teams} \For{$i\leftarrow 1,2,3$}{ Assign each player with zero value of type $S_i$ to team $i$\; } \While{there is at least one unassigned player}{ Let $i$ be a team with the least current value. If there is more than one such team, choose a team~$i$ for which there is at least one unassigned player of type~$S_i$, if possible\; \label{line:team-tiebreak} \lIf{there is an unassigned player of type~$S_i$}{Assign any such player to team~$i$} \lElseIf{there is only one type of players left}{Assign any remaining player to team~$i$} \lElse{Denote the other two types by $S_j$ and $S_k$. Let $f(S_j)$ be the total value that team~$j$ would receive if all unassigned players of type $S_j$ were assigned to it (in addition to the already assigned players in team~$j$), and define $f(S_k)$ analogously for team $k$. Assign a player of the type with the higher $f$-value to team $i$, breaking ties between types arbitrarily and breaking ties among players in favor of higher-value players} } \Return the current allocation $(A_1,A_2,A_3)$\; \end{algorithm} \begin{theorem} \label{thm:three-teams} Suppose that there are $n = 3$ teams with identical valuations, all players yield nonnegative value, and each player belongs to one of the types $S_1,S_2,S_3$, where players of type $S_i$ prefer team~$i$ and are indifferent between the remaining two teams. Then, \Cref{alg:three-teams} computes an EF1 and PO allocation in polynomial time. \end{theorem} \begin{proof} It is clear that the algorithm runs in polynomial time. Since the algorithm always assigns a player to a team~$i$ with the least current value, no other team envies $i$ by more than one player at this point. Hence, the same is true for all pairs of teams during the entire execution of the algorithm, which means that the allocation is EF1.\footnote{Alternatively, the algorithm can be seen as a special case of \citet{LiptonMaMo04}'s \emph{envy cycle elimination algorithm} for identical valuations.} We now show that the allocation is PO. Assume without loss of generality that the types run out in the order $S_1,S_2,S_3$. In particular, team~$1$ may receive players of all three types, team~$2$ may only receive players of type $S_2$ and $S_3$, and team~$3$ may only receive players of type $S_3$. Suppose for contradiction that there exists a Pareto improvement $A' = (A_1',A_2',A_3')$ of the output allocation $A$. Denoting the common team valuation by $v$, we have $v(A_i) = v(A_i')$ for all $i\in\{1,2,3\}$. Since all players with zero value are already with their preferred team in $A$, they must remain with their team in $A'$. Moreover, since $A_3$ only contains players of type~$S_3$ and $v(A_3) = v(A_3')$, it must be that $A_3 = A_3'$. So, from $A$ to $A'$, some players of type $S_3$ are moved from $A_2$ to $A_1'$, while some players of type $S_2$ or $S_3$ (at least one player of type $S_2)$ are moved from $A_1$ to $A_2'$, where both sets of players have the same total value. We will show that every player of type~$S_2$ in~$A_1$ has a strictly larger value than the total value of all players of type~$S_3$ in $A_2$. This is sufficient to obtain the desired contradiction. Consider the moment when the algorithm assigns the last player $p$ of type~$S_2$ to team~$1$. Since players of type~$S_3$ run out after those of type~$S_2$, there is at least one player of type~$S_3$ available at this moment. The choice of the algorithm to assign a player of type~$S_2$ to team~$1$ implies that $f(S_2) \ge f(S_3)$. After $p$'s assignment, $f(S_2)$ decreases by $v(p)$, so it holds that $f(S_3) - f(S_2) \le v(p)$ at this point. Now, because players of type~$S_2$ have not run out before $p$'s assignment, the first assignment of a player $\widehat{p}$ of type~$S_3$ to team~$2$ must occur after $p$'s assignment. Between $p$'s assignment and $\widehat{p}$'s assignment, some players of type~$S_3$ may be assigned to team~$1$---this only decreases $f(S_3)$. Hence, directly before $\widehat{p}$'s assignment, we still have $f(S_3) - f(S_2) \le v(p)$. Moreover, at this point, the partial allocation $A''$ satisfies $v(A''_2) < v(A''_3)$ (if $v(A''_2) = v(A''_3)$, the algorithm should have assigned $\widehat{p}$ to team~$3$ due to the tie-breaking rule in \Cref{line:team-tiebreak}), and only players of type~$S_3$ are left, i.e., $v(A''_2) = f(S_2)$. Therefore, the total value of players of type~$S_3$ assigned to team~$2$ is at most $f(S_3) - v(A''_3) < f(S_3) - v(A''_2) = f(S_3) - f(S_2) \le v(p)$, i.e., this value is strictly less than $v(p)$. On the other hand, by the tie-breaking on players, every player of type~$S_2$ assigned to team~$1$ has value at least $v(p)$. This completes the proof. \end{proof} Finally, we provide a pseudopolynomial-time algorithm for the case where the number of teams is constant. \begin{theorem} \label{thm:const} For any instance with a constant number of teams, each of which has a nonnegative integer value for each player, an EF1 and PO allocation can be computed in pseudopolynomial time. \end{theorem} \begin{proof} Let $v_{\max}\coloneqq\max_{i\in T,\,p\in P}v_i(p)$. We construct a table $H$ which classifies all possible utility vectors for teams that can be attained by allocating the first $j$ players $p_1,\dots,p_j$. The entry $H(\bm{u},j)$ indicates whether there exists an allocation $A$ of players $p_1,\dots,p_j$ such that $\bm{u}=(v_1(A_1),\dots,v_n(A_n))$. Moreover, if there exists such an allocation, $H(\bm{u},j)$ is an allocation that maximizes the players' happiness lexicographically with respect to the reverse player order (i.e., maximizes player $p_j$'s happiness, then maximizes player $p_{j-1}$'s happiness, and so on) among such allocations. Note that the utility of a team for an allocation is an integer belonging to the range $[0,\, m\cdot v_{\max}]$. Hence, the size of the table $H$ is $O(m\cdot (1+m\cdot v_{\max})^n)$, which is pseudopolynomial when $n$ is a constant. We can fill in the entries of the table according to the following recursive formula, where $\chi_i$ denotes the $i$th unit vector of length $n$, that is, the $k$th coordinate is $1$ if $k=i$ and $0$ otherwise. \begin{itemize} \item For $j=0$, the entry $H(\bm{u},j)$ is $(\emptyset,\dots,\emptyset)$ if $\bm{u}=\bm{0}$, and $\bot$ otherwise. \item For $j=1,2,\dots,m$, the entry $H(\bm{u},j)$ is $\bot$ if $H(\bm{u}-v_i(p_j)\cdot\chi_{i},\,j-1)=\bot$ for all $i\in T$. Otherwise, let $i^*$ be a team that $p_j$ prefers the most among the teams $i$ such that $H(\bm{u}-v_i(p_j)\cdot\chi_{i},\,j-1)\ne\bot$. If there are multiple such teams, we select a team that yields a lexicographically optimal allocation for $p_1,\dots,p_{j-1}$ with respect to the reverse player order. Then, the entry is the allocation such that $p_1,\dots,p_{j-1}$ are allocated as in $H(\bm{u}-v_{i^*}(p_j)\cdot\chi_{i^*},\,j-1)$ while $p_j$ is allocated to $i^*$. \end{itemize} The entries $H(\bm{u},j)$ can be computed in $O(nm)$ time each in a bottom-up manner, so we can construct the table $H$ in $O(nm^2\cdot (1+m\cdot v_{\max})^n)$ time. Now, by using the table, we can pick a utility vector $\bm{u}^*$ that corresponding to an MNW allocation (of all $m$ players). Similarly to the proof of \Cref{thm:MNW}, we can conclude that the allocation $H(\bm{u}^*,m)$ is EF1 and PO. \end{proof} \section*{Acknowledgments} This work was partially supported by JSPS KAKENHI Grant Numbers JP17K12646, JP20K19739, JP21K17708, and JP21H03397, by JST PRESTO Grant Numbers JPMJPR2122 and JPMJPR20C1, by Value Exchange Engineering, a joint research project between Mercari, Inc.\ and the RIISE, and by an NUS Start-up Grant. \bibliographystyle{plainnat}
{ "attr-fineweb-edu": 1.584961, "attr-cc_en_topic": 0, "domain": "arxiv" }
BkiUdTA4eILhQGyaEDiM
\section{Introduction} The 3rd edition of the \textbf{Mu}ltimodal \textbf{Se}ntiment Analysis (MuSe) Challenge, addresses three tasks: humour detection and categorical as well as dimensional emotion recognition. Each corresponding sub-challenge utilises a different dataset. In the \acl{MuSe-Humor} (\textbf{\ac{MuSe-Humor}}), participants will detect the presence of humour in football press conference recordings. For \ac{MuSe-Humor}, the novel \acl{Passau-SFCH} (\textbf{\ac{Passau-SFCH}}) dataset is introduced. It features press conference recordings of 10 German Bundesliga football coaches, recorded between August 2017 and November 2017. Initially, the dataset comprises about 18 hours of video, where each of the 10 coaches accounts for at least 90 minutes of data. The subset provided in the challenge still features 11 hours of video. Originally, the data is annotated for direction as well as sentiment of humour following the two-dimensional model of humour proposed in \cite{martin2003individual}. In the challenge, only the presence of humour is to be predicted. For the Emotional Reactions Sub-Challenge (\ac{MuSe-Reaction}), emotional reactions are explored by introducing a first of its kind, large-scale (2,222 subjects, 70+ hours), multi-modal (audio and video) dataset: \textsc{Hume-Reaction}\,. The data was gathered in the wild, with subjects recording their own facial and vocal reactions to a wide range of emotionally evocative videos via their webcam, in a wide variety of at-home recording settings with varying noise conditions. Subjects selected the emotions they experienced in response to each video out of 48 provided categories and rated each selected emotion on a 0-100 intensity scale. In this sub-challenge, participants will apply a multi-output regression to predict the intensities of seven self-reported emotions from the subjects' multi-modal recorded responses: \begin{inparaitem}[]\item Adoration, \item Amusement, \item Anxiety, \item Disgust, \item Empathic Pain, \item Fear, \item Surprise \end{inparaitem}. The \acl{MuSe-Stress} (\textbf{\ac{MuSe-Stress}}) is a regression task on continuous signals for valence and arousal. It is based on the \acl{Ulm-TSST} dataset (\ac{Ulm-TSST}), comprising individuals in a stress-inducing scenario following the \ac{TSST}. This sub-challenge is motivated by the prevalence of stress and its harmful impacts in modern societies~\cite{can2019stress}. In addition to audio, video and textual features, \ac{Ulm-TSST} includes four biological signals captured at a sampling rate of 1\,kHz; EDA, Electrocardiogram (ECG), Respiration (RESP), and heart rate (BPM). \ac{MuSe-Stress} was already part of MuSe 2021~\cite{stappen2021muse}, where it attracted considerable interest. Due to some participants reporting challenges generalising to the test set~\cite{hamieh2021multi, duong2021multi}, we rerun the challenge, allowing participants to submit more predictions than in the previous iteration. We thereby hope to encourage participants to thoroughly explore the robustness of their proposed approaches. Moreover, for this year's \ac{MuSe-Stress} sub-challenge, we use the labels of last year's \textsc{MuSe-Physio\,} sub-challenge as the arousal gold standard. \begin{table}[t!] \footnotesize \caption{ Reported are the number (\#) of unique subjects, and the duration for each sub-challenge hh\,:mm\,:ss. \label{tab:paritioning} } \resizebox{\linewidth}{!}{ \begin{tabular}{lrcrcrc} \toprule & \multicolumn{2}{c}{\textbf{\ac{MuSe-Humor}}} & \multicolumn{2}{c}{\textbf{\ac{MuSe-Reaction}}} & \multicolumn{2}{c}{\textbf{\ac{MuSe-Stress}}}\\ \cmidrule(lr){2-3} \cmidrule(lr){4-5} \cmidrule(lr ){6-7} Partition & \# & Duration & \# & Duration & \# & Duration \\ \midrule Train & 4 & 3\,:52\,:44 & 1334 &51\,:04\,:02 & 41 & 3\,:25\,:56 \\ Development & 3 & 3\,:08\,:12 & 444 & 14\,:59\,:27 & 14 & 1\,:10\,:50 \\ Test & 3 & 3\,:55\,:41 & 444 & 14\,:48\,:21 & 14 & 1\,:10\,:41 \\ \midrule $\sum$ & 10 & 10\,:56\,:37 & 2222 & 74\,:26\,:19 & 69 & 5\,:47\,:27 \\ \bottomrule \end{tabular} } \end{table} By providing the mentioned tasks in the 2022 edition of \ac{MuSe}, we aim for addressing research questions that are of interest to affective computing, machine learning and multimodal signal processing communities and encourage a fusion of their disciplines. Further, we hope that our multimodal challenge can yield new insights into the merits of each of the core modalities, as well as various multimodal fusion approaches. Participants are allowed to use the provided feature sets in the challenge packages and integrate them into their own machine learning frameworks. The paper is structured as follows: \Cref{sec:challenges} introduces the three sub-challenges alongside with the datasets they are based on, and outlines the challenge protocol. Then, pre-processing, provided features, their alignment, and our baseline models are described in \Cref{sec:features}. In \Cref{sec:results}, we present and discuss our baseline results before concluding the paper in \Cref{sec:conclusion}. \section{The Three Sub-Challenges}\label{sec:challenges} In what follows, each sub-challenge and dataset is described in detail, as well as the participation guidelines. \begin{figure}[h!] \centering \subfloat[Valence label distribution]{ \includegraphics[width=.45\columnwidth]{figures/stress/v/density_valence.pdf}} \label{fig:density-valence} \subfloat[Physiological arousal label distribution]{ \includegraphics[width=.45\columnwidth]{figures/stress/a/density_arousal.pdf}} \label{fig:density-arousal} \caption{Frequency distribution in the partitions train, development, and test for the continuous prediction sub-challenge \ac{MuSe-Stress}.} \label{fig:freq} \end{figure} \subsection{The MuSe-Humor Sub-Challenge\label{sec:humor}} Humour is one of the richest and most consequential elements of human behaviour and cognition~\cite{gkorezis2014leader} and thus of high relevance in the context of affective computing and human-computer interaction. As humour can be expressed both verbally and non-verbally, multimodal approaches are especially suited for detecting humour. However, while humour detection is a very active field of research in Natural Language Processing (e.\,g., \cite{chen2018humor, yang2015humor}), only a few multimodal datasets for humour detection exist~\cite{hasan2019ur, mittal2021so, wu2021mumor}. Especially, to the best of our knowledge, there are no datasets for detecting humour in spontaneous, non-staged situations. With \ac{MuSe-Humor}, we intend to address this research gap. In this challenge, the \ac{Passau-SFCH} dataset is utilised. It features video and audio recordings of press conferences of 10 German Bundesliga football coaches, during which the coaches occasionally express humour. The press conferences present natural, live, semi-staged communication of the coaches to and with journalists in the audience. All subjects are male and aged between 30 and 53 years. The dataset is split into speaker independent partitions. The training set includes the videos of 4 coaches, while the development and test partition both comprise the videos of 3 coaches. We only include segments in which the coach is speaking to ensure that humour is detected from the behaviour of the coach and not the audience, e.\,g., laughter. Participants are provided with video, audio, and ASR transcripts of said segments. To obtain the transcripts, we utilise a Wav2Vec2-Large-XLSR~\cite{conneau2020unsupervised} model fine-tuned on the German data in Common Voice~\cite{commonvoice:2020}\footnote{\href{https://huggingface.co/jonatasgrosman/wav2vec2-large-xlsr-53-german}{https://huggingface.co/jonatasgrosman/wav2vec2-large-xlsr-53-german}}. Moreover, manually corrected transcripts are included. Every video was originally labelled by 9 annotators at a 2\,Hz rate indicating sentiment and direction of the humour expressed, as defined by the two-dimensional humour model proposed by~\citet{martin2003individual} in the \ac{HSQ}. For the challenge, we only build upon binary humour labels, i.\,e., indicating if the coach's communication is humourous or not. We obtain a binary label referring to presence or absence of humour using the following three steps. First, we only consider the humour dimension label of sentiment. Second, based on the sentiment labels, we filter out annotators displaying low agreement with other annotators. In order to account for slight lags in annotation signals, we choose to compute the target humour labels for frames of two seconds using a step size of one second. Finally, such a frame is considered as containing humour if at least 3 of the remaining annotators indicate humour within this frame. As a result, 4.9\,\% of the training partition frames, 2.9~\% of the development partition frames, and 3.9\,\% of the test partition frames are labelled as humorous. We deliberately opted for a split in which the humour label is over-represented in the training partition in order to help participants' models with learning. The provided features are extracted at 2\,Hz rates. They can easily be mapped to the 2\,s segments they belong to. For evaluation, the \ac{AUC} metric is utilised, indicating how well a model can separate humorous from non-humorous frames. \subsection{The MuSe-Reaction Sub-Challenge\label{sec:reaction}} Computational approaches for understanding human emotional reactions are of growing interest to researchers~\cite{kumar2021construction,sun2020eev}, with emerging applications ranging from pedagogy~\cite{chalfoun2006predicting} to medicine~\cite{skoraczynski2017predicting}. A person's reaction to a given stimulus can be informative about both the stimulus itself, e.\,g., whether educational material is interesting to a given audience, and about the person, e.\,g., their level of empathy~\cite{tamborini1990reacting} and well-being~\cite{zohar2005effects}. However, progress in developing computational approaches to understand human emotional reactions has been hampered by the limited availability of large-scale datasets of spontaneous emotional reactions. Thus, for the \ac{MuSe-Reaction} sub-challenge, we introduce the \ac{Hume-Reaction} dataset, which consists of more than 70 hours of audio and video data, from 2,222 subjects from the United States (1,138) and South Africa (1,084), aged from 18.5 -- 49.0 years old. The subjects within the dataset are reacting to a wide range of emotionally evocative stimuli (2,185 stimuli in total~\cite{cowen2017self}). Each sample within the dataset has been self-annotated by the subjects themselves for the intensity of 7 emotional expressions in a range from 1-100: \begin{inparaitem}[]\item Adoration, \item Amusement, \item Anxiety, \item Disgust, \item Empathic Pain, \item Fear, \item Surprise\end{inparaitem}. The data is self-recorded via subjects' own webcams in an environment of their choosing, including a wide variety of background, noise, and lighting conditions. Furthermore, different subjects spontaneously reacted with their faces and voices to varying degrees, such that the audio and multi-modal aspects of this sub-challenge will be particularly interesting to incorporate. The organisers also provide labels for detected (energy-based) vocalisations to aid participants in incorporating audio, with a total of 8,064 multi-modal recordings found to include vocalisations. For the \ac{MuSe-Reaction} sub-challenge the aim is to perform a multi-output regression from features extracted from the multi-modal (audio and video) data for the intensity of 7 emotional reaction classes. For this sub-challenge's evaluation, the Pearson's correlations coefficient ($\rho$) is reported as the primary baseline. \subsection{The \ac{MuSe-Stress} Sub-challenge} The \ac{MuSe-Stress} task is based on the multimodal \ac{Ulm-TSST} database, for which subjects were recorded in a stress-inducing, free speech scenario, following the \ac{TSST} protocol~\cite{kirschbaum1993trier}. In the \ac{TSST}, a job interview situation is simulated. Following a short period of preparation, a five-minute free speech oral presentation is given by the subjects. This presentation is supervised by two interviewers, who do not communicate with the subjects during the five minutes. \ac{Ulm-TSST} comprises recordings of such \ac{TSST} presentations of 69 participants (49 of them female), aged between 18 and 39 years. Overall, \ac{Ulm-TSST} includes about 6 hours of data ({cf.\ } \Cref{tab:paritioning}). On the one hand, the dataset features the audio, video, and text modalities. On the other hand, the physiological signals ECG, RESP, and BPM are provided. For extensive experiments on multimodal emotion recognition in \ac{TSST}-based multimodal datasets see~\cite{alice_tsst}. \ac{Ulm-TSST} has been annotated by three raters continuously for the emotional dimensions of valence and arousal, at a 2\,Hz sampling rate. Regarding valence, a gold standard is created by fusing the three corresponding annotator ratings, utilising the \ac{RAAW} method from the MuSe-Toolbox~\cite{stappen2021toolbox}. \ac{RAAW} addresses the difficulties arising when emotion annotations -- subjective in their nature -- are to be combined into a gold standard signal. In short, \ac{RAAW} first tackles the inherent rater lag by aligning the (per annotator) standardised signals via generalised \ac{CTW}~\cite{zhou2015generalized}. After that, the \ac{EWE}~\cite{grimm2005evaluation} is applied to the aligned signals. \ac{EWE} fuses the individual signals using a weighting based on each rater's inter-rater agreement to the mean of all others. A detailed description of \ac{RAAW} can be found in \cite{stappen2021toolbox}. We obtain a mean inter-rater agreement of 0.204 ($\pm$ 0.200) for valence. As for the arousal gold standard, a different approach is employed. Instead of fusing the three annotators' arousal ratings, we take the labels of last year's \textsc{MuSe-Physio\,} sub-challenge as the arousal gold standard. Here, the annotator with lowest inter-rater agreement is discarded and replaced with the subject's electrodermal activity signal (EDA) which is known to indicate emotional arousal~\cite{caruelle2019use}. This signal is downsampled to 2\,Hz and smoothed using a Savitzky–Golay filtering approach (window size of 26 steps) in advance. Then, the two remaining annotators and the preprocessed EDA signal are again fused via \ac{RAAW}, resulting in a mean inter agreement of 0.233 ($\pm 0.289$). This signal is called \emph{physiological arousal} in the following. The motivation to employ this kind of gold standard is to obtain a more objective arousal signal. Considering such an objective criterion for arousal in addition to subjective annotations is especially relevant given the task at hand: in the job interview setting, individuals can be expected to try to hide their arousal, making it more difficult for annotators to recognise it. Detailed experiments on combining subjective annotations with objective physiological signals are provided in~\cite{baird2021physiologically}. \ac{Ulm-TSST} is split into train, development, and test partitions containing 41, 14, and 14 videos, respectively. The split is identical to the split used in last year's challenge. \Cref{fig:freq} shows the distributions of the valence and physiological arousal signals for the dataset. \subsection{Challenge Protocol} All challenge participants are required to complete the \ac{EULA} which is available on the \ac{MuSe} 2022 homepage\footnote{\href{https://www.muse-challenge.org}{https://www.muse-challenge.org}}. Further, the participants must hold an academic affiliation. Each challenge contribution should be followed by a paper that describes the applied methods and provides the obtained results. The peer review process is double-blind. To obtain results on the test set, participants upload their predictions for unknown test labels on CodaLab\footnote{The link will be posted on the MuSe homepage: \href{https://www.muse-challenge.org}{https://www.muse-challenge.org}}. The number of prediction uploads depends on the sub-challenge: for \ac{MuSe-Humor} and \ac{MuSe-Reaction}, up to 5 prediction uploads can be submitted, while for \ac{MuSe-Stress}, up to 20 prediction uploads are allowed. We want to stress that the organisers themselves do not participate as competitors in the challenge. \section{Baseline Features and Model}\label{sec:features} To enable the participants to get started quickly, we provide a set of features extracted from each sub-challenge's video data. More precisely, the provided features include of up to five model-ready video, audio, and linguistic feature sets, depending on the sub-challenge\footnote{Note: Participants are free to use other external resources such as features, datasets, or pretrained networks. The accompanying paper is expected to clearly state and explain the sources and tools used.}. Regarding the label sampling rate, labels refer to 2\,s windows in \ac{MuSe-Humor}. The \ac{MuSe-Stress} data is labelled at a 2\,Hz rate. For \ac{MuSe-Reaction}, there is one label vector of 7 classes per sample. \subsection{Pre-processing} All datasets are split into training, development, and test sets. For all partitions, ratings, speaker independence, and duration are taken into consideration ({cf.\ } \Cref{tab:paritioning}). The videos in \ac{Passau-SFCH} are cut to only include segments in which the respective coach is actually speaking. As the press conference setting can be seen as a dialogue between journalists and the coach, the answers given by each coach provide a natural segmentation of the \ac{Passau-SFCH} data. For \ac{MuSe-Reaction} -- as can be seen in \Cref{tab:paritioning} --, a 60-20-20\% split strategy is applied. There is no additional segmentation applied to clean the data further, each sample contains a single reaction to an emotional stimulus, and labels were normalised per sample to range from [0\,:1]. For further exploration, the participants are also provided with voice activity segments from the samples, which show to contain audio of substantial energy. In the \ac{Ulm-TSST} dataset, we make sure to exclude scenes which are not a part of the TSST setting, e.\,g., the instructor speaking. Moreover, we cut segments in which TSST participants reveal their names. The \ac{Ulm-TSST} dataset is not segmented any further. \subsection{Audio} All audio files are first normalised to -3 decibels and then converted from stereo to mono, at 16\,kHz, 16\,bit. Afterwards, we make use of the two well-established machine learning toolkits \textsc{openSMILE}{}~\cite{eyben2010opensmile} and \textsc{DeepSpectrum}{}~\cite{Amiriparian17-SSC} for expert-designed and deep feature extraction from the audio recordings. Both systems have proved valuable in audio-based \ac{SER} tasks~\cite{Amiriparian22-DAP,Gerczuk22-EAT,Schuller21-TI2}. \subsubsection{\acs{eGeMAPS}} \label{ssec:egemaps} The \textsc{openSMILE}{} toolkit~\cite{eyben2010opensmile}\footnote{\href{https://github.com/audeering/opensmile}{https://github.com/audeering/opensmile}} is used for the extraction of the \ac{eGeMAPS}~\cite{eyben2015geneva}. This feature set which is proven valuable for \ac{SER} tasks~\cite{baird2019can}, also in past MuSe challenges (e.\,g., \cite{vlasenko2021fusion}), includes 88 acoustic features that can capture affective physiological changes in voice production. In \ac{MuSe-Humor}, we use the default configuration to extract the 88 \ac{eGeMAPS} functionals for each two second audio frame. For the audio of \ac{MuSe-Reaction}, the 88 \ac{eGeMAPS} functionals are extracted with a step size of 100\,ms and window size of 1 second. Regarding \ac{MuSe-Stress}, the functionals are obtained with a 2\,Hz rate, using a window size of 5 seconds. \subsubsection{\textsc{DeepSpectrum}} The principle of \textsc{DeepSpectrum}~\cite{Amiriparian17-SSC}\footnote{\href{https://github.com/DeepSpectrum/DeepSpectrum}{https://github.com/DeepSpectrum/DeepSpectrum}} is to utilise pre-trained image \acp{CNN} for the extraction of deep features from visual representations (e.\,g., Mel-spectrograms) of audio signals. The efficacy of \textsc{DeepSpectrum}{} features has been demonstrated for \ac{SER}~\cite{Ottl20-GSE}, sentiment analysis~\cite{Amiriparian17-SAU}, and general audio processing tasks~\cite{Amiriparian20-TCP}. For our \textsc{DeepSpectrum}{} baseline experiments, we use \textsc{DenseNet121}~\cite{huang2017densely} pre-trained on ImageNet~\cite{russakovsky2015imagenet} as the \ac{CNN} backbone. The audio is represented as a Mel-spectrogram with $128$ bands employing the viridis colour mapping. Subsequently, the spectrogram representation is fed into \textsc{DenseNet121}, and the output of the last pooling layer is taken as a $1\,024$-dimensional feature vector. The window size is set to one second, the hop-size to $500$\,ms. \subsection{Video} To extract specific image descriptors related to facial expressions, we make use of two \ac{CNN} architectures: \ac{MTCNN} and \textsc{VGGface 2}{}. We also provide a set of \acp{FAU} obtained from faces of individuals in the datasets. Further, participants are also given the set of extracted faces from the raw frames. In the videos of \ac{MuSe-Humor}, typically more than one face is visible. As this sub-challenge's objective is to predict the expression of humour of the coach, we only provide the faces of the respective coach and the features computed for them. \subsubsection{\acs{MTCNN}} The \ac{MTCNN}~\cite{zhang2016mtcnn} model\footnote{\url{https://github.com/ipazc/mtcnn}}, pre-trained on the data\-sets WIDER FACE~\cite{yang2016wider} and CelebA~\cite{liu2015faceattributes}, is used to detect faces in the videos. Two steps are carried out to filter extracted faces that do not show the coach in \ac{Passau-SFCH}: first, we automatically detect the respective coach's faces using FaceNet\footnote{\href{https://github.com/timesler/facenet-pytorch}{https://github.com/timesler/facenet-pytorch}} embeddings of reference pictures showing the coach. The results of this procedure are then corrected manually. \ac{Ulm-TSST}, in contrast, has a simple, static setting. The camera position is fixed and videos only show the \ac{TSST} subjects who typically do not move much. Similarly, for \ac{MuSe-Reaction}, the video is captured from a fixed webcam. Hence, the performance of \textsc{MTCNN\,} is almost flawless for both \ac{Hume-Reaction} and \ac{Ulm-TSST}. The extracted faces then serve as inputs of the feature extractors \textsc{VGGface 2}{} and \textsc{Py-Feat\,}. \subsubsection{\textsc{VGGface 2}} The purpose of \textsc{VGGface 2}{} is to compute general facial features for the previously extracted faces. \textsc{VGGface 2}{}~\cite{cao2018vggface2} is a dataset for the task of face recognition. It contains 3.3 million faces of about 9,000 different persons. As the dataset is originally intended for supervised facial recognition purposes, models trained on it compute face encodings not directly related to emotion and sentiment. We use a ResNet50~\cite{he2016deep} trained on \textsc{VGGface 2}{}\footnote{\href{https://github.com/WeidiXie/Keras-VGGFace2-ResNet50}{https://github.com/WeidiXie/Keras-VGGFace2-ResNet50}} and detach its classification layer, resulting in a 512-dimensional feature vector output referred to as \textsc{VGGface 2}{} in the following. \subsubsection{FAU} \acp{FAU} as originally proposed by Ekman and Friesen~\cite{ekman1978facial}, are closely related to the expression of emotions. Moreover, there is evidence of them being -- to a degree -- independent of an individual's cultural background~\cite{ekman1979facial}. Hence, detecting \acp{FAU} is a promising and popular approach to the visual prediction of affect-related targets (e.\,g., \cite{mallol2020investigation}). We employ \textsc{Py-Feat\,}\footnote{\url{https://py-feat.org}} to obtain predictions for the presence of 20 different \acp{FAU}. We do not change \textsc{Py-Feat\,}'s default configuration, so that a pre-trained random forest model is used to predict the \acp{FAU}. \subsection{Language: Bert} In recent years, pre-trained Transformer language models account for state-of-the-art results in numerous Natural Language Processing tasks, also in tasks related to affect (e.\,g., ~\cite{Schuller21-TI2}). In general, these models are pretrained in a self-supervised way utilising large amounts of text data. Subsequently, they can be fine-tuned for specific downstream tasks. For the transcripts of \ac{MuSe-Humor} and \ac{MuSe-Stress}, we employ a German version of the BERT (Bidirectional Encoder Representations from Transformers (\textsc{BERT\,})~\cite{devlin2019bert}) model\footnote{\hyperlink{https://huggingface.co/bert-base-german-cased}{https://huggingface.co/bert-base-german-cased}}. No further fine-tuning is applied. For both \ac{Passau-SFCH} and \ac{MuSe-Stress}, we extract the \textsc{BERT\,} token embeddings. Additionally, we obtain 768 dimensional sentence embeddings for all texts in \ac{Passau-SFCH} by using the encodings of \textsc{BERT\,}'s $[CLS]$ token. In all cases, we average the embeddings provided by the last 4 layers of the \textsc{BERT\,} model, following ~\cite{sun2020multi}. \subsection{Alignment} For each task, at least two different modalities are available. Typically, sampling rates per modality may differ. We sample the visual features with a rate of 2\,Hz in all sub-challenges. The only exception is the \acp{FAU} in \ac{MuSe-Reaction}, which are sampled at a 4\,Hz rate. Regarding the audio features (\textsc{DeepSpectrum}{} and \ac{eGeMAPS}{}), we apply the same frequency in \ac{MuSe-Humor} and \ac{MuSe-Stress}, while \ac{eGeMAPS} features are obtained using a step size of 100\,ms in \ac{MuSe-Reaction}. As \textsc{VGGish}{} and \acp{FAU} are only meaningful if the respective frame actually includes a face, we impute frames without a face with zeros. For \ac{MuSe-Humor}, the binary humour label refers to frames of at most 2 seconds length. Hence, each label in \ac{MuSe-Humor} corresponds to at most 4 facial and acoustic feature vectors. 2\,Hz sentence embedding vectors are constructed by assigning every sentence to the 500\,ms frames it corresponds to. If two sentences fall into the same frame, their embeddings are averaged to form the feature for that frame. Regarding \ac{MuSe-Reaction}, there is no alignment needed with labels, as each file is associated to a single vector of 7 emotional reaction labels. For the \ac{MuSe-Stress} sub-challenge, we provide label-aligned features. Hence, these features exactly align with the labels. We apply zero-padding to the frames, where the feature type is absent. Moreover, we downsample the biosignals in \ac{Ulm-TSST} to 2\,Hz, followed by a smoothing utilising a Savitzky-Golay filter. Participants are provided with both the raw signals and the downsampled ones. In both \ac{Ulm-TSST} and \ac{Passau-SFCH}, manual transcripts are available. However, they lack timestamps. Hence, we reconstruct word level timestamps utilising the Montreal Forced Aligner (MFA)~\cite{mcauliffe2017montreal} tool. Here, we employ the German (Prosodylab) model and the German Prosodylab dictionary. The text features are then aligned to the 2\,Hz label signal by repeating each word embedding throughout the determined interval of the corresponding word. In case a 500\,ms frame comprises more than one word, we average over the word embeddings. Zero imputing is applied to parts where subjects do not speak. For the sentence embeddings in \ac{Passau-SFCH} we choose an analogous approach, repeating and, if applicable, averaging the embeddings. \subsection{Baseline Model: LSTM-RNN\label{sec:model}} The sequential nature of the tasks makes recurrent neural networks (RNNs) a natural choice for a fairly simple baseline system. More specifically, we employ a Long Short-Term Memory (LSTM)-RNN. Initially, we train a single model on each of the available feature sets. Regarding \ac{MuSe-Stress}, we separately train a model for both labels, valence and arousal. We conduct an extensive hyperparameter search for each prediction target and feature. We thus optimise the number of RNN layers, the dimensionality of the LSTM's hidden vectors and the learning rate. Of note, we also experiment with both unidirectional and bidirectional LSTMs. The code as well as the configurations found in the hyperparameter search are available in the baseline GitHub repository\footnote{ \href{https://github.com/EIHW/MuSe2022}{https://github.com/EIHW/MuSe2022}}. Each label in \ac{MuSe-Humor} is predicted based on all feature vectors belonging to the corresponding 2\,s window. Hence, the sequence length in the \ac{MuSe-Humor} training process is at most 4 steps. In both \ac{MuSe-Reaction} and \ac{MuSe-Stress}, we make use of a segmentation approach which showed to improve results in previous works~\cite{sun2020multi, stappen2020muse1,stappen2021multimodal}. We find that a segmentation of the training data with a window size of 50\,s (i.\,e., 200 steps) and a hop size of 25\,s (i.e., 100 steps) leads to good results for \ac{MuSe-Stress}. For \ac{MuSe-Reaction} a slightly larger size of 500 steps and a hop size of 250, lead to more robust results. Following the unimodal experiments, in order to combine different modalities, for \ac{MuSe-Humor} and \ac{MuSe-Stress}, we implement a simple late fusion approach. We apply the exact same training procedure as before, now treating the predictions of previously trained unimodal models as input features. In these experiments, we use one configuration per task, without performing a hyperparameter search for every possible modality combination in \ac{MuSe-Stress}. As this approach for late fusion is less suited to a multi-label strategy, we apply an early fusion strategy for \ac{MuSe-Reaction}. For early fusion, we simply concatenate the best performing feature sets for each modality (audio and video), and then train a new model with the same hyperparameters from the uni-modal experiments. The code and configuration for the two fusion methods are also part of the baseline GitHub repository\footnote{\href{https://github.com/EIHW/MuSe2022}{https://github.com/EIHW/MuSe2022}}. Moreover, the repository also includes links to the best model weight files in order to ease reproducibility. \section{Experiments and Baseline Results}\label{sec:results} We apply the model described above for every sub-challenge. In what follows, we discuss the baseline results in more detail. \subsection{\ac{MuSe-Humor}} The results for \ac{MuSe-Humor} are given in~\Cref{tab:humor}. Each result is obtained from running the LSTM using the specified features with 5 different fixed seeds, consistent with the challenge setting. \begin{table}[h!] \caption{Results for \ac{MuSe-Humor}. We report the AUC-Scores for the best among 5 fixed seeds, as well as the mean \ac{AUC}-Scores over these seeds and the corresponding standard deviations.} \resizebox{1\columnwidth}{!}{% \centering \begin{tabular}{lcc} \toprule & \multicolumn{2}{c}{[\ac{AUC}]} \\ Features & \multicolumn{1}{c}{Development} & \multicolumn{1}{c}{Test} \\ \midrule \midrule \multicolumn{3}{l}{\textbf{Audio}} \\ \ac{eGeMAPS} & .6861 (.6731 $\pm$ .0172) & .6952 (.6979 $\pm$ .0098) \\ \textsc{DeepSpectrum} & .7149 (.7100 $\pm$ .0030) & .6547 (.6497 $\pm$ .0102) \\ \midrule \multicolumn{3}{l}{\textbf{Video}} \\ \ac{FAU} & .9071 (.9030 $\pm$ .0028) & .7960 (.7952 $\pm$ .0077) \\ \textsc{VGGface 2} & .9253 (.9225 $\pm$ .0024) & \textbf{.8480} (.8412 $\pm$ .0027) \\ \midrule \multicolumn{3}{l}{\textbf{Text}} \\ \textsc{BERT} & .8270 (.8216 $\pm$ 0045) & .7888 (.7905 $\pm$ 0035) \\ \midrule \multicolumn{3}{l}{\textbf{Late Fusion}} \\ A+T & .8901 (.8895 $\pm$ .0005) & .7804 (.7843 $\pm$ .0037) \\ A+V & .8252 (.8219 $\pm$ .0038) & .6643 (.6633 $\pm$ .0027) \\ T+V & .8908 (.8893 $\pm$ .0015) & .8232 (.8212 $\pm$ .0017) \\ A+T+V & .9033 (.9026 $\pm$ .0006) & .7973 (.7910 $\pm$ .0057) \\ \bottomrule \end{tabular}\label{tab:humor} } \end{table} Evaluating audio and video features for the \ac{MuSe-Humor} sub-challenge shows a clear pattern. The video-based features, \ac{FAU} and \textsc{VGGish}{}, clearly outperform the audio-based features with \textsc{VGGish}{} accounting for an \ac{AUC} of $.8480$ on the test set while \ac{eGeMAPS} only achieves $.6952$ \ac{AUC}. This comes as no surprise, given that the expression of humour is often accompanied by smile or laughter and thus recognisable from facial expressions features. A manual inspection of the humorous segments confirms this intuition. Nevertheless, audio features are able to detect humour, too. Partly, this may be due to the presence of laughter. The performance of text features ($.7888$ on the test set) is slightly worse than for the features based on the video modality, but also better than the performance of the audio features. We find that the sentence-level \textsc{BERT} features outperform the token-level features. With the simple fusion of modalities, the performance is not improved. Specifically, the late fusion approach typically shows worse generalisation to the test data than the unimodal experiments. e.\,g., there is a discrepancy of about $.16$ between mean \ac{AUC} on the development ($.8219$) and test ($.6643$) sets for the combination of audio and video. \subsection{\ac{MuSe-Reaction}} \Cref{tab:reaction} shows the results for the \ac{MuSe-Reaction} baseline. As expected, the audio results are substantially lower than those from the video modality. Of particular note, as it pertains to audio, we see that the emotion-tailored feature-set of \ac{eGeMAPS} performs poorly, almost $0.05$ $\rho$ lower on the development set than the \textsc{DeepSpectrum}{} features. Given that there is limited speech in the data set, this may be why the \textsc{DeepSpectrum}{} features perform better, as due to being spectrogram-based, they can potentially capture a more general acoustic scene and non-speech verbalisations potentially better. % For the video features, the \ac{FAU}s are performing much better on the test set than \textsc{VGGface 2}{} (although both are derived from faces), given the nature of the data being `reactions', it may be that the facial action units are much more dynamic generally, and these features model more accurately the emotional expression occurring within the scene. Interestingly, when we observe the individual class scores, we see that \textit{Amusement} is consistently performing better than all other classes, a finding which is consistent for audio and video features (\ac{eGeMAPS}: .148 $\rho$, and \ac{FAU}: .405 $\rho$). As well as being the most likely class to contain non-verbal communication e.\,g., laughter, this performance may be due to the known ease of modelling highly aroused states of emotional expression~\cite{tzirakis2018end2you}. However, it may also relate to the valence of the emotions as we can see from \Cref{fig:cms_reactions}, the \textit{Disgust} class is the worst performing for \ac{FAU}. It is worth noting that in this case, the early-fusion of the two best-performing feature sets in each modality does not yield any beneficial results. This holds, although we do consider that through the use of a knowledge-based audio approach, we may see more improvement for audio, which may result in stronger performance via fusion. \begin{table}[hbt!] \caption{Results for \ac{MuSe-Reaction}. Reported is the mean Pearson's Correlation Coefficient ($\rho$) for the 7 emotional reaction classes. For each feature and late fusion configuration, the result for the best of 5 fixed seeds is given. The respective mean and standard deviation of the results are provided in parentheses.} \resizebox{1\columnwidth}{!}{% \begin{tabular}{lcc} \toprule & \multicolumn{2}{c}{[$\rho$]} \\ Features & \multicolumn{1}{c}{Development} & \multicolumn{1}{c}{Test} \\ \midrule \midrule \multicolumn{3}{l}{\textbf{Audio}} \\ \ac{eGeMAPS} & .0583 (.0504 $\pm$ .0069) & .0552 (.0479 $\pm$ .0062) \\ \textsc{DeepSpectrum} & .1087 (.0945 $\pm$ .0096) & .0741 (.0663 $\pm$ .0077) \\ \midrule \multicolumn{3}{l}{\textbf{Video}} \\ \ac{FAU}{} & .2840 (.2828 $\pm$ .0016) & \textbf{.2801} (.2777 $\pm$ .0017) \\ \textsc{VGGface 2} & .2488 (.2441 $\pm$ .0027) & .1830 (.1985 $\pm$ .0088) \\ \midrule \multicolumn{3}{l}{\textbf{Early Fusion}} \\ A+V & .2382 (.2350 $\pm$ 0.0016) & .2029 (.2014 $\pm$ .0086) \\ \bottomrule \end{tabular}\label{tab:reaction} } \end{table} \begin{figure*}[h!] \centering \subfloat[Amusement (\ac{FAU} $\rho$ .405)]{\includegraphics[width=0.49\linewidth]{figures/reaction/MuSe-Reaction_TestFAUs-Amusement_0.405.pdf}} \label{cm-amusement} \subfloat[Disgust (\ac{FAU} $\rho$ .171 )]{\includegraphics[width=0.49\linewidth]{figures/reaction/MuSe-Reaction_TestFAUs-Disgust_0.171.pdf}} \label{cm-disgust} \caption{Confusion matrices for the best (Amusement) and worst (Disgust) performing classes for the best test set configurations in \ac{MuSe-Reaction} as reported in~\Cref{tab:reaction}.} \label{fig:cms_reactions} \end{figure*} \subsection{\ac{MuSe-Stress}} \Cref{tab:stress} reports the results obtained for \ac{MuSe-Stress}. Consistent with results reported by some of last year's participants~\cite{hamieh2021multi, duong2021multi}, the results for \ac{MuSe-Stress} partly fail to generalise to the test data. With respect to single modality experiments, this observation is particularly significant for the video features. For example, the best seed for predicting physiological arousal based on Facial Action Units yields a \ac{CCC} of $.5191$ on the development set, but only results in a \ac{CCC} of $.0785$ on the test partition. The audio feature sets, in comparison, achieve better generalisation with the most extreme difference between development and test \ac{CCC} being about $.12$ (for \ac{eGeMAPS} on physiological arousal). Moreover, for both prediction targets, the \textsc{DeepSpectrum}{} audio features perform best among the unimodal approaches with \ac{CCC} values of $.4239$ and $.4931$ on the test sets for physiological arousal and valence, respectively. A surprising aspect of the unimodal results is that audio features yield better results for valence than for arousal, contrary to previous results in the domain of multimodal emotion recognition. For the visual features, no such tendency to work better for one of the two dimensions can be observed: \acp{FAU} lead to better results for predicting valence (mean \ac{CCC} of $.3878$ on the test set) than for physiological arousal ($.1135$); the opposite is true for the \textsc{VGGface 2}{} features ($.0739$ and $.1576$ mean \ac{CCC} on the test set for valence and arousal, respectively). The textual \textsc{BERT\,} features account for higher \acp{CCC} on the development partition for valence (mean \ac{CCC} of $.3221$) than for physiological arousal ($.2828$). Surprisingly, however, for arousal, they generalise better to the test data, while for valence, the mean \textsc{BERT\,} \acp{CCC} drops from $.3221$ to $.1864$ when evaluating on the test set. These partly counterintuitive results may be attributed to the job interview setting. Job interviewees typically suppress nervousness in an attempt to give a relaxed, sovereign impression. This might make the detection of arousal from audio and video difficult. The comparably stable performance of textual features for physiological arousal may be due to correlations between participants pausing their speech for a longer time -- or hardly at all -- and arousal. We find such correlations to exist for several participants. We also experiment with the downsampled biosignals, motivated by some of last year's approaches (\cite{ma2021hybrid,zhang2021multimodal,cai2021multimodal}) to the task which used these signals as a feature. To do so, we concatenate the three signals (BPM, ECG, and respiratory rate) into a three-dimensional feature vector and normalise them. Here, severe generalisation and stability problems can be observed. To give an example, for arousal, the mean \ac{CCC} performance of biosignal features on the development set is $.2793$, but for the test set, it drops to .1095. What is more, the standard deviations obtained with the biosignal results are consistently higher than those of any other modality. Because of these issues and in order not to inflate the number of experiments, we exclude the physiological modality from the late fusion experiments. While valence prediction could not be improved by late fusion, the late fusion of the audio and text modality accounts for the best result on the test set for physiological arousal prediction ($.4761$ CCC), slightly surpassing the late fusion of audio and text ($.4413$) as well as \textsc{DeepSpectrum}{} ($.4239$). For valence, a generalisation issue for late fusion is apparent. To give an example, the late fusion of acoustic and visual features yields by far the best result on the development set ($.6914$) but only achieves a \ac{CCC} of $.4906$ on the test set. % \begin{table*}[h!bt] \caption{Results for \ac{MuSe-Stress}. Reported are the \ac{CCC} values for valence, and physiological arousal. For each feature and late fusion configuration, the result for the best of 20 fixed seeds is given. The respective mean and standard deviation of the results are provided in parentheses. The combined results are the mean of arousal and valence test \acp{CCC} for each feature set. } \resizebox{1.0\linewidth}{!}{% \begin{tabular}{lccccc} \toprule & \multicolumn{2}{c}{\textbf{(Physiological) Arousal}} & \multicolumn{2}{c}{\textbf{Valence}} & \textbf{Combined} \\ & \multicolumn{2}{c}{[\ac{CCC}]} & \multicolumn{2}{c}{[\ac{CCC}]} & [\ac{CCC}] \\ Features & Development & Test & Development & Test & Test \\ \midrule \midrule \multicolumn{6}{l}{\textbf{Audio}} \\ \ac{eGeMAPS} & .4112 (.3168 $\pm$ .0459) & .2975 (.3338 $\pm$ .0836) & .5090 (.4744 $\pm$ .0244) & .3988 (.3932 $\pm$ .0385) & .3482 \\ \textsc{DeepSpectrum} & .4139 (.3433 $\pm$ .0548) & .4239 (.4372 $\pm$ .0323) & .5741 (.5395 $\pm$ .0207) & \textbf{.4931} (.4826 $\pm$ .0324) & \textbf{.4585} \\ \midrule \multicolumn{6}{l}{\textbf{Video}} \\ \ac{FAU} & .5191 (.4257 $\pm$ .0475) & .0785 (.1135 $\pm$ .0335) & .4751 (.3886 $\pm$ .0534) & .2388 (.3878 $\pm$ .0560) & .1918 \\ \textsc{VGGface 2} & .3171 (.2697 $\pm$ .0216) & .2076 (.1576 $\pm$ .0285) & .2637 (.1106 $\pm$ .0739) & .0936 (.1968 $\pm$ .1130) & .1506 \\ \midrule \multicolumn{6}{l}{\textbf{Text}} \\ \textsc{BERT\,} & .3280 (.2828 $\pm$ .0372) & .3504 (.3218 $\pm$ .0423) & .3672 (.3221 $\pm$ .0285) & .1864 (.1872 $\pm$ .0269) & .2683 \\ \midrule \multicolumn{6}{l}{\textbf{Physiological}} \\ BPM + ECG + resp. & .3917 (.2793 $\pm$ .0782) & .1095 (.1151 $\pm$ .0656) & .4361 (.2906 $\pm$ .0787) & .1861 (.2141 $\pm$ .0953) & .1478 \\ \midrule \multicolumn{6}{l}{\textbf{Late Fusion}} \\ A+T & .4478 (.4409 $\pm$ .0038) & \textbf{.4761} (.4716 $\pm$ .0034) & .5243 (.4808 $\pm$ .0161) & .3653 (.3163 $\pm$ .0211) & .4207 \\ A+V & .5440 (.5167 $\pm$ .0142) & .3777 (.4011 $\pm$ .0229) & .6914 (.6811 $\pm$ .0081) & .4906 (.4969 $\pm$ .0184) & .4342 \\ T+V & .4609 (.4425 $\pm$ .0112) & .3303 (.3327 $\pm$ .0112) & .5144 (.4965 $\pm$ .0102) & .2462 (.2364 $\pm$ .0082) & .2883 \\ A+T+V & .5056 (.4940 $\pm$ .0070) & .4413 (.4485 $\pm$ .0125) & .6104 (.5720 $\pm$ .0215) & .3703 (.3455 $\pm$ .0258) & .4058 \\ \bottomrule \end{tabular} } \label{tab:stress} \end{table*} \section{Conclusions}\label{sec:conclusion} This baseline paper introduced MuSe 2022 -- the 3rd Multimodal Sentiment Analysis challenge. MuSe 2022 features three multimodal datasets: \ac{Passau-SFCH} with press conference recordings of football coaches annotated for humour, \ac{MuSe-Reaction} containing emotional reactions to stimuli, and \ac{Ulm-TSST} consisting of recordings of the stress-inducing TSST. The challenge offers three sub-challenges accounting for a wide range of different prediction targets: i) in \ac{MuSe-Humor}, humour in press conferences is to be detected; ii) in \ac{MuSe-Reaction}, the intensities of 7 emotion classes are to be predicted; and iii) \ac{MuSe-Stress} is a regression task on the levels of continuous valence and arousal values in a stressful situation. Similar to previous iterations (\cite{stappen2020muse1, stappen2021muse}), we employed open-source software to provide participants with an array of extracted features in order to facilitate fast development of novel methods. Based on these features, we set transparent and realistic baseline results. Features, code, and raw data are made publicly available. The official baselines on the test sets are as follows: $.8480$ \ac{AUC} for \ac{MuSe-Humor} as achieved using \textsc{VGGface 2}{} features; a mean $\rho$ over all classes of $.2801$ for \ac{MuSe-Reaction} is obtained utilising \ac{FAU}, and a \acp{CCC} of $.4761$ and $.4931$ for physiological arousal and valence, respectively, for \ac{MuSe-Stress}, based on \textsc{DeepSpectrum}{} features and a late fusion of audio and text modalities, respectively. The provided baselines give a first impression on which features and modalities may be suited best for the different tasks. % We believe that more refined methods of combining different modalities and features may lead to significant improvements over the reported baseline results. We hope that MuSe 2022 serves as a stimulating environment for developing and evaluating such novel approaches. \section{Acknowledgments} This project has received funding from the Deutsche Forschungsgemeinschaft (DFG) under grant agreement No.\ 461420398, and the DFG's Reinhart Koselleck project No.\ 442218748 (AUDI0NOMOUS). \begin{acronym} \acro{AReLU}[AReLU]{Attention-based Rectified Linear Unit} \acro{AUC}[AUC]{Area Under the Curve} \acro{CCC}[CCC]{Concordance Correlation Coefficient} \acro{CNN}[CNN]{Convolutional Neural Network} \acrodefplural{CNN}[CNNs]{Convolutional Neural Networks} \acro{CI}[CI]{Confidence Interval} \acrodefplural{CI}[CIs]{Confidence Intervals} \acro{CCS}[CCS]{COVID-19 Cough} \acro{CSS}[CSS]{COVID-19 Speech} \acro{CTW}[CTW]{Canonical Time Warping} \acro{ComParE}[ComParE]{Computational Paralinguistics Challenge} \acrodefplural{ComParE}[ComParE]{Computational Paralinguistics Challenges} \acro{DNN}[DNN]{Deep Neural Network} \acrodefplural{DNNs}[DNNs]{Deep Neural Networks} \acro{DEMoS}[DEMoS]{Database of Elicited Mood in Speech} \acro{eGeMAPS}[\textsc{eGeMAPS}]{extended Geneva Minimalistic Acoustic Parameter Set} \acro{EULA}[EULA]{End User License Agreement} \acro{EWE}[EWE]{Evaluator Weighted Estimator} \acro{FLOP}[FLOP]{Floating Point Operation} \acrodefplural{FLOP}[FLOPs]{Floating Point Operations} \acro{FAU}[FAU]{Facial Action Unit} \acrodefplural{FAU}[FAUs]{Facial Action Units} \acro{GDPR}[GDPR]{General Data Protection Regulation} \acro{HDF}[HDF]{Hierarchical Data Format} \acro{Hume-Reaction}[\textsc{Hume-Reaction}]{Hume-Reaction} \acro{HSQ}[HSQ]{Humor Style Questionnaire} \acro{IEMOCAP}[IEMOCAP]{Interactive Emotional Dyadic Motion Capture} \acro{KSS}[KSS]{Karolinska Sleepiness Scale} \acro{LIME}[LIME]{Local Interpretable Model-agnostic Explanations} \acro{LLD}[LLD]{Low-Level Descriptor} \acrodefplural{LLD}[LLDs]{Low-Level Descriptors} \acro{LSTM}[LSTM]{Long Short-Term Memory} \acro{MIP}[MIP]{Mood Induction Procedure} \acro{MIP}[MIPs]{Mood Induction Procedures} \acro{MLP}[MLP]{Multilayer Perceptron} \acrodefplural{MLP}[MLPs]{Multilayer Perceptrons} \acro{MPSSC}[MPSSC]{Munich-Passau Snore Sound Corpus} \acro{MTCNN}[MTCNN]{Multi-task Cascaded Convolutional Networks} \acro{MuSe}[MuSe]{\textbf{Mu}ltimodal \textbf{Se}ntiment Analysis Challenge} \acro{MuSe-Humor}[\textsc{MuSe-Humor}]{Humor Detection Sub-Challenge} \acro{MuSe-Reaction}[\textsc{MuSe-Reaction}]{Emotional Reactions Sub-Challenge} \acro{MuSe-Stress}[\textsc{MuSe-Stress}]{Emotional Stress Sub-Challenge} \acro{Passau-SFCH}[\textsc{Passau-SFCH}]{Passau Spontaneous Football Coach Humor} \acro{RAAW}[\textsc{RAAW}]{Rater Aligned Annotation Weighting} \acro{RAVDESS}[RAVDESS]{Ryerson Audio-Visual Database of Emotional Speech and Song} \acro{SER}[SER]{Speech Emotion Recognition} \acro{SHAP}[SHAP]{SHapley Additive exPlanations} \acro{SLEEP}[SLEEP]{Düsseldorf Sleepy Language Corpus} \acro{STFT}[STFT]{Short-Time Fourier Transform} \acrodefplural{STFT}[STFTs]{Short-Time Fourier Transforms} \acro{SVM}[SVM]{Support Vector Machine} \acro{TF}[TF]{TensorFlow} \acro{TSST}[TSST]{Trier Social Stress Test} \acro{TNR}[TNR]{True Negative Rate} \acro{TPR}[TPR]{True Positive Rate} \acro{UAR}[UAR]{Unweighted Average Recall} \acro{Ulm-TSST}[\textsc{Ulm-TSST}]{Ulm-Trier Social Stress Test} \acrodefplural{UAR}[UARs]{Unweighted Average Recall} \end{acronym} \clearpage \footnotesize \bibliographystyle{ACM-Reference-Format}
{ "attr-fineweb-edu": 1.774414, "attr-cc_en_topic": 0, "domain": "arxiv" }
BkiUdNY5qg5A6DwoMORx
\section{Introduction} A fundamental challenge in basketball performance evaluation is the team nature of the game. Contributions to team success occur in the context of a five-player lineup, and isolating the specific contribution of an individual is a difficult problem with a considerable history. Among the many approaches to the player evaluation problem are well-known metrics like player efficiency rating (PER), wins produced (WP), adjusted plus-minus (APM), box plus-minus (BPM), win shares (WS), value over replacement player (VORP), and offensive and defensive ratings (OR and DR) to name only a few \citep{BasketballReferenceGlossary}. While these individual player metrics help create a more complete understanding of player value, some contributions remain elusive. Setting good screens, ability to draw defenders, individual defense, and off-ball movement are all examples of important contributions that are difficult to measure and quantify. In part, these contributions are elusive because they often facilitate the success of a teammate who ultimately reaps the statistical benefit. Even beyond contributions that are difficult to quantify, the broader question of chemistry between players is a critical aspect of team success or failure. It is widely accepted that some groups of players work better together than others, creating synergistic lineups that transcend the sum of their individual parts. Indeed, finding (or fostering) these synergistic groups of players is fundamental to the role of a general manager or coach. There are, however, far fewer analytic approaches to identifying and quantifying these synergies between players. Such positive or negative effects among teammates represent an important, but much less well understood, aspect of team basketball. In this paper we propose spectral analysis \citep{Diaconis:1988} as a novel approach to identifying and quantifying group effects in NBA play-by-play data. Spectral analysis is based on algebraic signal processing, a methodology that has garnered increasing attention from the machine learning community \citep{kakarala:2011, Kondor:2007, Kondor:2012}, and is particularly well suited to take advantage of the underlying structure of basketball data. The methodology can be understood as a generalization of traditional Fourier analysis, an approach whose centrality in a host of scientific and applied data analysis problems is well-known, and speaks to the promise of its application in new contexts from social choice to genetic epistasis and more \citep{Paudel:2013,Jurman:2008,Lawson:2006,Uminsky:2018,Uminsky:2019}. The premise of spectral analysis in a basketball context is simple: team success (appropriately measured) can be understood as a function on lineups. Such functions have rich structure which can be analyzed and exploited for data analytic insights. Previous work in basketball analytics has addressed similar questions from a different perspective. Both \cite{kuehn2016accounting} and \cite{maymin2013nba} studied lineup synergies on the level of player skills. In \cite{maymin2013nba} the authors used a probabilistic framework for game events, along with simulated games to evaluate full-lineup synergies and find trades that could benefit both teams by creating a better fit on both sides. In \cite{kuehn2016accounting}, on the other hand, the author used a probabilistic model to determine complementary skill categories that suggest the effect of a player in the context of a specific lineup. Work in \cite{grassetti2019estimation} and \cite{grassetti2019play} modeled lineup and player effects in the Italian Basketball League (Serie A1) based on an adjusted plus-minus framework. Our approach is different in several respects. First, we study synergies on the level of specific player groups independent of particular skill sets. We also ignore individual production statistics and infer synergies directly from observed team success, as defined below. As a consequence of this approach, our analysis is roster constrained-- we don't suggest trades based on prospective synergies across teams. We can, however, suggest groupings of players that allow for more optimal lineups within the context of available players, a central problem in the course of an NBA game or season. Further, our approach uses orthogonality to distinguish between the contributions of a group and nested subgroups. So, for example, a group of three players that appears to exhibit positive synergies may, in fact, be benefiting from strong individual and pair contributions while the triple of players adds no particular value as a pure triple. We tease apart these higher-order correlations. Furthermore, spectral analysis is not a model-based approach. As such, our methodology is notably free of modeling assumptions--rather than fitting the data, spectral analysis reports the observed data, albeit projected into a new basis with new information. Thus, it is a direct translation of what actually happened on the court (as we make precise below). As such, our methodology is at least complementary to existing work, and is also promising in presenting a new approach to understanding and appreciating the nuances of team basketball. Finally, we note that while the methodology that underlies the spectral analysis approach is challenging, the resulting intuitions and insights are readily approachable. In what follows, we have stripped the mathematical details to a minimum and relegated them to references for the interested reader. The analysis, on the other hand, shows promise as a new and practical approach to a difficult problem in basketball analytics. \section{Data} \label{Data} We start with lineup level play-by-play data from the 2015-2016 NBA season. Such play-by-play data is publically available on ESPN.com or NBA.com, or can be purchased from websites like bigdataball.com, already processed into csv format. For a given team, we restrict attention to the 15 players on the roster having the most possessions played on the season, and filter the play-by-play data to periods of games involving only those players. Next, we compute the aggregated raw plus-minus (PM) for each lineup. Suppose lineup $L$ plays against opposing lineup $M$ during a period of gameplay with no substitutions. We compute the points scored by each lineup, as well as the number of possessions for both lineups during that stretch of play. For example, if lineup $L$ scored 6 points in 3 possessions and lineup $M$ scored 3 points in 2 possessions, then their plus-minus is computed as the difference in points-per-possession times possessions. Thus, for $L$ the plus-minus is $(\frac{6}{3} - \frac{3}{2})3 = 1.5$ while for $M$ the plus-minus is $(\frac{3}{2} - \frac{6}{3})2 = -1$. Summing over all of lineup $L$'s possessions gives the total aggregate plus-minus for lineup $L$ which we denote by $\text{pm}_L$. Since a lineup consists of 5 players on the floor, there are $3003={15\choose 5}$ possible lineups, though most see little or no playing time. We thus naturally arrive at a function on lineups by associating with $L$ the value of that lineup's aggregate plus-minus, and write $f(L)=\text{pm}_L$. We call $f$ the team success function. This particular success metric has the advantage of being simple and intuitive. Moreover, by summing over all lineups we recover the value of the team's cumulative plus-minus, which is highly correlated with winning percentage. The function $f$ will serve as the foundation for our analysis, but we note that for what follows, any quantitative measure of a lineup's success could be substituted in its place. \section{Methodology} \label{methodology} Our goal is now to decompose the function $f$ in a way that sheds light on the various group contributions to team success. The groups of interest are generalized lineups, meaning groups of all sizes, from individual players to pairs, triples, groups of four, and full five-player lineups. Our primary tool is spectral analysis, which uses the language of representation theory \citep{serre2012linear} to understand functions on lineups. Observe that a full lineup is an unordered set of five players. Any reshuffling of the five players on the floor, or the ten on the bench, does not change the lineup under consideration. Moreover, given a particular lineup, a permutation (or reshuffling) of the fifteen players on the team will result in a new lineup. The set of such permutations has a rich structure as a mathematical group. In this case, all possible permutations of fifteen players are described by $S_{15}$: the symmetric group on 15 items \citep{Dummit:2004}. Furthermore, the set $X$ of five-player lineups naturally reflects this group structure (as a homogeneous space). Most importantly for our purposes, the set of functions on lineups has robust structure with respect to the natural action of permutations on functions. This structure is well understood and can be exploited for data analytic insights as we show below. By way of analogy, just as traditional Fourier analysis looks to decompose a time series into periodicities that can reveal a hidden structure (weekly or seasonal trends, say), our decomposition of $f$ will reveal group effects in lineup-level data. Let $L(X)$ denote the collection of all real valued functions on five-player lineups. This set is a vector space with the usual notions of sum of functions, multiplication by scalars, and an inner product given by \begin{equation} \label{inner product} \langle g,h \rangle =\frac{1}{|X|}\sum_{x\in X} g(x)h(x). \end{equation} The dimension of $L(X)$ is equal to the number of lineups, $3003={15\choose 5}$. In light of the permutation group's action on $L(X)$ as mentioned above, $L(X)$ admits a natural (invariant and irreducible) decomposition as follows: \begin{equation} \label{decomposition} L(X)=V_0\oplus V_1 \oplus V_2\oplus V_3\oplus V_4\oplus V_5 . \end{equation} Each $V_i$, with $0\le i \le 5$ is a vector subspace with data analytic significance. Rather than give a self contained treatment of this decomposition, we refer to \cite{Diaconis:1988} and \cite{Dummit:2004}, and here, simply note that each space is spanned by the matrix coefficients of the irreducible representations of the group $S_{15}$ associated with Young tableaux of shape $(10,5)$. We can gain some intuition for the decomposition by considering the lower-order spaces as follows. An explicit computation of the decomposition is given in section \ref{sec:toy example section} below for a toy example. Take $\delta_L$ to be the indicator function of a fixed lineup $L$, so that $\delta_L(L)=1$, while $\delta_{L}(L')=0$ for any other lineup $L'$. As above, $X$ is the set of all possible lineups, and \begin{equation} \label{meanspace} \delta=\sum_{L\in X}\delta_L. \end{equation} If we act on the function $\delta$ by reshuffling lineups (this is the action of the permutation group $S_{15}$), we see that while the terms in the summation in (\ref{meanspace}) get reordered, the function itself remains unchanged. (See section \ref{sec:toy example section} below for details.) Thus, the one-dimensional space spanned by $\delta$ is invariant under lineup reshuffling and represents the mean value of the function $f$ since we can write $f=c\delta+(f-c\delta)$. Here, $c$ is just the average value of $f$ and $c\delta$ is the best possible constant approximation to $f$. The function $f-c\delta$ represents the original data, but now centered with mean zero, and orthogonal to the space of constant functions with respect to the inner product in (\ref{inner product}). The space spanned by $\delta$ is $V_0$ in (\ref{decomposition}). To understand $V_1$, we start with indicator functions for individual players. Given a player $i$, define $\delta_i=\sum_{L\in\mathcal{L}_i}\delta_{L}-m\delta$ where the sum is over all lineups that include player $i$ and four other players, and $m$ is a constant chosen so that $\delta_i$ is orthogonal to $\delta$. One can show that the space spanned by $\{\delta_1,\delta_2,\ldots\delta_{15}\}$ is again stable under lineup reshuffling. (Though the set of individual indicator functions is linearly dependent, and only spans a 14-dimensional space as we'll see below.) The decomposition continues in an analogous way, though the computations become more involved. Several computational approaches are described in \cite{Diaconis:1988} and \cite{maslen:2003}. In our case of the symmetric group $S_{15}$ acting on lineups, we employ the method in \cite{maslen:2003}, which involves first computing the adjacency matrix of an associated {\it Johnson graph} $J(15,5)$. It turns out that $J(15,5)$ has 6 eigenvalues, each of which is associated with one of the effect spaces: zero (mean), and first through fifth-order spaces. Specifically, the largest eigenvalue is simple and is associated with the one-dimensional mean space; the second largest eigenvalue is associated with the first-order space, etc. It is now a matter of computing an eigenbasis for each space, and using it to project the data vector onto each eigenspace to give the orthogonal decomposition used in (\ref{decomposition}). It is also worth noting that spectral analysis includes the traditional analysis of variance as a special case, a connection suggested by the discussion above and further explained in \cite{Diaconis:1988}. The decomposition in (\ref{decomposition}) is particularly useful for two reasons. First, each $V_i$ can be interpreted as the space of functions encoding $i$-th order effects. For instance, one can see that $V_1$ is naturally understood as encoding first-order individual effects beyond the mean. Thus, the projection of $f$ onto $V_1$ can be understood as that part of team success $f$ attributable to the contributions of individual players. Similarly $V_2$ includes effects attributable to pure player pairs (individual contributions have been removed), and the corresponding projection of $f$ in $V_2$ gives the contributions of those pairs to team success. $V_3$ encodes contributions of groups of three, and so on. These interpretations follow from the fact that each subspace in the decomposition of $L(X)$ is invariant under the natural reshuffling action of $S_{15}$ on lineups. It is also worth noticing that the lineup success function is completely recovered via its projections onto the order subspaces in (\ref{decomposition}). If we write $f_i$ for the projection of $f$ onto $V_i$, then $f=f_0+f_1+f_2+f_3+f_4+f_5$. As such, the spectral decomposition gives a complete description of the original data set with respect to a new basis grounded in group contributions. Secondly, the decomposition in (\ref{decomposition}) is orthogonal (signified by the $\oplus$ notation). From a data analytic perspective, this means that there is no overlap among the spaces, and group effects are independent. Thus, for instance, a contribution attributed to a group of three players can be understood as a pure third-order contribution. All constituent pair and individual contributions have been removed and quantified separately in the appropriate lower-order spaces. We thus avoid erroneous attribution of success due to multicollinearity among groups. For example, is a big three really adding value as a triple, or is its success better understood as a strong pair plus an individual? The spectral decomposition in (\ref{decomposition}) provides a quantitative basis for answering such questions. The advantage of the orthogonality of the spaces in (\ref{decomposition}), however, presents a challenge with respect to direct interpretation of contributions for particular groups. This is evident when considering the dimension of each of the respective effect spaces in Table \ref{ProjSpaceDimensions}, which is strictly smaller than the number of groups of that size we might wish to analyze. \begin{table} \centering \begin{tabular}{ccc} \hline Space & Dimension & Number of Groups\\ \hline $V_0$ & 1 & --\\ $V_1$ & 14 & 15\\ $V_2$ & 90 & 105\\ $V_3$ & 350 &455\\ $V_4$ & 910 & 1365\\ $V_5$ & 1638 & 3003\\ \hline \end{tabular} \caption{Dimension of each effect space, along with the number of natural groups of each size.} \label{ProjSpaceDimensions} \end{table} Since we have rosters of fifteen players, there are fifteen individual contributions to consider. The space $V_1$, however, is 14-dimensional. Similarly, while $V_2$ includes all of the contributions to $f$ attributable to pairs of players, it does so in a 90-dimensional space despite the fact that there are $105={15\choose 2}$ natural pairs of players to consider. The third-order space $V_3$ has dimension 350 while there are 455 player triples, and so on. We deal with this issue using Mallows' method of following easily interpretable vectors as in \cite{Diaconis:1988}. Let $g$ be a group of players. For example, if players are labeled 1 through 15, then a particular triple might be $g=\{1,2,7\}$. Let $\phi_g$ be the indicator function associated with $g$, i.e., the function that takes the value 1 when all three players 1, 2, and 7 are in a lineup, and outputs 0 otherwise. The function $\phi_g$ is intuitively associated with the success of the group $g$ (though it is not invariant under reshuffling and is not orthogonal to nested lower-order groups). To quantify the contribution of $g$ (as a pure triple) to the success of the team as measured by $f$, project both $\phi_g$ and $f$ onto $V_3$ and take the inner product of the projections: $\langle pr_{V_3}(\phi_g), pr_{V_3}(f)\rangle = \langle pr_{V_3}(\phi_g), f_3\rangle$. After projecting onto $V_3$ we are left with only the third-order components of $\phi_g$ and $f$. The resulting inner product is a weighted cosine similarity that indicates the extent to which the pure triple $g$ is correlated with the team's success $f$. Larger values of this inner product reflect a stronger synergy between the triple of players $\{1,2,7\}$, while a negative value indicates that, after removing the contributions of the constituent individuals and pairs, spectral analysis finds this particular group of three ineffective. In the results below we show how this information might be useful in evaluating lineups. \section{Two-On-Two Basketball} \label{sec:toy example section} To ground the ideas of the previous section we present a small-scale example in detail. Consider a version of basketball where a team consists of 5 players, two of which play at any given moment. The set of possible lineups consists of the ten unordered pairs $\{i,j\}$ with $i,j\in\{1,2,3,4,5\}$ and $i\ne j$. The symmetric group $S_5$ acts on lineups by relabeling, and we extend this action to functions on lineups as follows. Given a permutation $\pi$, a function $h$, and a lineup $L$, define \begin{equation} (\pi\cdot h)(L)=h(\pi^{-1}L). \end{equation} Therefore, if $\pi$ is the permutation $(123)$, taking player 1 to player 2, player 2 to player 3, player 3 to player 1, and leaving everyone else fixed, and if $L$ is the lineup $\{1,3\}$, then \begin{equation} (\pi\cdot h)(L) = h(\pi^{-1}\{1,3\}) = h(\{3,2\}). \end{equation} The use of the inverse is necessary to ensure that the action on functions respects the operation in the group, that is, so that $(\tau\pi)\cdot h = \tau\cdot (\pi\cdot h)$ \citep{Dummit:2004}. Following a season of play, we obtain a success function that gives the plus-minus (or other success metric) of each lineup. We might observe a function like that in Table \ref{ToyLineupFunction}. \begin{table}[ht] \centering \begin{tabular}{cc|cc} \hline $L$ & $f(L)$ & $L$ & $f(L)$\\ \hline $\{1,2\}$ &22 &$\{2,4\}$& 35\\ $\{1,3\}$ &18 &$\{2,5\}$& 26\\ $\{1,4\}$ &3 &$\{3,4\}$& 84\\ $\{1,5\}$ &58 &$\{3,5\}$& 25\\ $\{2,3\}$ &93 &$\{4,5\}$& 2\\ \end{tabular} \caption{Success function for two-player lineups.} \label{ToyLineupFunction} \end{table} Summing $f(L)$ over all lineups that include a particular player gives individual raw plus-minus as in Table \ref{ToyIndPM}. \begin{table}[ht] \centering \begin{tabular}{ccc} \hline Player & PM &Rank \\ \hline 1 & 101&5 \\ 2 & 176&2\\ 3 & 220&1\\ 4 & 124&3\\ 5 & 111& 4\\ \end{tabular} \caption{Preliminary analysis of sample team using individual plus-minus (PM), which is the sum of the lineup PM over lineups that include a given individual.} \label{ToyIndPM} \end{table} Player 3 is the top rated individual, followed by 2, 4, 5, and 1. Lineup rankings are given by $f(L)$ itself, which shows $\{2,3\},\{3,4\}$, and $\{1,5\}$ as the top three. Now compare the analysis above with spectral analysis. In this context the vector space of functions on lineups is 10-dimensional and has a basis consisting of vectors $\delta_{\{i,j\}}$ that assign the value 1 to lineup $\{i,j\}$ and 0 to all other lineups. The decomposition in (\ref{decomposition}) becomes \begin{equation} \label{toy decomposition} V=V_0\oplus V_1\oplus V_2. \end{equation} Define $\delta = \sum_{\{i,j\}}\delta_{\{i,j\}}$. The span of $\delta$ is the one-dimensional subspace $V_0$ of constant functions. Moreover, $V_0$ is $S_5$ invariant since for any relabeling of players given by $\pi$, we have $\pi\cdot\delta =\delta$. Given a function $f$ in $V$, its projection $f_0$ on $V_0$ will assigns to each lineup the average value of $f$, in this case 36.6. First order (or individual) effects beyond the mean are in encoded in $V_1$. Explicitly, define $\delta_1 = \sum_i\delta_{\{1,i\}}-\frac{2}{5}\delta$, with $\delta_2,\delta_3,$ and $\delta_4$ defined analogously. One can check that the 4-dimensional vector space spanned by $\{\delta_1,\delta_2,\delta_3,\delta_4\}$, is $S_5$ invariant, and is orthogonal to $V_0$. Since the mean has been subtracted out and accounted for in $V_0$, a vector in $V_1$ represents a pure first order effect. Note that $\delta_5(x)=\sum_i\delta_{\{5,i\}}-\frac{2}{5}\delta$ can be written $\delta_5=-\delta_1-\delta_2-\delta_3-\delta_4$. Consequently, $V_1$ is 4-dimensional even though there are five natural first order effects to consider: one for each player. Finally, the orthogonal complement of $V_0\bigoplus V_1$ is the 5-dimensional $S_5$ invariant subspace $V_2$. $V_2$ gives the contribution to $f$ from {\it pure pairs}, or pure second order effects after the mean and individual contributions are removed. The three subspaces $V_0$, $V_1$, and $V_2$ are all irreducible since none contains a nontrivial $S_5$ invariant subspace. We can now project $f$ onto $V_0, V_1$, and $V_2$. All together we have $f=f_0+f_1+f_2$: \begin{equation} \label{toy_decomposition} f\left( \begin{array}{c} \{1,2\}\\ \{1,3\}\\ \{1,4\}\\ \{1,5\}\\ \{2,3\}\\ \{2,4\}\\ \{2,5\}\\ \{3,4\}\\ \{3,5\}\\ \{4,5\}\\ \end{array}\right) =\left[ \begin{array}{r} 22 \\ 18 \\ 3 \\ 58 \\ 93\\ 35 \\ 26 \\ 84 \\ 25 \\ 2 \\ \end{array} \right] = \left[ \begin{array}{c} 36.6 \\ 36.6 \\ 36.6 \\ 36.6 \\ 36.6\\ 36.6 \\ 36.6 \\ 36.6 \\ 36.6 \\ 36.6 \\ \end{array} \right] + \left[ \begin{array}{r} -5.27 \\ 9.40 \\ -22.60 \\ -26.93 \\ 34.40\\ 2.40 \\ -1.93 \\ 17.07 \\ 12.73 \\ -19.27 \\ \end{array}\right] + \left[ \begin{array}{r} -9.33 \\ -28.00 \\ -11.00 \\ 48.33 \\ 22.00\\ -4.00 \\ -8.67 \\ 30.33 \\ -24.33 \\ -15.33 \\ \end{array}\right] \end{equation} Turning to the question of interpretability, section \ref{methodology} proposes Mallows' method of using readily interpretable vectors projected into the appropriate effect space. To that end, the individual indicator function $\phi_{\{2\}}=\delta_{\{1,2\}}+\delta_{\{2,3\}}+\delta_{\{2,4\}}+\delta_{\{2,5\}}$ is naturally associated with player 2: $\phi_{\{2\}}(L)=1$ when player 2 is in $L$ and is 0 otherwise. We quantify the effect of player $2$ by projecting $\phi_{\{2\}}$ and $f$ into $V_1$, and then taking the dot product of the projections. For a lineup like $\{2,3\}$, we take the dot product of the projections of the lineup indicator function $\delta_{\{2,3\}}$, and $f$, in $V_2$. Note that player 2's raw plus-minus is the inner product of $10\cdot f$ with the interpretable function $\phi_{\{2\}}$. Similarly $f(\{i,j\})$ is $10\cdot\langle f,\phi_{\{i,j\}}\rangle$. The key difference is that spectral analysis uses Mallow's Method {\it after} projecting onto the orthogonal subspaces in (\ref{toy decomposition}). Contributions from spectral analysis as measured by Mallows' method are given in Table \ref{toyspec} for both individuals and (two-player) lineups. \begin{table}[ht] {\footnotesize \centering \label{toyspec} \begin{tabular}{cc?cccc|cccc} \hline Individual& Spec&Pair &Spec &Rank &$f$ Rank& Pair & Spec &Rank & $f$ Rank\\ \hline \{1\}&-45.4& \{1,2\} &-9.3 & 6 &7&\{2,4\}&-4&4&4\\ \{2\}&29.6&\{1,3\} &-28 &10 &8&\{2,5\} &-8.7&5&5\\ \{3\}&73.6&\{1,4\}& -11 & 7 &9&\{3,4\}& 30.3&2&2\\ \{4\}&-22.4&\{1,5\}&48.3 & 1 &3&\{3,5\}& -24.3&9&6\\ \{5\}&-35.4&\{2,3\}& 22 &3& 1&\{4,5\} & -24&8&10\\ \end{tabular} \caption{Spectral value (Spec) for each individual player and two-player lineup, and rank of each lineup, along with the preliminary rank given by $f$.} } \end{table} The table also includes both the spectral and preliminary (based on $f$) rankings of each lineup. Note that lineup $\{2,3\}$ drops from the best pair to the third best pure pair. Once we account for the contributions of players two and three as individuals, the lineup is not nearly as strong as it appears in the preliminary analysis. We find stronger pair effects from lineups $\{1,5\}$ and $\{3,4\}$. All remaining lineups are essentially ineffective in that their success can be attributed to the success of the constituent individuals rather than the pairing. Interesting questions immediately arise. What aspects of player four's game result in a more effective pairing with player three, the team's star individual player, than the pairing of three with two, the team's second best individual? What is behind the success of the $\{1,5\}$ lineup? These considerations are relevant to team construction, personnel considerations, and substitution patterns. We pursue this type of analysis further in the context of an actual NBA team below. {\iffalse section{Comparison with a Modeling Approach on Simulated Data} \label{sec:Simulation} Before turning to actual NBA data we briefly compare spectral analysis with a linear modeling approach on a simulated data set. Starting from the modeling framework in \citep{Sill:2010}, which used ridge regression to improve the adjusted plus-minus metric to estimate individual player contributions, it is natural to consider extending this framework to evaluate groups. Examples of this approach can be found in \citep{grassetti2019estimation} and \citep{grassetti2019play}, which studied the relationship between individual and lineup contributions. While a thorough comparison of modeling approaches and spectral analysis is left to future work, we present a simple example here to illustrate the fact that the approaches can differ significantly. Along these lines, we simulate observations for a hypothetical team with ten players labeled $A$, $B$, $C$, \ldots, $J$. First we generate a baseline of three hundred observations by randomly choosing a lineup of five players and giving it a value randomly chosen from a normal distribution with mean 10 and standard deviation 3. Next we insert a signal as follows. We add thirty observations that include player $A$ and four randomly chosen teammates valued at 25, and we add thirty observations that include player $B$ and four randomly chosen teammates valued at 20. We introduce a pair effect by adding thirty observations including players $A$, $B$ and three random teammates valued at 35, and a somewhat sparser signal for triples by adding only 20 observations including $A$, $B$, $C$ and two random teammates, valued at 50. We analyze the data using ridge regression and spectral analysis to identify individual and group effects. Recall that ridge regression fits a linear model that minimizes the usual residual sum of squares along with an $L_2$ penalty on the coefficient vector (without the intercept) \citep{friedman2001elements}. The regression is run using the glmnet package in R, and using ten-fold cross validation to select the optimal shrinkage parameter $\lambda$. One would hope to find four dominant contributions coming from individuals $A$ and $B$, the pair $\{A,B\}$, and the triple $\{A,B,C\}$. The top five group effects according to ridge regression are, with coefficients in parentheses, $\{A,B,C,E,G\}$ (1.36), $\{A,B,C,D,F\}$ (1.02), $\{A,B,C,E,I\}$ (0.77), $\{A,B,D,I,J\}$ (0.75), and $\{A,B,D,E,H\}$ (0.75). The triple $\{A,B,C\}$ ranks 24-th (0.38), the pair $\{A,B\}$ ranks 34-th (0.31), and the individuals $A$ and $B$ rank 61-st (0.20) and 63-rd (0.19), respectively. Finally, Table \ref{specsim} shows the top five groups (by their Mallows coefficients) in the first, second, and third order spaces in spectral analysis. \begin{table}[ht] \centering \scriptsize \label{specsim} \begin{tabular}{cc|cc|cc} \hline Individual & Coefficient &Pair & Coefficient&Triple & Coefficient\\ \hline B & 302.0 & $\{A,B\}$ & 160.1 &$\{A,B,C\}$ & 78.8 \\ A & 266.3 &$\{B,C\}$ & 79.6 &$\{B,F,G\}$ & 50.3\\ C & 98.4 &$\{A,C\}$ & 79.2 &$\{B,D,E\}$ & 29.9 \\ F & -65.4 &$\{F,G\}$ & 70.8 &$\{A,F,H\}$ & 29.7 \\ I & -69.4 &$\{D,E\}$ & 47.1 &$\{B,F,H\}$ & 29.7 \\ \end{tabular} \caption{Spectral analysis for individuals, pairs, and triples on the simulated data.} \end{table} Spectral analysis is successful at both identifying all of the dominant contributions and separating them from the noise. Also notably, the fourth and fifth order effect spaces are identified by spectral analysis as pure noise. \fi} \section{Results and Discussion} A challenge inherent in working with real lineup-level data is the wide disparity in the number of possessions that lineups play. Most teams have a dominant starting lineup that plays far more possessions than any other. For example, the starting lineup of the '16 Golden State Warriors played approximately 1140 possessions while the next most used lineup played 535 possessions. Only 12 lineups played more than 100 possessions for the Warriors on the season. For the Boston Celtics, the starters played 1413 possessions compared to 257 for the next most utilized, with 13 lineups playing more than 100 possessions. By contrast, the Celtics had 255 lineups that played fewer than 10 possessions (but at least one), and the Warriors had 236. Numbers are similar across the league. This is another reason for using raw plus-minus in defining the team success function $f$ on lineups. A metric like per-possession lineup plus-minus breaks down in the face of large numbers of very low possession lineups and a few high possession lineups. Still, we want to identify potentially undervalued and underutilized groups of players-- especially for smaller groups like pairs and triples where there are many more groups that do play significant numbers of possessions. Another consideration is that over time, lineups with large numbers of possessions will settle closer to their true mean value while lineups with few possessions will be inherently noisier. As a result, we perform the spectral analysis on $f$ as described in section \ref{methodology} above, and then normalize the spectral contribution by the log of possessions played by each group. We call the result {\it spectral contribution per log possession} (SCLP). This balances the considerations above and allows strong lower possession groups to emerge while not over-penalizing groups that do play many possessions. Despite these challenges, however, we'll see below that there are significant insights to be gained in working with lineup level data. Moreover, since spectral analysis is a non-model-based description of complete lineup-level game data, it has the advantage of maintaining close proximity to the actual gameplay observed by coaches, players, and fans. There are always five players on the floor, so all data begins at the level of full lineups. Consider the first order effects for the 15-16 Golden State Warriors in Table \ref{GSWFirstTable}. Draymond Green, Stephen Curry, and Klay Thompson are the top three players. The ordering, specifically Green ranked above Curry, is perhaps interesting, though it's worth noting that this ordering agrees with ESPN's real plus-minus (RPM). (Green led the entire league in RPM in 15-16.) Other metrics like box plus-minus (BPM) and wins-above-replacement (WAR) rank Curry higher. Because SCLP is based on ability of lineups to outscore opponents when the player is on the floor (like RPM), however, as opposed to metrics like BPM and WAR which are more focused on points produced, the ordering is defensible. \begin{table}[ht] \begin{center} \begin{tabular}{lccc} Player & SCLP & PM & Poss\\ \hline Draymond Green& 17.2& 1038.4 & 5800\\ Stephen Curry & 15.9 & 978.7& 5610\\ Klay Thompson & 12.0 & 808.6& 5453\\ Andre Iguodala & 3.5 & 436.1 & 3516\\ Andrew Bogut & 2.8 & 403.6 & 2951\\ \hline\hline Marreese Speights & -7.4 & 20.0& 1630\\ Ian Clark & -9.8 & -51.9& 1108\\ Anderson Varejao & -11.1 & -34.4& 368\\ Jason Thompson & -11.2& -33.8 & 339\\ James Michael McAdoo & -12.1& -85.0& 526\\ \end{tabular} \caption{Top and bottom five first-order effects for GSW. SCLP is the spectral contribution per log possession, PM is the player's raw plus-minus, and Poss is the number of possessions for that player.}\label{GSWFirstTable} \end{center} \end{table} In fact, a closer look at the interpretable vector $\phi_i$ associated with individual player $i$ (as described in sections \ref{methodology} and \ref{toyspec}) reveals that $\phi_i=\delta_i+c\cdot \delta$, so is just a non-mean-centered version of the first order invariant functions that span $V_1$. Consequently, the spectral contribution (non-possession normalized) is a linear function of individual plus-minus, so reflects precisely that ordering. This is not the case for higher-order groups, however, which is where we focus the bulk of our analysis. The second-order effects are given in in Table \ref{GSWSecondTable}, and quantify the contributions of player pairs, having removed the mean, individual, and higher-order group effects. The top and bottom five pairs (in terms of SCLP) are presented here, with more complete data in Table \ref{AGSWSecondTable} in the appendix. \begin{table}[ht] {\footnotesize \begin{center} \begin{tabular}{llccc} \hline P1 &P2 & SCLP & PM & Poss\\ \hline Draymond Green & Stephen Curry & 13.3& 979.9 & 5102\\ Stephen Curry & Klay Thompson & 11.2& 827.8 & 4311\\ Draymond Green & Klay Thompson & 11.1& 847.8 & 4678\\ Leandro Barbosa & Marreese Speights& 5.3& 76.2 & 983\\ Draymond Green & Andre Iguodala& 4.3& 490.0 & 2165\\ \hline\hline Draymond Green & Ian Clark & -7.2 & 33.3 & 424 \\ Klay Thompson & Leandro Barbosa & -7.2 & 4.8 &349 \\ Stephen Curry & Ian Clark & -8.1 & 14.0 & 220 \\ Draymond Green & Anderson Varejao & -9.5 & 7.2 & 217\\ Stephen Curry & Anderson Varejao & -10.1 & -26.9 & 237 \\ \hline \end{tabular} \caption{Top and bottom five SCLP pairs with at least 200 possessions, along with raw plus-minus and possessions.}\label{GSWSecondTable} \end{center} } \end{table} Even after accounting for and removing their strong individual contributions, however, it is notable that Green--Curry, Curry--Thompson, and Green--Thompson are the dominant pair contributors by a considerable margin, with SCLP values that are all more than twice as large as for the next largest pair (Barbosa--Speights). These large positive SCLP values represent true synergies: These pairs contribute to team success {\it as pure pairs}. The fact that the individual contributions of the constituent players are also positive results in a stacking of value within a lineup that provides a quantifiable way of assessing whether the whole does indeed add to more than the sum of its parts. Reserves Leandro Barbosa, Mareese Speights, and Ian Clark, on the other hand, were poor individual contributors, but manage to combine effectively in several pairs. In particular, the Barbosa--Speights pairing is notable as the fourth best pure pair on the team (in 983 possessions). After accounting for individual contributions, lineups that include the Barbosa--Speights pairing benefited from a real synergy that positively contributed to team success. This suggests favoring, when feasible, lineup combinations with those two players together to leverage this synergy and mitigate their individual weaknesses. Tables \ref{SmallBogutPairs} and \ref{SmallLivingstonPairs} show pair values for players Andrew Bogut and Shaun Livingston (again in pairs with at least 150 possessions, and with more detailed tables in the appendix). Both players are interesting with respect to second order effects. While Bogut was a positive individual contributor, and was a member of the Warriors' dominant starting lineup that season, he largely fails to find strong pairings. His best pairings are with Klay Thompson and Harrison Barnes, while he pairs particularly poorly with Andre Iguodala (in a considerable 785 possessions). This raises interesting questions as to why Bogut's style of play is better suited to players like Thompson or Barnes rather than players like Curry or Iguodala. Also noteworthy is the fact that the Bogut--Iguodala pairing has a positive plus-minus value of 107. The spectral interpretation is that this pairing's success should be attributed to the individual contributions of the players, and once those contributions are removed, the group lacks value as a pure pair. \begin{table}[h!] {\footnotesize \begin{center} \begin{tabular}{llccc} \hline P1 &P2 & SCLP & PM & Poss\\ \hline Andrew Bogut & Klay Thompson& 3.7 & 394.3& 2637\\ Andrew Bogut & Harrison Barnes & 2.1 & 206.2& 1527 \\ Andrew Bogut & Stephen Curry & 1.6 & 378.5& 2530 \\ \hline Andrew Bogut & Andre Iguodala & -2.1 & 107.0 & 785 \\ \hline \end{tabular} \caption{Select pairs involving Andrew Bogut (with at least 150 possessions).}\label{SmallBogutPairs} \end{center} } \end{table} \begin{table}[h!] {\footnotesize \begin{center} \begin{tabular}{llccc} \hline P1 &P2 & SCLP & PM &Poss\\ \hline Shaun Livingston & Anderson Varejao & 2.0 & -1.5 & 174 \\ Shaun Livingston & Marreese Speights & 1.6 & 17.8 & 1014 \\ Shaun Livingston & Draymond Green & 1.2 & 323.6 & 1486\\ \hline Shaun Livingston & Andre Iguodala & -1.3 & 65.2 & 1605 \\ Shaun Livingston & Klay Thompson & -3.6 & 111.8 & 1412 \\ \hline \end{tabular} \caption{Select pairs involving Shaun Livingston (with at least 150 possessions).}\label{SmallLivingstonPairs} \end{center} } \end{table} Shaun Livingston, on the other hand, played an important role as a reserve point guard for the Warriors. Interestingly, Livingston's worst pairing by far was with Klay Thompson. Again, considering the particular styles of these players compels interesting questions from the perspective of analyzing team and lineup compositions and playing style. It's also noteworthy that this particular pairing saw 1412 possessions, and it seems entirely plausible that its underlying weakness was overlooked due to the healthy 111.8 plus-minus with that pair on the floor. The success of those lineups should be attributed to other, better synergies. For example, one rotation added Livingston as a sub for Barnes (112 possessions). Another put Livingston and Speights with Thompson, Barnes, and Iguodala (70 possessions). Finally, it's also interesting to note that Livingston appears to pair better with other reserves than with starters (save Draymond Green, further highlighting Green's overall value), an observation that raises important questions about how players understand and occupy particular roles on the team. Table \ref{GSWThirdTable} shows the best and worst triples with at least 200 possessions. \begin{table} {\footnotesize \begin{center} \begin{tabular}{lllrrr} \hline P1 & P2 &P3 & SCLP & PM & Poss \\ \hline Draymond Green & Stephen Curry & Klay Thompson & 12.6 & 812.7 & 4085 \\ Draymond Green & Klay Thompson & Harrison Barnes & 5.9 & 427.3 & 2473 \\ Draymond Green & Stephen Curry & Andre Iguodala & 5.8 & 464.8 & 1830 \\ Stephen Curry & Klay Thompson & Harrison Barnes & 5.7 & 416.5 & 2431 \\ Stephen Curry & Klay Thompson & Andrew Bogut & 4.9& 382.2 & 2296 \\ \hline\hline Stephen Curry & Andre Iguodala & Brandon Rush & -3.8 & -13.5 & 207 \\ Draymond Green & Stephen Curry & Marreese Speights & -4.1 & 97.9 & 299 \\ Draymond Green & Klay Thompson & Marreese Speights & -4.5 & 52.2 & 250 \\ Draymond Green & Klay Thompson & Ian Clark & -5.8 & 9.8 & 316 \\ Draymond Green & Stephen Curry & Ian Clark & -7.4 & 14.5 & 205 \\ \hline \end{tabular} \caption{Best and worst third-order effects for GSW with at least 200 possessions.}\label{GSWThirdTable} \end{center} } \end{table} The grouping of Green--Curry--Thompson is far and away the most dominant triple, and safely (and unsurprisingly) earns designation as the Warriors' big three. Other notable triples include starters like Green and Curry or Green and Thompson together with Andre Iguodala who came off the bench, and more lightly used triples like Curry--Barbosa--Speights who had an SCLP of 4.6 in 245 possessions. Analyzing subpairs of these groups shows a better stacking of synergies in the triples that include Iguodala--he pairs well with Green, Curry, and Thompson in the second order space as well, while either of Barbosa or Speights paired poorly with Curry. Still, Barbosa with Speights was quite strong as a pair, and we see that the addition of Curry does provide added value as a pure triple. Interesting ineffective triples include Iguodala and Bogut with either of Curry or Green, especially in light of the fact that Bogut--Iguodala was also a weak pairing (see detailed tables in the appendix). Figure \ref{GSW3scatter} shows that the most effective player-triples as identified by spectral analysis are positively correlated with higher values of plus-minus. \begin{figure}[ht] \centering \includegraphics[width = \textwidth]{g3scatter.pdf} \caption{Third-order effects for triples with more than 100 possessions the 2015-2016 Golden State Warriors. The $x$-axis gives the group's plus-minus per log possession (PMperLP) while the $y$-axis shows the spectral contribution per log possession (SCLP). Observations are shaded by number of possessions.} \label{GSW3scatter} \end{figure} As raw group plus-minus decreases, however, we see considerable variation in the spectral contributions of the groups (and in number of possessions played). This suggests the following narrative: while it may be relatively easy to identify the team's top groups, it is considerably more difficult to identify positive and negative synergies among the remaining groups, especially when controlling for lower-order contributions. Spectral analysis suggests several opportunities for constructing more optimal lineups with potential for untapped competitive advantage, especially when more obvious dominant groupings are unavailable. Table \ref{SmallBOSThirdTable} shows top and bottom three third-order effects for the 15-16 Boston Celtics. (The appendix includes more complete tables for Boston including effects of all orders.) Figure \ref{GSWBOS3Bar} gives contrasting bar plots of the third-order effects for both Boston and Golden State. \begin{table}[hbt] {\footnotesize \begin{center} \begin{tabular}{lllrrr} \hline P1 & P2 &P3 & SCLP & PM & Poss \\ \hline Evan Turner & Kelly Olynyk & Jonas Jerebko & 2.9 & 110.1 & 879\\ Isaiah Thomas& Avery Bradley & Jared Sullinger & 2.7& 177.7 & 2642\\ Avery Bradley & Jae Crowder & Jared Sullinger & 2.3 & 139.3& 2216\\ \hline\hline Isaiah Thomas & Evan Turner & Kelly Olynyk & -1.8& -30.9 & 870\\ Avery Bradley& Jared Sullinger & Jonas Jerebko & -2.3& -11.7 & 194\\ Isaiah Thomas & Avery Bradley & Jonas Jerebko & -2.4 & -1.6 & 290\\ \hline \end{tabular} \caption{Top and bottom three third-order effects for BOS with at least 150 possessions.}\label{SmallBOSThirdTable} \end{center} } \end{table} \begin{figure}[h!] \centering \includegraphics[width = \textwidth]{GSW_BOS_3_bars.pdf} \caption{Bar graph of third order spectral contributions per log possession (SCLP) for BOS and GSW for groups with more than 150 possessions.} \label{GSWBOS3Bar} \end{figure} The Celtics have fewer highly dominant groups. In particular, we note that the spectral signature of the Celtics is distinctly different from that of the Warriors in that Boston lacks anything resembling the big-three of Golden State. While SCLP values are not directly comparable across teams (they depend, for instance, on the norm of the overall team success function when projected into each effect space), the relative values within an effect-space are comparable. Similarly, the SCLP values also depend on the norm of the interpretable vector used in Mallow's method. As a result, the values are not directly comparable across effect spaces-- a problem we return to below. In fourth and fifth-order spaces the numbers of high-possession groups begins to decline, as alluded to above. (See appendix for complete tables.) Still, it is interesting to note that spectral analysis flags the Warriors small lineup of Green--Curry--Thompson--Barnes--Iguodala as the team's best, even over the starting lineup with Bogut replacing Barnes. It also prefers two lesser-used lineups to the Warriors' second most-used lineup of Green--Curry--Thompson--Bogut--Rush. Also of note is the fact that Golden State's best group of three and best group of four are both subsets of the starting lineup-- another instance of stacking of positive effects--while neither of Boston's best groups of three or four are part of their starting lineup. \section{Connection With Linear Models} \label{sec:LM Section} Before moving on, we consider the connection between spectral analysis and a related approach via linear regression which will likely be more familiar to the sports analytics community. Recalling our assumption of a 15 man roster, consider the problem of modeling a lineup's plus-minus, given by $f(L)$ for lineup $L$, using indicator variables that correspond to all possible groups of players. Label the predictor variables $X_1$, $X_2$,\ldots $X_p$, where each variable corresponds to a group of players (with some fixed group order). Thus, the variable $X_i$ is 1 when the players from group $i$ are on the floor, and zero otherwise. If the first fifteen variables are the indicator functions of the individual players $X_1, X_2,\ldots X_{15}$, then the group variables, the $X_i$ for $i>15$, are interaction terms. For instance, the variable corresponding to the group $\{1,2,3\}$ is $X_1X_2X_3$. This approach is therefore similar to an adjusted plus minus with interactions approach. Including all possible group effects, however, means that the number of predictors is quite large and depending on the number of observations, we may be in a situation where $p>>N$. Moreover, the nature of player usage in lineups means that there is a significant multicollinearity issue. Consequently, an attempt to quantify group effects in a regression model of this sort will rely on a shrinkage technique like ridge regression. Let $N$ be the number of lineups, and $y=f(L)$, an $N\times 1$ column vector. Let $\bf X$ be the $N\times (p+1)$ matrix whose first column is the vector of all ones and where the $i$-th row consists of the binary value of each predictor variable for the $i$-th player group. The vector of ridge coefficients $\hat{\beta}^{\text{ridge}}$ minimizes the penalized residual sum of squares: $\argmin_\beta \left\{ \| y-{\bf X}\beta \|^2 +\lambda\sum_{i=1}^p\beta_i^2 \right\}$. The non-negative parameter $\lambda$ serves as a penalty on the $L_2$-norm of the solution vector. (The intercept is not included in the ridge penalty.) The ridge approach reduces the variability exhibited by the least squares coefficients in the presence of multicollinearity by shrinking the coefficient estimates in the model towards zero (and toward each other). One can show that ridge regression uses the singular values of the covariance matrix associated with the centered version of ${\bf X}$ to disproportionately shrink coefficients associated with inputs where the data exhibits lower degrees of variance. See \cite{friedman2001elements} for details. The fitted coefficients $\hat{\beta_0},\hat{\beta}_1,\ldots\hat{\beta}_p$ in the ridge regression model attempt to measure the contribution of group $i$ while controlling for the contributions of all other groups and individuals. We note that this modeling approach resembles work in \cite{Sill:2010}, \cite{grassetti2019estimation}, and \cite{grassetti2019play}, though there are key differences which we explore below. In particular, note that we model group contributions aggregated over all opponents, and without controlling for the quality of the opponents faced. This simplified approach allows for a more direct comparison with the results of spectral analysis above. Tables \ref{RidgeIndsPairs} and \ref{RidgeTriples} give the ridge regression coefficients associated with the top 5 individuals, pairs, and triples for the Warriors. \begin{table}[h!] {\footnotesize \begin{center} \begin{tabular}{lcc|cllc} \hline Individual & Estimate &\ & \ & P1 & P2 & Pair Estimate \\ \hline Draymond Green &0.28&\ &\ & Draymond Green&Stephen Curry&0.65 \\ Stephen Curry &0.25&\ &\ & Stephen Curry&Andrew Bogut&0.53 \\ Klay Thompson& 0.15&\ &\ & Stephen Curry&Klay Thompson&0.47 \\ Andrew Bogut & 0.14&\ &\ & Draymond Green&Klay Thompson&0.47 \\ Festus Ezeli & 0.02&\ &\ & Draymond Green&Andrew Bogut&0.46 \\ \hline \end{tabular} \caption{Best individuals and pairs using the linear model.}\label{RidgeIndsPairs} \end{center} } \end{table} \begin{table}[hbt] {\footnotesize \begin{center} \begin{tabular}{lllr} \hline P1 & P2 &P3 & Estimate \\ \hline Draymond Green&Stephen Curry&Andrew Bogut& 1.61\\ Stephen Curry&Klay Thompson&Andrew Bogut& 1.49\\ Draymond Green&Stephen Curry&Klay Thompson& 1.39\\ Draymond Green&Klay Thompson&Andrew Bogut& 1.24\\ Draymond Green&Klay Thompson&Harrison Barnes& 1.03\\ \hline \end{tabular} \caption{Top triples according to the linear model.}\label{RidgeTriples} \end{center} } \end{table} Comparing with Tables \ref{GSWFirstTable}, \ref{GSWSecondTable}, and \ref{GSWThirdTable} shows both some overlap in the top rated groups, but also significant differences with respect to both ordering and magnitude of contribution. In particular, the linear model appears to value the contributions of Andrew Bogut considerably more than spectral analysis. It is also notable that spectral analysis identifies a clearly dominant big three of Green--Curry--Thompson, in contrast to the considerably different result arising from the modeling approach which ranks that group third. We can interpret the linear model determined by $\hat{\beta}^{\text{ridge}}$ as giving a similar decomposition to the spectral decomposition in $(\ref{decomposition})$. For each lineup $L$ we have predicted success given by \begin{equation} \label{lm} \hat{\bf y} = {\bf X}_L\hat{\beta}^{\text{ridge}} \end{equation} where ${\bf X}_L$ is now the ${15 \choose 5} \times (p+1)$ matrix whose first column is all 1s, and whose $i,j+1$ entry is 1 if the $j$-th player group is part if the $i$-th lineup. (We have fixed a particular ordering of lineups.) The columns of ${\bf X}_L$ (the $X_i$) that correspond to individual players can be understood as spanning a subspace $W_1$ analogous to $V_1$ in (\ref{decomposition}). Similarly, $W_2$ is spanned by the columns of ${\bf X}_L$ corresponding to pair interactions, and so on for all groups through full five player lineups. The particular linear combinations in each $W_i$ determined by the respective coordinates of $\hat{\beta}^{\text{ridge}}$ are analogous to the ${\bf pr}_{V_i}f$. In fact, the space of all lineup functions can be written \begin{equation} \label{lmdecomp} V=W_0+W_1+W_2+W_3+W_4+W_5, \end{equation} where $W_i$ is the space of interaction effects for groups of size $i$. Still, there are important differences between (\ref{decomposition}) and (\ref{lmdecomp}). While $V_0$ and $W_0$ are both one-dimensional, for $i\ge 1$ the dimensions of the $W_i$ are strictly larger than those of their $V_i$ counterparts. For instance, $W_5$ includes a vector for each possible set of five players from the original fifteen. Similarly $W_4$ and groups of four, and so on. Thus, the dimension of $W_5$ is 3003 (the number of lineups), which is the same as the dimension of $V$ itself. By contrast the dimension of $V_5$ in (\ref{decomposition}) is only 1638. Similarly the dimension of $W_4$ is 1365 while that of $V_4$ is $350$. Clearly, the decomposition in (\ref{lm}) is highly non-orthogonal (explaining the $+$ rather than $\oplus$ notation). It is easy to find vectors in $W_i$ that overlap with $W_j$ in the sense that their inner product is non-zero. In the context of basketball, the contribution of a group of, for example, $5$ players is not necessarily separate from a constituent group of four (or any other number of) players despite the use of shrinkage methods. The decomposition in ($\ref{decomposition}$) is special in that it gives minimal subspaces that are invariant under relabeling and mutually orthogonal as described in section \ref{methodology}. As we've seen, spectral analysis achieves this at the expense of easy interpretation of group contributions. This is a drawback to spectral analysis that (\ref{lm}) does not have, and is an appealing feature of regression models. The interaction term associated with a group of $i$ players in a regression model is easy to understand. Still, as we see above one must balance either ease of interpretation, or orthogonality of effects. \section{Stability} \label{sec: Stability} In this section we take a first step to addressing questions of the stability of spectral analysis. We seek evidence that spectral analysis is indicative of a true signal, and that should the data have turned out slightly differently, the analysis would not change dramatically. Since spectral analysis works on the lineup function $f(L)$, which is aggregated over all of a team's plays involving $L$, we need to introduce variability into the values of $f(L)$. A fully aggregated NBA season is, in a sense, a complete record of all events and lineup outcomes in that season. Still, it seems reasonable to leverage the variability inherent in the many observed results of a lineup's plays, as well as the substitution patterns of coaches, and suggest a bootstrapping approach. To that end, we start with the actual 15-16 season for the Boston Celtics. We can then build a bootstrapped season by sampling plays, with replacement, from the set of all plays in the actual season. (We sample the same number of plays as in the actual season.) A play is defined as a connected sequence of events surrounding a possession in the team's play-by-play data. For example, a play might involve a sequence like a missed shot, offensive rebound, and a made jump shot; or, a defensive rebound followed by a bad pass turnover. When sampling from a team's plays, a particular lineup will be selected with a probability proportional to the number of plays in which that lineup participated. We generate 500 bootstrapped seasons, process each using the methodology of sections \ref{Data} and \ref{methodology} to produce success functions $f_{\text{boot}}$, and then apply spectral analysis to each. We thus have a bootstrapped distribution of lineup plus-minus and possession values over each lineup $L$, which in turn gives plus-minus and possession distributions of all player-groups. While the the number of possessions played is highly stable for both full-lineups and smaller player-groups, there is considerable variability in plus-minus values over the bootstrapped seasons. Lineups with a significant number of possessions exhibit both positive and negative performance, and the balance between the positive and negative plays is delicate. The variability in group PM presents a challenge in gauging the stability of the spectral analysis associated with a player group. Take, for example, the Thomas--Bradley--Crowder triple for the Celtics. The actual season's plus-minus for this group was 154.8 in 2572 possessions. Over the bootstrapped seasons the group has means of 145.9 and 2574.1 for plus-minus and possessions, respectively. On the other hand, the standard deviation of the plus-minus values is 82.8 versus only 47.7 for possessions. Thus, some of the variability in the spectral contribution of the group over the bootstrapped seasons should be expected since, in fact, the group was less effective in some of those seasons. Figure \ref{BOS3BootGroup0} shows SCLP plotted against PMperLP for the Thomas--Bradley--Crowder triple in 500 bootstrapped seasons. Of course, spectral analysis purports to do more than raw plus-minus by removing otherwise confounding colinearities and overlapping effects. Not surprisingly, therefore, we still see variability in SCLP within a band of plus-minus values, but the overall positive correlation, whereby SCLP increases in seasons where the group tended to outscore its opponents, is reasonable. \begin{figure}[htbp] \centering \includegraphics[width = \textwidth]{BOS3BootGroup0.png} \caption{Spectral contribution per log possession (SCLP) versus plus-minus per log possession (PMperLP) for Thomas--Bradley--Crowder triple in 500 bootstrapped seasons. Each bootstrapped season consists of sampling plays (connected sequences of game events) with replacement from the set of all season plays. Resampled season data is then processed as in section \ref{Data} and group contributions are computed via spectral analysis as in section \ref{methodology}.} \label{BOS3BootGroup0} \end{figure} Also intuitively, the strength of the correlation between group plus-minus and spectral contribution depends on the number of possessions played. Fewer possessions means that group's contribution is more dependent on other groups and hence exhibits more variability. The mean possessions for the Thomas--Bradley--Crowder triple in Fig.\ref{BOS3BootGroup0} is 2574, and has a Pearson correlation of $r=0.953$. The group Thomas--Turner--Zeller, on the other hand, has $r=0.688$ with a mean of 305 possessions. A group like Jared Sullinger--Marcus Smart is particularly interesting. This pair has a season plus-minus of 25.0 in 1116 possessions. In 500 bootstrap seasons, they have a mean plus-minus of 23.6 and mean possessions of 1118.3. The value of the group's plus-minus is negative in only $32.4\%$ of those seasons. Should this group, therefore, be considered effective overall? Spectral analysis answers with a fairly emphatic {\it no}. After removing other group contributions their SCLP as a pure pair is negative in $90.6\%$ of bootstrapped seasons, while still exhibiting strong correlation with overall plus-minus ($r=0.73$). Similarly, the Bradley--Smart pair has a season plus-minus of 45.3 in 1679 possessions In 500 bootstrap seasons, they have a mean plus-minus of 40.4 and mean possessions of 1679. Their plus-minus is negative in $27\%$ of those seasons while their spectral contribution is negative in $81\%$ of bootstrapped seasons. \section{Importance of Effect Spaces} \label{sec: importance} Another natural question is how to value the relative importance of the group-effect spaces. One way to gauge importance uses the squared $L_2$ norm of the success function in each space. Since the spaces are mutually orthogonal, we have $\|f\|^2=\|f_1\|^2+\|f_2\|^2+\|f_3\|^2+\|f_4\|^2+\|f_5\|^2$. (Recall that $f_i$ is the projection of $f$ onto the $i$-th order effect space $V_i$.) One can then measure the total mass of $f$ that is concentrated in each effect space. For example, if we found that the mass of the success function was concentrated in the mean space, and thus, a constant function gave a good approximation to $f$, we could conclude that the particular lineup used by this team was largely irrelevant-- the success of the team never strayed far from the mean and was not strongly affected by any groups. This would be an easy team to coach. Of course, this is not the case in basketball, as evidenced by the $L_2$ norm squared distribution of the sample of teams in Table \ref{L2 Table}. \begin{table}[htbp] \centering \begin{tabular}{l|rrrrrr} \hline Team & $V_0$ & $V_1$ & $V_2$ & $V_3$ & $V_4$ & $V_5$ \\ \hline BOS & 0.001 & 0.012 & 0.048 & 0.138 & 0.297 & 0.504 \\ CLE & 0.003 & 0.021 & 0.058 & 0.150 & 0.301 & 0.467 \\ GSW & 0.003 & 0.031 & 0.092 & 0.203 & 0.312 & 0.360 \\ HOU & 0.000 & 0.007 & 0.037 & 0.123 & 0.285 & 0.548 \\ OKC & 0.001 & 0.011 & 0.038 & 0.137 & 0.304 & 0.510 \\ POR & 0.000 & 0.004 & 0.027 & 0.112 & 0.289 & 0.568 \\ SAS & 0.007 & 0.027 & 0.072 & 0.173 & 0.294 & 0.427 \\ \hline Null & 0.000 & 0.005 & 0.03 & 0.117 & 0.303 & 0.545 \\ \end{tabular}% \caption{Distribution of the squared $L_2$-norm of the team success function over the effect spaces.}\label{L2 Table} \end{table}% By this measure, the higher-order spaces are dominant as they hold most of the mass of the success function. An issue with this metric, however, is the disparity in the dimensions of the spaces. Because $V_5$ is 1638-dimensional, we might expect the mass of $f$ to be disproportionately concentrated in that space. In fact, a random unit vector projected into each of the effect spaces would be, on average, distributed according to the null distribution in Table \ref{L2 Table}, with mass proportional to the dimension of each of the spaces in question. Moreover, we can take the true success function of a team and break the dependence on the actual player groups as follows. Recall that the raw data $f$ records the plus-minus for each of the possible 3003 lineups. We then take $f$ and randomly permute the values so that there is no connection between the lineup and the value associated with that lineup. Still, however, the overall plus-minus and mean of $f$ are preserved. We can then run spectral analysis on the permuted $f$ and record the distribution of the squared $L_2$ norm in each space. Repeating this experiment 500 times for both GSW and BOS give means in Table \ref{gswbosnull} that exactly conform to the null distribution in Table \ref{L2 Table}. \begin{table} \centering \begin{tabular}{l|rr} \hline Space & BOS & GSW\\ \hline First & 0.005 & 0.005\\ Second & 0.030 & 0.030\\ Third & 0.117 & 0.116\\ Fourth & 0.302 & 0.302\\ Fifth& 0.543 & 0.544\\ \hline \end{tabular} \caption{Average fraction of squared $L_2$ mass by order effect space using randomly permuted success function.}\label{gswbosnull} \end{table}% An alternative measure of the importance of each effect space is given by measuring the extent to which projections onto $V_i$ deviate from the null distribution. By this measure of importance, there is some preliminary evidence that strong teams shift the mass of $f$ from $V_5$ into lower-order spaces, particularly $V_1$, $V_2$, and $V_3$. This is interesting as it agrees with the idea that building an elite team requires a group of three stars. Using all 30 NBA teams, we compute correlations of $r=0.51$, $r=0.58$ and $r=0.55$, respectively, between win-percentage and the projected mass $f$ in the first, second, and third-order spaces. Win-percentage and fifth-order projection have correlation coefficient $r=-0.54$. As pointed out in \cite{Diaconis:1989}, however, care must be taken when looking at deviation from the null distribution if the projections are highly structured and lie close to a few of the interpretable vectors. This is a direction for further inquiry. \section{Conclusion} Spectral analysis proposes a new approach to understanding and quantifying group effects in basketball. By thinking of the success of a team as function on lineups, we can exploit the structure of functions on permutations to decompose the team success function. The resulting Fourier expansion is naturally interpreted as quantifying the group effects to overall team success. The resulting analysis brings insight into important and difficult questions like which groups of players work effectively together, and which do not. Furthermore, the spectral analysis approach is unique in addressing questions of lineup synergies by presenting an EDA summary of the actual team data without making the kind of modeling or skill-based assumptions of other methods. There are several directions for future work. First, the analysis presented used raw lineup level plus-minus to measure success. This approach has the advantage of keeping the analysis tethered to data that is intuitive, and helps avoid pitfalls arising from low-possession lineups. Still, adjusting the lineup level plus-minus to account for quality of opponent, for example, seems like a valuable next step. Another straight forward adjustment to raw plus-minus data would involve devaluing so-called garbage time possessions when the outcome of the game is not in question. As presented here, spectral analysis provides an in-depth exploratory analysis of a team's lineups. Still, the results of spectral analysis could also add valuable inputs to more traditional predictive models or machine learning approaches to projecting group effects. Similarly, it would be interesting to use spectral analysis as a practical tool for lineup suggestions. While the orthogonality of the spectral decomposition facilitates valuation of pure player-groups, the question of lineup construction realistically begins at the level of individuals and works up, hopefully stacking the contributions of individuals with strong pairs, triples, and so-on. A strong group of three, for instance, without any strong individual players may be interesting from an internal development perspective, or at the edges of personnel utility, but may also be of limited practical value from the perspective of constructing a strong lineup. Development of a practical tool would likely require further analysis of the ideas in sections \ref{sec: Stability} and \ref{sec: importance} based on ideas in \cite{Diaconis:1998}. For example, given data (a function on lineups), we might fix the projection of that data onto certain spaces (like the first or second order), and then generate new sample data conditional on that fixed projection. The resulting projections in the higher-order spaces would give some evidence for how the fixed lower-order projections affect the mass of $f$ in the higher-order effects spaces. This would help give a more detailed sense of variability of projections, and a more definitive answer to the question of which spaces are most important, and how the spectral signature of a team correlates with team success. With that information in place, however, one can build tools to suggest lineup replacements that maximize the stacking of a team's most important groups. \bibliographystyle{plainnat} \section{Introduction} A fundamental challenge in basketball performance evaluation is the team nature of the game. Contributions to team success occur in the context of a five-player lineup, and isolating the specific contribution of an individual is a difficult problem with a considerable history. Among the many approaches to the player evaluation problem are well-known metrics like player efficiency rating (PER), wins produced (WP), adjusted plus-minus (APM), box plus-minus (BPM), win shares (WS), value over replacement player (VORP), and offensive and defensive ratings (OR and DR) to name only a few \citep{BasketballReferenceGlossary}. While these individual player metrics help create a more complete understanding of player value, some contributions remain elusive. Setting good screens, ability to draw defenders, individual defense, and off-ball movement are all examples of important contributions that are difficult to measure and quantify. In part, these contributions are elusive because they often facilitate the success of a teammate who ultimately reaps the statistical benefit. Even beyond contributions that are difficult to quantify, the broader question of chemistry between players is a critical aspect of team success or failure. It is widely accepted that some groups of players work better together than others, creating synergistic lineups that transcend the sum of their individual parts. Indeed, finding (or fostering) these synergistic groups of players is fundamental to the role of a general manager or coach. There are, however, far fewer analytic approaches to identifying and quantifying these synergies between players. Such positive or negative effects among teammates represent an important, but much less well understood, aspect of team basketball. In this paper we propose spectral analysis \citep{Diaconis:1988} as a novel approach to identifying and quantifying group effects in NBA play-by-play data. Spectral analysis is based on algebraic signal processing, a methodology that has garnered increasing attention from the machine learning community \citep{kakarala:2011, Kondor:2007, Kondor:2012}, and is particularly well suited to take advantage of the underlying structure of basketball data. The methodology can be understood as a generalization of traditional Fourier analysis, an approach whose centrality in a host of scientific and applied data analysis problems is well-known, and speaks to the promise of its application in new contexts from social choice to genetic epistasis and more \citep{Paudel:2013,Jurman:2008,Lawson:2006,Uminsky:2018,Uminsky:2019}. The premise of spectral analysis in a basketball context is simple: team success (appropriately measured) can be understood as a function on lineups. Such functions have rich structure which can be analyzed and exploited for data analytic insights. Previous work in basketball analytics has addressed similar questions from a different perspective. Both \cite{kuehn2016accounting} and \cite{maymin2013nba} studied lineup synergies on the level of player skills. In \cite{maymin2013nba} the authors used a probabilistic framework for game events, along with simulated games to evaluate full-lineup synergies and find trades that could benefit both teams by creating a better fit on both sides. In \cite{kuehn2016accounting}, on the other hand, the author used a probabilistic model to determine complementary skill categories that suggest the effect of a player in the context of a specific lineup. Work in \cite{grassetti2019estimation} and \cite{grassetti2019play} modeled lineup and player effects in the Italian Basketball League (Serie A1) based on an adjusted plus-minus framework. Our approach is different in several respects. First, we study synergies on the level of specific player groups independent of particular skill sets. We also ignore individual production statistics and infer synergies directly from observed team success, as defined below. As a consequence of this approach, our analysis is roster constrained-- we don't suggest trades based on prospective synergies across teams. We can, however, suggest groupings of players that allow for more optimal lineups within the context of available players, a central problem in the course of an NBA game or season. Further, our approach uses orthogonality to distinguish between the contributions of a group and nested subgroups. So, for example, a group of three players that appears to exhibit positive synergies may, in fact, be benefiting from strong individual and pair contributions while the triple of players adds no particular value as a pure triple. We tease apart these higher-order correlations. Furthermore, spectral analysis is not a model-based approach. As such, our methodology is notably free of modeling assumptions--rather than fitting the data, spectral analysis reports the observed data, albeit projected into a new basis with new information. Thus, it is a direct translation of what actually happened on the court (as we make precise below). As such, our methodology is at least complementary to existing work, and is also promising in presenting a new approach to understanding and appreciating the nuances of team basketball. Finally, we note that while the methodology that underlies the spectral analysis approach is challenging, the resulting intuitions and insights are readily approachable. In what follows, we have stripped the mathematical details to a minimum and relegated them to references for the interested reader. The analysis, on the other hand, shows promise as a new and practical approach to a difficult problem in basketball analytics. \section{Data} \label{Data} We start with lineup level play-by-play data from the 2015-2016 NBA season. Such play-by-play data is publically available on ESPN.com or NBA.com, or can be purchased from websites like bigdataball.com, already processed into csv format. For a given team, we restrict attention to the 15 players on the roster having the most possessions played on the season, and filter the play-by-play data to periods of games involving only those players. Next, we compute the aggregated raw plus-minus (PM) for each lineup. Suppose lineup $L$ plays against opposing lineup $M$ during a period of gameplay with no substitutions. We compute the points scored by each lineup, as well as the number of possessions for both lineups during that stretch of play. For example, if lineup $L$ scored 6 points in 3 possessions and lineup $M$ scored 3 points in 2 possessions, then their plus-minus is computed as the difference in points-per-possession times possessions. Thus, for $L$ the plus-minus is $(\frac{6}{3} - \frac{3}{2})3 = 1.5$ while for $M$ the plus-minus is $(\frac{3}{2} - \frac{6}{3})2 = -1$. Summing over all of lineup $L$'s possessions gives the total aggregate plus-minus for lineup $L$ which we denote by $\text{pm}_L$. Since a lineup consists of 5 players on the floor, there are $3003={15\choose 5}$ possible lineups, though most see little or no playing time. We thus naturally arrive at a function on lineups by associating with $L$ the value of that lineup's aggregate plus-minus, and write $f(L)=\text{pm}_L$. We call $f$ the team success function. This particular success metric has the advantage of being simple and intuitive. Moreover, by summing over all lineups we recover the value of the team's cumulative plus-minus, which is highly correlated with winning percentage. The function $f$ will serve as the foundation for our analysis, but we note that for what follows, any quantitative measure of a lineup's success could be substituted in its place. \section{Methodology} \label{methodology} Our goal is now to decompose the function $f$ in a way that sheds light on the various group contributions to team success. The groups of interest are generalized lineups, meaning groups of all sizes, from individual players to pairs, triples, groups of four, and full five-player lineups. Our primary tool is spectral analysis, which uses the language of representation theory \citep{serre2012linear} to understand functions on lineups. Observe that a full lineup is an unordered set of five players. Any reshuffling of the five players on the floor, or the ten on the bench, does not change the lineup under consideration. Moreover, given a particular lineup, a permutation (or reshuffling) of the fifteen players on the team will result in a new lineup. The set of such permutations has a rich structure as a mathematical group. In this case, all possible permutations of fifteen players are described by $S_{15}$: the symmetric group on 15 items \citep{Dummit:2004}. Furthermore, the set $X$ of five-player lineups naturally reflects this group structure (as a homogeneous space). Most importantly for our purposes, the set of functions on lineups has robust structure with respect to the natural action of permutations on functions. This structure is well understood and can be exploited for data analytic insights as we show below. By way of analogy, just as traditional Fourier analysis looks to decompose a time series into periodicities that can reveal a hidden structure (weekly or seasonal trends, say), our decomposition of $f$ will reveal group effects in lineup-level data. Let $L(X)$ denote the collection of all real valued functions on five-player lineups. This set is a vector space with the usual notions of sum of functions, multiplication by scalars, and an inner product given by \begin{equation} \label{inner product} \langle g,h \rangle =\frac{1}{|X|}\sum_{x\in X} g(x)h(x). \end{equation} The dimension of $L(X)$ is equal to the number of lineups, $3003={15\choose 5}$. In light of the permutation group's action on $L(X)$ as mentioned above, $L(X)$ admits a natural (invariant and irreducible) decomposition as follows: \begin{equation} \label{decomposition} L(X)=V_0\oplus V_1 \oplus V_2\oplus V_3\oplus V_4\oplus V_5 . \end{equation} Each $V_i$, with $0\le i \le 5$ is a vector subspace with data analytic significance. Rather than give a self contained treatment of this decomposition, we refer to \cite{Diaconis:1988} and \cite{Dummit:2004}, and here, simply note that each space is spanned by the matrix coefficients of the irreducible representations of the group $S_{15}$ associated with Young tableaux of shape $(10,5)$. We can gain some intuition for the decomposition by considering the lower-order spaces as follows. An explicit computation of the decomposition is given in section \ref{sec:toy example section} below for a toy example. Take $\delta_L$ to be the indicator function of a fixed lineup $L$, so that $\delta_L(L)=1$, while $\delta_{L}(L')=0$ for any other lineup $L'$. As above, $X$ is the set of all possible lineups, and \begin{equation} \label{meanspace} \delta=\sum_{L\in X}\delta_L. \end{equation} If we act on the function $\delta$ by reshuffling lineups (this is the action of the permutation group $S_{15}$), we see that while the terms in the summation in (\ref{meanspace}) get reordered, the function itself remains unchanged. (See section \ref{sec:toy example section} below for details.) Thus, the one-dimensional space spanned by $\delta$ is invariant under lineup reshuffling and represents the mean value of the function $f$ since we can write $f=c\delta+(f-c\delta)$. Here, $c$ is just the average value of $f$ and $c\delta$ is the best possible constant approximation to $f$. The function $f-c\delta$ represents the original data, but now centered with mean zero, and orthogonal to the space of constant functions with respect to the inner product in (\ref{inner product}). The space spanned by $\delta$ is $V_0$ in (\ref{decomposition}). To understand $V_1$, we start with indicator functions for individual players. Given a player $i$, define $\delta_i=\sum_{L\in\mathcal{L}_i}\delta_{L}-m\delta$ where the sum is over all lineups that include player $i$ and four other players, and $m$ is a constant chosen so that $\delta_i$ is orthogonal to $\delta$. One can show that the space spanned by $\{\delta_1,\delta_2,\ldots\delta_{15}\}$ is again stable under lineup reshuffling. (Though the set of individual indicator functions is linearly dependent, and only spans a 14-dimensional space as we'll see below.) The decomposition continues in an analogous way, though the computations become more involved. Several computational approaches are described in \cite{Diaconis:1988} and \cite{maslen:2003}. In our case of the symmetric group $S_{15}$ acting on lineups, we employ the method in \cite{maslen:2003}, which involves first computing the adjacency matrix of an associated {\it Johnson graph} $J(15,5)$. It turns out that $J(15,5)$ has 6 eigenvalues, each of which is associated with one of the effect spaces: zero (mean), and first through fifth-order spaces. Specifically, the largest eigenvalue is simple and is associated with the one-dimensional mean space; the second largest eigenvalue is associated with the first-order space, etc. It is now a matter of computing an eigenbasis for each space, and using it to project the data vector onto each eigenspace to give the orthogonal decomposition used in (\ref{decomposition}). It is also worth noting that spectral analysis includes the traditional analysis of variance as a special case, a connection suggested by the discussion above and further explained in \cite{Diaconis:1988}. The decomposition in (\ref{decomposition}) is particularly useful for two reasons. First, each $V_i$ can be interpreted as the space of functions encoding $i$-th order effects. For instance, one can see that $V_1$ is naturally understood as encoding first-order individual effects beyond the mean. Thus, the projection of $f$ onto $V_1$ can be understood as that part of team success $f$ attributable to the contributions of individual players. Similarly $V_2$ includes effects attributable to pure player pairs (individual contributions have been removed), and the corresponding projection of $f$ in $V_2$ gives the contributions of those pairs to team success. $V_3$ encodes contributions of groups of three, and so on. These interpretations follow from the fact that each subspace in the decomposition of $L(X)$ is invariant under the natural reshuffling action of $S_{15}$ on lineups. It is also worth noticing that the lineup success function is completely recovered via its projections onto the order subspaces in (\ref{decomposition}). If we write $f_i$ for the projection of $f$ onto $V_i$, then $f=f_0+f_1+f_2+f_3+f_4+f_5$. As such, the spectral decomposition gives a complete description of the original data set with respect to a new basis grounded in group contributions. Secondly, the decomposition in (\ref{decomposition}) is orthogonal (signified by the $\oplus$ notation). From a data analytic perspective, this means that there is no overlap among the spaces, and group effects are independent. Thus, for instance, a contribution attributed to a group of three players can be understood as a pure third-order contribution. All constituent pair and individual contributions have been removed and quantified separately in the appropriate lower-order spaces. We thus avoid erroneous attribution of success due to multicollinearity among groups. For example, is a big three really adding value as a triple, or is its success better understood as a strong pair plus an individual? The spectral decomposition in (\ref{decomposition}) provides a quantitative basis for answering such questions. The advantage of the orthogonality of the spaces in (\ref{decomposition}), however, presents a challenge with respect to direct interpretation of contributions for particular groups. This is evident when considering the dimension of each of the respective effect spaces in Table \ref{ProjSpaceDimensions}, which is strictly smaller than the number of groups of that size we might wish to analyze. \begin{table} \centering \begin{tabular}{ccc} \hline Space & Dimension & Number of Groups\\ \hline $V_0$ & 1 & --\\ $V_1$ & 14 & 15\\ $V_2$ & 90 & 105\\ $V_3$ & 350 &455\\ $V_4$ & 910 & 1365\\ $V_5$ & 1638 & 3003\\ \hline \end{tabular} \caption{Dimension of each effect space, along with the number of natural groups of each size.} \label{ProjSpaceDimensions} \end{table} Since we have rosters of fifteen players, there are fifteen individual contributions to consider. The space $V_1$, however, is 14-dimensional. Similarly, while $V_2$ includes all of the contributions to $f$ attributable to pairs of players, it does so in a 90-dimensional space despite the fact that there are $105={15\choose 2}$ natural pairs of players to consider. The third-order space $V_3$ has dimension 350 while there are 455 player triples, and so on. We deal with this issue using Mallows' method of following easily interpretable vectors as in \cite{Diaconis:1988}. Let $g$ be a group of players. For example, if players are labeled 1 through 15, then a particular triple might be $g=\{1,2,7\}$. Let $\phi_g$ be the indicator function associated with $g$, i.e., the function that takes the value 1 when all three players 1, 2, and 7 are in a lineup, and outputs 0 otherwise. The function $\phi_g$ is intuitively associated with the success of the group $g$ (though it is not invariant under reshuffling and is not orthogonal to nested lower-order groups). To quantify the contribution of $g$ (as a pure triple) to the success of the team as measured by $f$, project both $\phi_g$ and $f$ onto $V_3$ and take the inner product of the projections: $\langle pr_{V_3}(\phi_g), pr_{V_3}(f)\rangle = \langle pr_{V_3}(\phi_g), f_3\rangle$. After projecting onto $V_3$ we are left with only the third-order components of $\phi_g$ and $f$. The resulting inner product is a weighted cosine similarity that indicates the extent to which the pure triple $g$ is correlated with the team's success $f$. Larger values of this inner product reflect a stronger synergy between the triple of players $\{1,2,7\}$, while a negative value indicates that, after removing the contributions of the constituent individuals and pairs, spectral analysis finds this particular group of three ineffective. In the results below we show how this information might be useful in evaluating lineups. \section{Two-On-Two Basketball} \label{sec:toy example section} To ground the ideas of the previous section we present a small-scale example in detail. Consider a version of basketball where a team consists of 5 players, two of which play at any given moment. The set of possible lineups consists of the ten unordered pairs $\{i,j\}$ with $i,j\in\{1,2,3,4,5\}$ and $i\ne j$. The symmetric group $S_5$ acts on lineups by relabeling, and we extend this action to functions on lineups as follows. Given a permutation $\pi$, a function $h$, and a lineup $L$, define \begin{equation} (\pi\cdot h)(L)=h(\pi^{-1}L). \end{equation} Therefore, if $\pi$ is the permutation $(123)$, taking player 1 to player 2, player 2 to player 3, player 3 to player 1, and leaving everyone else fixed, and if $L$ is the lineup $\{1,3\}$, then \begin{equation} (\pi\cdot h)(L) = h(\pi^{-1}\{1,3\}) = h(\{3,2\}). \end{equation} The use of the inverse is necessary to ensure that the action on functions respects the operation in the group, that is, so that $(\tau\pi)\cdot h = \tau\cdot (\pi\cdot h)$ \citep{Dummit:2004}. Following a season of play, we obtain a success function that gives the plus-minus (or other success metric) of each lineup. We might observe a function like that in Table \ref{ToyLineupFunction}. \begin{table}[ht] \centering \begin{tabular}{cc|cc} \hline $L$ & $f(L)$ & $L$ & $f(L)$\\ \hline $\{1,2\}$ &22 &$\{2,4\}$& 35\\ $\{1,3\}$ &18 &$\{2,5\}$& 26\\ $\{1,4\}$ &3 &$\{3,4\}$& 84\\ $\{1,5\}$ &58 &$\{3,5\}$& 25\\ $\{2,3\}$ &93 &$\{4,5\}$& 2\\ \end{tabular} \caption{Success function for two-player lineups.} \label{ToyLineupFunction} \end{table} Summing $f(L)$ over all lineups that include a particular player gives individual raw plus-minus as in Table \ref{ToyIndPM}. \begin{table}[ht] \centering \begin{tabular}{ccc} \hline Player & PM &Rank \\ \hline 1 & 101&5 \\ 2 & 176&2\\ 3 & 220&1\\ 4 & 124&3\\ 5 & 111& 4\\ \end{tabular} \caption{Preliminary analysis of sample team using individual plus-minus (PM), which is the sum of the lineup PM over lineups that include a given individual.} \label{ToyIndPM} \end{table} Player 3 is the top rated individual, followed by 2, 4, 5, and 1. Lineup rankings are given by $f(L)$ itself, which shows $\{2,3\},\{3,4\}$, and $\{1,5\}$ as the top three. Now compare the analysis above with spectral analysis. In this context the vector space of functions on lineups is 10-dimensional and has a basis consisting of vectors $\delta_{\{i,j\}}$ that assign the value 1 to lineup $\{i,j\}$ and 0 to all other lineups. The decomposition in (\ref{decomposition}) becomes \begin{equation} \label{toy decomposition} V=V_0\oplus V_1\oplus V_2. \end{equation} Define $\delta = \sum_{\{i,j\}}\delta_{\{i,j\}}$. The span of $\delta$ is the one-dimensional subspace $V_0$ of constant functions. Moreover, $V_0$ is $S_5$ invariant since for any relabeling of players given by $\pi$, we have $\pi\cdot\delta =\delta$. Given a function $f$ in $V$, its projection $f_0$ on $V_0$ will assigns to each lineup the average value of $f$, in this case 36.6. First order (or individual) effects beyond the mean are in encoded in $V_1$. Explicitly, define $\delta_1 = \sum_i\delta_{\{1,i\}}-\frac{2}{5}\delta$, with $\delta_2,\delta_3,$ and $\delta_4$ defined analogously. One can check that the 4-dimensional vector space spanned by $\{\delta_1,\delta_2,\delta_3,\delta_4\}$, is $S_5$ invariant, and is orthogonal to $V_0$. Since the mean has been subtracted out and accounted for in $V_0$, a vector in $V_1$ represents a pure first order effect. Note that $\delta_5(x)=\sum_i\delta_{\{5,i\}}-\frac{2}{5}\delta$ can be written $\delta_5=-\delta_1-\delta_2-\delta_3-\delta_4$. Consequently, $V_1$ is 4-dimensional even though there are five natural first order effects to consider: one for each player. Finally, the orthogonal complement of $V_0\bigoplus V_1$ is the 5-dimensional $S_5$ invariant subspace $V_2$. $V_2$ gives the contribution to $f$ from {\it pure pairs}, or pure second order effects after the mean and individual contributions are removed. The three subspaces $V_0$, $V_1$, and $V_2$ are all irreducible since none contains a nontrivial $S_5$ invariant subspace. We can now project $f$ onto $V_0, V_1$, and $V_2$. All together we have $f=f_0+f_1+f_2$: \begin{equation} \label{toy_decomposition} f\left( \begin{array}{c} \{1,2\}\\ \{1,3\}\\ \{1,4\}\\ \{1,5\}\\ \{2,3\}\\ \{2,4\}\\ \{2,5\}\\ \{3,4\}\\ \{3,5\}\\ \{4,5\}\\ \end{array}\right) =\left[ \begin{array}{r} 22 \\ 18 \\ 3 \\ 58 \\ 93\\ 35 \\ 26 \\ 84 \\ 25 \\ 2 \\ \end{array} \right] = \left[ \begin{array}{c} 36.6 \\ 36.6 \\ 36.6 \\ 36.6 \\ 36.6\\ 36.6 \\ 36.6 \\ 36.6 \\ 36.6 \\ 36.6 \\ \end{array} \right] + \left[ \begin{array}{r} -5.27 \\ 9.40 \\ -22.60 \\ -26.93 \\ 34.40\\ 2.40 \\ -1.93 \\ 17.07 \\ 12.73 \\ -19.27 \\ \end{array}\right] + \left[ \begin{array}{r} -9.33 \\ -28.00 \\ -11.00 \\ 48.33 \\ 22.00\\ -4.00 \\ -8.67 \\ 30.33 \\ -24.33 \\ -15.33 \\ \end{array}\right] \end{equation} Turning to the question of interpretability, section \ref{methodology} proposes Mallows' method of using readily interpretable vectors projected into the appropriate effect space. To that end, the individual indicator function $\phi_{\{2\}}=\delta_{\{1,2\}}+\delta_{\{2,3\}}+\delta_{\{2,4\}}+\delta_{\{2,5\}}$ is naturally associated with player 2: $\phi_{\{2\}}(L)=1$ when player 2 is in $L$ and is 0 otherwise. We quantify the effect of player $2$ by projecting $\phi_{\{2\}}$ and $f$ into $V_1$, and then taking the dot product of the projections. For a lineup like $\{2,3\}$, we take the dot product of the projections of the lineup indicator function $\delta_{\{2,3\}}$, and $f$, in $V_2$. Note that player 2's raw plus-minus is the inner product of $10\cdot f$ with the interpretable function $\phi_{\{2\}}$. Similarly $f(\{i,j\})$ is $10\cdot\langle f,\phi_{\{i,j\}}\rangle$. The key difference is that spectral analysis uses Mallow's Method {\it after} projecting onto the orthogonal subspaces in (\ref{toy decomposition}). Contributions from spectral analysis as measured by Mallows' method are given in Table \ref{toyspec} for both individuals and (two-player) lineups. \begin{table}[ht] {\footnotesize \centering \label{toyspec} \begin{tabular}{cc?cccc|cccc} \hline Individual& Spec&Pair &Spec &Rank &$f$ Rank& Pair & Spec &Rank & $f$ Rank\\ \hline \{1\}&-45.4& \{1,2\} &-9.3 & 6 &7&\{2,4\}&-4&4&4\\ \{2\}&29.6&\{1,3\} &-28 &10 &8&\{2,5\} &-8.7&5&5\\ \{3\}&73.6&\{1,4\}& -11 & 7 &9&\{3,4\}& 30.3&2&2\\ \{4\}&-22.4&\{1,5\}&48.3 & 1 &3&\{3,5\}& -24.3&9&6\\ \{5\}&-35.4&\{2,3\}& 22 &3& 1&\{4,5\} & -24&8&10\\ \end{tabular} \caption{Spectral value (Spec) for each individual player and two-player lineup, and rank of each lineup, along with the preliminary rank given by $f$.} } \end{table} The table also includes both the spectral and preliminary (based on $f$) rankings of each lineup. Note that lineup $\{2,3\}$ drops from the best pair to the third best pure pair. Once we account for the contributions of players two and three as individuals, the lineup is not nearly as strong as it appears in the preliminary analysis. We find stronger pair effects from lineups $\{1,5\}$ and $\{3,4\}$. All remaining lineups are essentially ineffective in that their success can be attributed to the success of the constituent individuals rather than the pairing. Interesting questions immediately arise. What aspects of player four's game result in a more effective pairing with player three, the team's star individual player, than the pairing of three with two, the team's second best individual? What is behind the success of the $\{1,5\}$ lineup? These considerations are relevant to team construction, personnel considerations, and substitution patterns. We pursue this type of analysis further in the context of an actual NBA team below. {\iffalse section{Comparison with a Modeling Approach on Simulated Data} \label{sec:Simulation} Before turning to actual NBA data we briefly compare spectral analysis with a linear modeling approach on a simulated data set. Starting from the modeling framework in \citep{Sill:2010}, which used ridge regression to improve the adjusted plus-minus metric to estimate individual player contributions, it is natural to consider extending this framework to evaluate groups. Examples of this approach can be found in \citep{grassetti2019estimation} and \citep{grassetti2019play}, which studied the relationship between individual and lineup contributions. While a thorough comparison of modeling approaches and spectral analysis is left to future work, we present a simple example here to illustrate the fact that the approaches can differ significantly. Along these lines, we simulate observations for a hypothetical team with ten players labeled $A$, $B$, $C$, \ldots, $J$. First we generate a baseline of three hundred observations by randomly choosing a lineup of five players and giving it a value randomly chosen from a normal distribution with mean 10 and standard deviation 3. Next we insert a signal as follows. We add thirty observations that include player $A$ and four randomly chosen teammates valued at 25, and we add thirty observations that include player $B$ and four randomly chosen teammates valued at 20. We introduce a pair effect by adding thirty observations including players $A$, $B$ and three random teammates valued at 35, and a somewhat sparser signal for triples by adding only 20 observations including $A$, $B$, $C$ and two random teammates, valued at 50. We analyze the data using ridge regression and spectral analysis to identify individual and group effects. Recall that ridge regression fits a linear model that minimizes the usual residual sum of squares along with an $L_2$ penalty on the coefficient vector (without the intercept) \citep{friedman2001elements}. The regression is run using the glmnet package in R, and using ten-fold cross validation to select the optimal shrinkage parameter $\lambda$. One would hope to find four dominant contributions coming from individuals $A$ and $B$, the pair $\{A,B\}$, and the triple $\{A,B,C\}$. The top five group effects according to ridge regression are, with coefficients in parentheses, $\{A,B,C,E,G\}$ (1.36), $\{A,B,C,D,F\}$ (1.02), $\{A,B,C,E,I\}$ (0.77), $\{A,B,D,I,J\}$ (0.75), and $\{A,B,D,E,H\}$ (0.75). The triple $\{A,B,C\}$ ranks 24-th (0.38), the pair $\{A,B\}$ ranks 34-th (0.31), and the individuals $A$ and $B$ rank 61-st (0.20) and 63-rd (0.19), respectively. Finally, Table \ref{specsim} shows the top five groups (by their Mallows coefficients) in the first, second, and third order spaces in spectral analysis. \begin{table}[ht] \centering \scriptsize \label{specsim} \begin{tabular}{cc|cc|cc} \hline Individual & Coefficient &Pair & Coefficient&Triple & Coefficient\\ \hline B & 302.0 & $\{A,B\}$ & 160.1 &$\{A,B,C\}$ & 78.8 \\ A & 266.3 &$\{B,C\}$ & 79.6 &$\{B,F,G\}$ & 50.3\\ C & 98.4 &$\{A,C\}$ & 79.2 &$\{B,D,E\}$ & 29.9 \\ F & -65.4 &$\{F,G\}$ & 70.8 &$\{A,F,H\}$ & 29.7 \\ I & -69.4 &$\{D,E\}$ & 47.1 &$\{B,F,H\}$ & 29.7 \\ \end{tabular} \caption{Spectral analysis for individuals, pairs, and triples on the simulated data.} \end{table} Spectral analysis is successful at both identifying all of the dominant contributions and separating them from the noise. Also notably, the fourth and fifth order effect spaces are identified by spectral analysis as pure noise. \fi} \section{Results and Discussion} A challenge inherent in working with real lineup-level data is the wide disparity in the number of possessions that lineups play. Most teams have a dominant starting lineup that plays far more possessions than any other. For example, the starting lineup of the '16 Golden State Warriors played approximately 1140 possessions while the next most used lineup played 535 possessions. Only 12 lineups played more than 100 possessions for the Warriors on the season. For the Boston Celtics, the starters played 1413 possessions compared to 257 for the next most utilized, with 13 lineups playing more than 100 possessions. By contrast, the Celtics had 255 lineups that played fewer than 10 possessions (but at least one), and the Warriors had 236. Numbers are similar across the league. This is another reason for using raw plus-minus in defining the team success function $f$ on lineups. A metric like per-possession lineup plus-minus breaks down in the face of large numbers of very low possession lineups and a few high possession lineups. Still, we want to identify potentially undervalued and underutilized groups of players-- especially for smaller groups like pairs and triples where there are many more groups that do play significant numbers of possessions. Another consideration is that over time, lineups with large numbers of possessions will settle closer to their true mean value while lineups with few possessions will be inherently noisier. As a result, we perform the spectral analysis on $f$ as described in section \ref{methodology} above, and then normalize the spectral contribution by the log of possessions played by each group. We call the result {\it spectral contribution per log possession} (SCLP). This balances the considerations above and allows strong lower possession groups to emerge while not over-penalizing groups that do play many possessions. Despite these challenges, however, we'll see below that there are significant insights to be gained in working with lineup level data. Moreover, since spectral analysis is a non-model-based description of complete lineup-level game data, it has the advantage of maintaining close proximity to the actual gameplay observed by coaches, players, and fans. There are always five players on the floor, so all data begins at the level of full lineups. Consider the first order effects for the 15-16 Golden State Warriors in Table \ref{GSWFirstTable}. Draymond Green, Stephen Curry, and Klay Thompson are the top three players. The ordering, specifically Green ranked above Curry, is perhaps interesting, though it's worth noting that this ordering agrees with ESPN's real plus-minus (RPM). (Green led the entire league in RPM in 15-16.) Other metrics like box plus-minus (BPM) and wins-above-replacement (WAR) rank Curry higher. Because SCLP is based on ability of lineups to outscore opponents when the player is on the floor (like RPM), however, as opposed to metrics like BPM and WAR which are more focused on points produced, the ordering is defensible. \begin{table}[ht] \begin{center} \begin{tabular}{lccc} Player & SCLP & PM & Poss\\ \hline Draymond Green& 17.2& 1038.4 & 5800\\ Stephen Curry & 15.9 & 978.7& 5610\\ Klay Thompson & 12.0 & 808.6& 5453\\ Andre Iguodala & 3.5 & 436.1 & 3516\\ Andrew Bogut & 2.8 & 403.6 & 2951\\ \hline\hline Marreese Speights & -7.4 & 20.0& 1630\\ Ian Clark & -9.8 & -51.9& 1108\\ Anderson Varejao & -11.1 & -34.4& 368\\ Jason Thompson & -11.2& -33.8 & 339\\ James Michael McAdoo & -12.1& -85.0& 526\\ \end{tabular} \caption{Top and bottom five first-order effects for GSW. SCLP is the spectral contribution per log possession, PM is the player's raw plus-minus, and Poss is the number of possessions for that player.}\label{GSWFirstTable} \end{center} \end{table} In fact, a closer look at the interpretable vector $\phi_i$ associated with individual player $i$ (as described in sections \ref{methodology} and \ref{toyspec}) reveals that $\phi_i=\delta_i+c\cdot \delta$, so is just a non-mean-centered version of the first order invariant functions that span $V_1$. Consequently, the spectral contribution (non-possession normalized) is a linear function of individual plus-minus, so reflects precisely that ordering. This is not the case for higher-order groups, however, which is where we focus the bulk of our analysis. The second-order effects are given in in Table \ref{GSWSecondTable}, and quantify the contributions of player pairs, having removed the mean, individual, and higher-order group effects. The top and bottom five pairs (in terms of SCLP) are presented here, with more complete data in Table \ref{AGSWSecondTable} in the appendix. \begin{table}[ht] {\footnotesize \begin{center} \begin{tabular}{llccc} \hline P1 &P2 & SCLP & PM & Poss\\ \hline Draymond Green & Stephen Curry & 13.3& 979.9 & 5102\\ Stephen Curry & Klay Thompson & 11.2& 827.8 & 4311\\ Draymond Green & Klay Thompson & 11.1& 847.8 & 4678\\ Leandro Barbosa & Marreese Speights& 5.3& 76.2 & 983\\ Draymond Green & Andre Iguodala& 4.3& 490.0 & 2165\\ \hline\hline Draymond Green & Ian Clark & -7.2 & 33.3 & 424 \\ Klay Thompson & Leandro Barbosa & -7.2 & 4.8 &349 \\ Stephen Curry & Ian Clark & -8.1 & 14.0 & 220 \\ Draymond Green & Anderson Varejao & -9.5 & 7.2 & 217\\ Stephen Curry & Anderson Varejao & -10.1 & -26.9 & 237 \\ \hline \end{tabular} \caption{Top and bottom five SCLP pairs with at least 200 possessions, along with raw plus-minus and possessions.}\label{GSWSecondTable} \end{center} } \end{table} Even after accounting for and removing their strong individual contributions, however, it is notable that Green--Curry, Curry--Thompson, and Green--Thompson are the dominant pair contributors by a considerable margin, with SCLP values that are all more than twice as large as for the next largest pair (Barbosa--Speights). These large positive SCLP values represent true synergies: These pairs contribute to team success {\it as pure pairs}. The fact that the individual contributions of the constituent players are also positive results in a stacking of value within a lineup that provides a quantifiable way of assessing whether the whole does indeed add to more than the sum of its parts. Reserves Leandro Barbosa, Mareese Speights, and Ian Clark, on the other hand, were poor individual contributors, but manage to combine effectively in several pairs. In particular, the Barbosa--Speights pairing is notable as the fourth best pure pair on the team (in 983 possessions). After accounting for individual contributions, lineups that include the Barbosa--Speights pairing benefited from a real synergy that positively contributed to team success. This suggests favoring, when feasible, lineup combinations with those two players together to leverage this synergy and mitigate their individual weaknesses. Tables \ref{SmallBogutPairs} and \ref{SmallLivingstonPairs} show pair values for players Andrew Bogut and Shaun Livingston (again in pairs with at least 150 possessions, and with more detailed tables in the appendix). Both players are interesting with respect to second order effects. While Bogut was a positive individual contributor, and was a member of the Warriors' dominant starting lineup that season, he largely fails to find strong pairings. His best pairings are with Klay Thompson and Harrison Barnes, while he pairs particularly poorly with Andre Iguodala (in a considerable 785 possessions). This raises interesting questions as to why Bogut's style of play is better suited to players like Thompson or Barnes rather than players like Curry or Iguodala. Also noteworthy is the fact that the Bogut--Iguodala pairing has a positive plus-minus value of 107. The spectral interpretation is that this pairing's success should be attributed to the individual contributions of the players, and once those contributions are removed, the group lacks value as a pure pair. \begin{table}[h!] {\footnotesize \begin{center} \begin{tabular}{llccc} \hline P1 &P2 & SCLP & PM & Poss\\ \hline Andrew Bogut & Klay Thompson& 3.7 & 394.3& 2637\\ Andrew Bogut & Harrison Barnes & 2.1 & 206.2& 1527 \\ Andrew Bogut & Stephen Curry & 1.6 & 378.5& 2530 \\ \hline Andrew Bogut & Andre Iguodala & -2.1 & 107.0 & 785 \\ \hline \end{tabular} \caption{Select pairs involving Andrew Bogut (with at least 150 possessions).}\label{SmallBogutPairs} \end{center} } \end{table} \begin{table}[h!] {\footnotesize \begin{center} \begin{tabular}{llccc} \hline P1 &P2 & SCLP & PM &Poss\\ \hline Shaun Livingston & Anderson Varejao & 2.0 & -1.5 & 174 \\ Shaun Livingston & Marreese Speights & 1.6 & 17.8 & 1014 \\ Shaun Livingston & Draymond Green & 1.2 & 323.6 & 1486\\ \hline Shaun Livingston & Andre Iguodala & -1.3 & 65.2 & 1605 \\ Shaun Livingston & Klay Thompson & -3.6 & 111.8 & 1412 \\ \hline \end{tabular} \caption{Select pairs involving Shaun Livingston (with at least 150 possessions).}\label{SmallLivingstonPairs} \end{center} } \end{table} Shaun Livingston, on the other hand, played an important role as a reserve point guard for the Warriors. Interestingly, Livingston's worst pairing by far was with Klay Thompson. Again, considering the particular styles of these players compels interesting questions from the perspective of analyzing team and lineup compositions and playing style. It's also noteworthy that this particular pairing saw 1412 possessions, and it seems entirely plausible that its underlying weakness was overlooked due to the healthy 111.8 plus-minus with that pair on the floor. The success of those lineups should be attributed to other, better synergies. For example, one rotation added Livingston as a sub for Barnes (112 possessions). Another put Livingston and Speights with Thompson, Barnes, and Iguodala (70 possessions). Finally, it's also interesting to note that Livingston appears to pair better with other reserves than with starters (save Draymond Green, further highlighting Green's overall value), an observation that raises important questions about how players understand and occupy particular roles on the team. Table \ref{GSWThirdTable} shows the best and worst triples with at least 200 possessions. \begin{table} {\footnotesize \begin{center} \begin{tabular}{lllrrr} \hline P1 & P2 &P3 & SCLP & PM & Poss \\ \hline Draymond Green & Stephen Curry & Klay Thompson & 12.6 & 812.7 & 4085 \\ Draymond Green & Klay Thompson & Harrison Barnes & 5.9 & 427.3 & 2473 \\ Draymond Green & Stephen Curry & Andre Iguodala & 5.8 & 464.8 & 1830 \\ Stephen Curry & Klay Thompson & Harrison Barnes & 5.7 & 416.5 & 2431 \\ Stephen Curry & Klay Thompson & Andrew Bogut & 4.9& 382.2 & 2296 \\ \hline\hline Stephen Curry & Andre Iguodala & Brandon Rush & -3.8 & -13.5 & 207 \\ Draymond Green & Stephen Curry & Marreese Speights & -4.1 & 97.9 & 299 \\ Draymond Green & Klay Thompson & Marreese Speights & -4.5 & 52.2 & 250 \\ Draymond Green & Klay Thompson & Ian Clark & -5.8 & 9.8 & 316 \\ Draymond Green & Stephen Curry & Ian Clark & -7.4 & 14.5 & 205 \\ \hline \end{tabular} \caption{Best and worst third-order effects for GSW with at least 200 possessions.}\label{GSWThirdTable} \end{center} } \end{table} The grouping of Green--Curry--Thompson is far and away the most dominant triple, and safely (and unsurprisingly) earns designation as the Warriors' big three. Other notable triples include starters like Green and Curry or Green and Thompson together with Andre Iguodala who came off the bench, and more lightly used triples like Curry--Barbosa--Speights who had an SCLP of 4.6 in 245 possessions. Analyzing subpairs of these groups shows a better stacking of synergies in the triples that include Iguodala--he pairs well with Green, Curry, and Thompson in the second order space as well, while either of Barbosa or Speights paired poorly with Curry. Still, Barbosa with Speights was quite strong as a pair, and we see that the addition of Curry does provide added value as a pure triple. Interesting ineffective triples include Iguodala and Bogut with either of Curry or Green, especially in light of the fact that Bogut--Iguodala was also a weak pairing (see detailed tables in the appendix). Figure \ref{GSW3scatter} shows that the most effective player-triples as identified by spectral analysis are positively correlated with higher values of plus-minus. \begin{figure}[ht] \centering \includegraphics[width = \textwidth]{g3scatter.pdf} \caption{Third-order effects for triples with more than 100 possessions the 2015-2016 Golden State Warriors. The $x$-axis gives the group's plus-minus per log possession (PMperLP) while the $y$-axis shows the spectral contribution per log possession (SCLP). Observations are shaded by number of possessions.} \label{GSW3scatter} \end{figure} As raw group plus-minus decreases, however, we see considerable variation in the spectral contributions of the groups (and in number of possessions played). This suggests the following narrative: while it may be relatively easy to identify the team's top groups, it is considerably more difficult to identify positive and negative synergies among the remaining groups, especially when controlling for lower-order contributions. Spectral analysis suggests several opportunities for constructing more optimal lineups with potential for untapped competitive advantage, especially when more obvious dominant groupings are unavailable. Table \ref{SmallBOSThirdTable} shows top and bottom three third-order effects for the 15-16 Boston Celtics. (The appendix includes more complete tables for Boston including effects of all orders.) Figure \ref{GSWBOS3Bar} gives contrasting bar plots of the third-order effects for both Boston and Golden State. \begin{table}[hbt] {\footnotesize \begin{center} \begin{tabular}{lllrrr} \hline P1 & P2 &P3 & SCLP & PM & Poss \\ \hline Evan Turner & Kelly Olynyk & Jonas Jerebko & 2.9 & 110.1 & 879\\ Isaiah Thomas& Avery Bradley & Jared Sullinger & 2.7& 177.7 & 2642\\ Avery Bradley & Jae Crowder & Jared Sullinger & 2.3 & 139.3& 2216\\ \hline\hline Isaiah Thomas & Evan Turner & Kelly Olynyk & -1.8& -30.9 & 870\\ Avery Bradley& Jared Sullinger & Jonas Jerebko & -2.3& -11.7 & 194\\ Isaiah Thomas & Avery Bradley & Jonas Jerebko & -2.4 & -1.6 & 290\\ \hline \end{tabular} \caption{Top and bottom three third-order effects for BOS with at least 150 possessions.}\label{SmallBOSThirdTable} \end{center} } \end{table} \begin{figure}[h!] \centering \includegraphics[width = \textwidth]{GSW_BOS_3_bars.pdf} \caption{Bar graph of third order spectral contributions per log possession (SCLP) for BOS and GSW for groups with more than 150 possessions.} \label{GSWBOS3Bar} \end{figure} The Celtics have fewer highly dominant groups. In particular, we note that the spectral signature of the Celtics is distinctly different from that of the Warriors in that Boston lacks anything resembling the big-three of Golden State. While SCLP values are not directly comparable across teams (they depend, for instance, on the norm of the overall team success function when projected into each effect space), the relative values within an effect-space are comparable. Similarly, the SCLP values also depend on the norm of the interpretable vector used in Mallow's method. As a result, the values are not directly comparable across effect spaces-- a problem we return to below. In fourth and fifth-order spaces the numbers of high-possession groups begins to decline, as alluded to above. (See appendix for complete tables.) Still, it is interesting to note that spectral analysis flags the Warriors small lineup of Green--Curry--Thompson--Barnes--Iguodala as the team's best, even over the starting lineup with Bogut replacing Barnes. It also prefers two lesser-used lineups to the Warriors' second most-used lineup of Green--Curry--Thompson--Bogut--Rush. Also of note is the fact that Golden State's best group of three and best group of four are both subsets of the starting lineup-- another instance of stacking of positive effects--while neither of Boston's best groups of three or four are part of their starting lineup. \section{Connection With Linear Models} \label{sec:LM Section} Before moving on, we consider the connection between spectral analysis and a related approach via linear regression which will likely be more familiar to the sports analytics community. Recalling our assumption of a 15 man roster, consider the problem of modeling a lineup's plus-minus, given by $f(L)$ for lineup $L$, using indicator variables that correspond to all possible groups of players. Label the predictor variables $X_1$, $X_2$,\ldots $X_p$, where each variable corresponds to a group of players (with some fixed group order). Thus, the variable $X_i$ is 1 when the players from group $i$ are on the floor, and zero otherwise. If the first fifteen variables are the indicator functions of the individual players $X_1, X_2,\ldots X_{15}$, then the group variables, the $X_i$ for $i>15$, are interaction terms. For instance, the variable corresponding to the group $\{1,2,3\}$ is $X_1X_2X_3$. This approach is therefore similar to an adjusted plus minus with interactions approach. Including all possible group effects, however, means that the number of predictors is quite large and depending on the number of observations, we may be in a situation where $p>>N$. Moreover, the nature of player usage in lineups means that there is a significant multicollinearity issue. Consequently, an attempt to quantify group effects in a regression model of this sort will rely on a shrinkage technique like ridge regression. Let $N$ be the number of lineups, and $y=f(L)$, an $N\times 1$ column vector. Let $\bf X$ be the $N\times (p+1)$ matrix whose first column is the vector of all ones and where the $i$-th row consists of the binary value of each predictor variable for the $i$-th player group. The vector of ridge coefficients $\hat{\beta}^{\text{ridge}}$ minimizes the penalized residual sum of squares: $\argmin_\beta \left\{ \| y-{\bf X}\beta \|^2 +\lambda\sum_{i=1}^p\beta_i^2 \right\}$. The non-negative parameter $\lambda$ serves as a penalty on the $L_2$-norm of the solution vector. (The intercept is not included in the ridge penalty.) The ridge approach reduces the variability exhibited by the least squares coefficients in the presence of multicollinearity by shrinking the coefficient estimates in the model towards zero (and toward each other). One can show that ridge regression uses the singular values of the covariance matrix associated with the centered version of ${\bf X}$ to disproportionately shrink coefficients associated with inputs where the data exhibits lower degrees of variance. See \cite{friedman2001elements} for details. The fitted coefficients $\hat{\beta_0},\hat{\beta}_1,\ldots\hat{\beta}_p$ in the ridge regression model attempt to measure the contribution of group $i$ while controlling for the contributions of all other groups and individuals. We note that this modeling approach resembles work in \cite{Sill:2010}, \cite{grassetti2019estimation}, and \cite{grassetti2019play}, though there are key differences which we explore below. In particular, note that we model group contributions aggregated over all opponents, and without controlling for the quality of the opponents faced. This simplified approach allows for a more direct comparison with the results of spectral analysis above. Tables \ref{RidgeIndsPairs} and \ref{RidgeTriples} give the ridge regression coefficients associated with the top 5 individuals, pairs, and triples for the Warriors. \begin{table}[h!] {\footnotesize \begin{center} \begin{tabular}{lcc|cllc} \hline Individual & Estimate &\ & \ & P1 & P2 & Pair Estimate \\ \hline Draymond Green &0.28&\ &\ & Draymond Green&Stephen Curry&0.65 \\ Stephen Curry &0.25&\ &\ & Stephen Curry&Andrew Bogut&0.53 \\ Klay Thompson& 0.15&\ &\ & Stephen Curry&Klay Thompson&0.47 \\ Andrew Bogut & 0.14&\ &\ & Draymond Green&Klay Thompson&0.47 \\ Festus Ezeli & 0.02&\ &\ & Draymond Green&Andrew Bogut&0.46 \\ \hline \end{tabular} \caption{Best individuals and pairs using the linear model.}\label{RidgeIndsPairs} \end{center} } \end{table} \begin{table}[hbt] {\footnotesize \begin{center} \begin{tabular}{lllr} \hline P1 & P2 &P3 & Estimate \\ \hline Draymond Green&Stephen Curry&Andrew Bogut& 1.61\\ Stephen Curry&Klay Thompson&Andrew Bogut& 1.49\\ Draymond Green&Stephen Curry&Klay Thompson& 1.39\\ Draymond Green&Klay Thompson&Andrew Bogut& 1.24\\ Draymond Green&Klay Thompson&Harrison Barnes& 1.03\\ \hline \end{tabular} \caption{Top triples according to the linear model.}\label{RidgeTriples} \end{center} } \end{table} Comparing with Tables \ref{GSWFirstTable}, \ref{GSWSecondTable}, and \ref{GSWThirdTable} shows both some overlap in the top rated groups, but also significant differences with respect to both ordering and magnitude of contribution. In particular, the linear model appears to value the contributions of Andrew Bogut considerably more than spectral analysis. It is also notable that spectral analysis identifies a clearly dominant big three of Green--Curry--Thompson, in contrast to the considerably different result arising from the modeling approach which ranks that group third. We can interpret the linear model determined by $\hat{\beta}^{\text{ridge}}$ as giving a similar decomposition to the spectral decomposition in $(\ref{decomposition})$. For each lineup $L$ we have predicted success given by \begin{equation} \label{lm} \hat{\bf y} = {\bf X}_L\hat{\beta}^{\text{ridge}} \end{equation} where ${\bf X}_L$ is now the ${15 \choose 5} \times (p+1)$ matrix whose first column is all 1s, and whose $i,j+1$ entry is 1 if the $j$-th player group is part if the $i$-th lineup. (We have fixed a particular ordering of lineups.) The columns of ${\bf X}_L$ (the $X_i$) that correspond to individual players can be understood as spanning a subspace $W_1$ analogous to $V_1$ in (\ref{decomposition}). Similarly, $W_2$ is spanned by the columns of ${\bf X}_L$ corresponding to pair interactions, and so on for all groups through full five player lineups. The particular linear combinations in each $W_i$ determined by the respective coordinates of $\hat{\beta}^{\text{ridge}}$ are analogous to the ${\bf pr}_{V_i}f$. In fact, the space of all lineup functions can be written \begin{equation} \label{lmdecomp} V=W_0+W_1+W_2+W_3+W_4+W_5, \end{equation} where $W_i$ is the space of interaction effects for groups of size $i$. Still, there are important differences between (\ref{decomposition}) and (\ref{lmdecomp}). While $V_0$ and $W_0$ are both one-dimensional, for $i\ge 1$ the dimensions of the $W_i$ are strictly larger than those of their $V_i$ counterparts. For instance, $W_5$ includes a vector for each possible set of five players from the original fifteen. Similarly $W_4$ and groups of four, and so on. Thus, the dimension of $W_5$ is 3003 (the number of lineups), which is the same as the dimension of $V$ itself. By contrast the dimension of $V_5$ in (\ref{decomposition}) is only 1638. Similarly the dimension of $W_4$ is 1365 while that of $V_4$ is $350$. Clearly, the decomposition in (\ref{lm}) is highly non-orthogonal (explaining the $+$ rather than $\oplus$ notation). It is easy to find vectors in $W_i$ that overlap with $W_j$ in the sense that their inner product is non-zero. In the context of basketball, the contribution of a group of, for example, $5$ players is not necessarily separate from a constituent group of four (or any other number of) players despite the use of shrinkage methods. The decomposition in ($\ref{decomposition}$) is special in that it gives minimal subspaces that are invariant under relabeling and mutually orthogonal as described in section \ref{methodology}. As we've seen, spectral analysis achieves this at the expense of easy interpretation of group contributions. This is a drawback to spectral analysis that (\ref{lm}) does not have, and is an appealing feature of regression models. The interaction term associated with a group of $i$ players in a regression model is easy to understand. Still, as we see above one must balance either ease of interpretation, or orthogonality of effects. \section{Stability} \label{sec: Stability} In this section we take a first step to addressing questions of the stability of spectral analysis. We seek evidence that spectral analysis is indicative of a true signal, and that should the data have turned out slightly differently, the analysis would not change dramatically. Since spectral analysis works on the lineup function $f(L)$, which is aggregated over all of a team's plays involving $L$, we need to introduce variability into the values of $f(L)$. A fully aggregated NBA season is, in a sense, a complete record of all events and lineup outcomes in that season. Still, it seems reasonable to leverage the variability inherent in the many observed results of a lineup's plays, as well as the substitution patterns of coaches, and suggest a bootstrapping approach. To that end, we start with the actual 15-16 season for the Boston Celtics. We can then build a bootstrapped season by sampling plays, with replacement, from the set of all plays in the actual season. (We sample the same number of plays as in the actual season.) A play is defined as a connected sequence of events surrounding a possession in the team's play-by-play data. For example, a play might involve a sequence like a missed shot, offensive rebound, and a made jump shot; or, a defensive rebound followed by a bad pass turnover. When sampling from a team's plays, a particular lineup will be selected with a probability proportional to the number of plays in which that lineup participated. We generate 500 bootstrapped seasons, process each using the methodology of sections \ref{Data} and \ref{methodology} to produce success functions $f_{\text{boot}}$, and then apply spectral analysis to each. We thus have a bootstrapped distribution of lineup plus-minus and possession values over each lineup $L$, which in turn gives plus-minus and possession distributions of all player-groups. While the the number of possessions played is highly stable for both full-lineups and smaller player-groups, there is considerable variability in plus-minus values over the bootstrapped seasons. Lineups with a significant number of possessions exhibit both positive and negative performance, and the balance between the positive and negative plays is delicate. The variability in group PM presents a challenge in gauging the stability of the spectral analysis associated with a player group. Take, for example, the Thomas--Bradley--Crowder triple for the Celtics. The actual season's plus-minus for this group was 154.8 in 2572 possessions. Over the bootstrapped seasons the group has means of 145.9 and 2574.1 for plus-minus and possessions, respectively. On the other hand, the standard deviation of the plus-minus values is 82.8 versus only 47.7 for possessions. Thus, some of the variability in the spectral contribution of the group over the bootstrapped seasons should be expected since, in fact, the group was less effective in some of those seasons. Figure \ref{BOS3BootGroup0} shows SCLP plotted against PMperLP for the Thomas--Bradley--Crowder triple in 500 bootstrapped seasons. Of course, spectral analysis purports to do more than raw plus-minus by removing otherwise confounding colinearities and overlapping effects. Not surprisingly, therefore, we still see variability in SCLP within a band of plus-minus values, but the overall positive correlation, whereby SCLP increases in seasons where the group tended to outscore its opponents, is reasonable. \begin{figure}[htbp] \centering \includegraphics[width = \textwidth]{BOS3BootGroup0.png} \caption{Spectral contribution per log possession (SCLP) versus plus-minus per log possession (PMperLP) for Thomas--Bradley--Crowder triple in 500 bootstrapped seasons. Each bootstrapped season consists of sampling plays (connected sequences of game events) with replacement from the set of all season plays. Resampled season data is then processed as in section \ref{Data} and group contributions are computed via spectral analysis as in section \ref{methodology}.} \label{BOS3BootGroup0} \end{figure} Also intuitively, the strength of the correlation between group plus-minus and spectral contribution depends on the number of possessions played. Fewer possessions means that group's contribution is more dependent on other groups and hence exhibits more variability. The mean possessions for the Thomas--Bradley--Crowder triple in Fig.\ref{BOS3BootGroup0} is 2574, and has a Pearson correlation of $r=0.953$. The group Thomas--Turner--Zeller, on the other hand, has $r=0.688$ with a mean of 305 possessions. A group like Jared Sullinger--Marcus Smart is particularly interesting. This pair has a season plus-minus of 25.0 in 1116 possessions. In 500 bootstrap seasons, they have a mean plus-minus of 23.6 and mean possessions of 1118.3. The value of the group's plus-minus is negative in only $32.4\%$ of those seasons. Should this group, therefore, be considered effective overall? Spectral analysis answers with a fairly emphatic {\it no}. After removing other group contributions their SCLP as a pure pair is negative in $90.6\%$ of bootstrapped seasons, while still exhibiting strong correlation with overall plus-minus ($r=0.73$). Similarly, the Bradley--Smart pair has a season plus-minus of 45.3 in 1679 possessions In 500 bootstrap seasons, they have a mean plus-minus of 40.4 and mean possessions of 1679. Their plus-minus is negative in $27\%$ of those seasons while their spectral contribution is negative in $81\%$ of bootstrapped seasons. \section{Importance of Effect Spaces} \label{sec: importance} Another natural question is how to value the relative importance of the group-effect spaces. One way to gauge importance uses the squared $L_2$ norm of the success function in each space. Since the spaces are mutually orthogonal, we have $\|f\|^2=\|f_1\|^2+\|f_2\|^2+\|f_3\|^2+\|f_4\|^2+\|f_5\|^2$. (Recall that $f_i$ is the projection of $f$ onto the $i$-th order effect space $V_i$.) One can then measure the total mass of $f$ that is concentrated in each effect space. For example, if we found that the mass of the success function was concentrated in the mean space, and thus, a constant function gave a good approximation to $f$, we could conclude that the particular lineup used by this team was largely irrelevant-- the success of the team never strayed far from the mean and was not strongly affected by any groups. This would be an easy team to coach. Of course, this is not the case in basketball, as evidenced by the $L_2$ norm squared distribution of the sample of teams in Table \ref{L2 Table}. \begin{table}[htbp] \centering \begin{tabular}{l|rrrrrr} \hline Team & $V_0$ & $V_1$ & $V_2$ & $V_3$ & $V_4$ & $V_5$ \\ \hline BOS & 0.001 & 0.012 & 0.048 & 0.138 & 0.297 & 0.504 \\ CLE & 0.003 & 0.021 & 0.058 & 0.150 & 0.301 & 0.467 \\ GSW & 0.003 & 0.031 & 0.092 & 0.203 & 0.312 & 0.360 \\ HOU & 0.000 & 0.007 & 0.037 & 0.123 & 0.285 & 0.548 \\ OKC & 0.001 & 0.011 & 0.038 & 0.137 & 0.304 & 0.510 \\ POR & 0.000 & 0.004 & 0.027 & 0.112 & 0.289 & 0.568 \\ SAS & 0.007 & 0.027 & 0.072 & 0.173 & 0.294 & 0.427 \\ \hline Null & 0.000 & 0.005 & 0.03 & 0.117 & 0.303 & 0.545 \\ \end{tabular}% \caption{Distribution of the squared $L_2$-norm of the team success function over the effect spaces.}\label{L2 Table} \end{table}% By this measure, the higher-order spaces are dominant as they hold most of the mass of the success function. An issue with this metric, however, is the disparity in the dimensions of the spaces. Because $V_5$ is 1638-dimensional, we might expect the mass of $f$ to be disproportionately concentrated in that space. In fact, a random unit vector projected into each of the effect spaces would be, on average, distributed according to the null distribution in Table \ref{L2 Table}, with mass proportional to the dimension of each of the spaces in question. Moreover, we can take the true success function of a team and break the dependence on the actual player groups as follows. Recall that the raw data $f$ records the plus-minus for each of the possible 3003 lineups. We then take $f$ and randomly permute the values so that there is no connection between the lineup and the value associated with that lineup. Still, however, the overall plus-minus and mean of $f$ are preserved. We can then run spectral analysis on the permuted $f$ and record the distribution of the squared $L_2$ norm in each space. Repeating this experiment 500 times for both GSW and BOS give means in Table \ref{gswbosnull} that exactly conform to the null distribution in Table \ref{L2 Table}. \begin{table} \centering \begin{tabular}{l|rr} \hline Space & BOS & GSW\\ \hline First & 0.005 & 0.005\\ Second & 0.030 & 0.030\\ Third & 0.117 & 0.116\\ Fourth & 0.302 & 0.302\\ Fifth& 0.543 & 0.544\\ \hline \end{tabular} \caption{Average fraction of squared $L_2$ mass by order effect space using randomly permuted success function.}\label{gswbosnull} \end{table}% An alternative measure of the importance of each effect space is given by measuring the extent to which projections onto $V_i$ deviate from the null distribution. By this measure of importance, there is some preliminary evidence that strong teams shift the mass of $f$ from $V_5$ into lower-order spaces, particularly $V_1$, $V_2$, and $V_3$. This is interesting as it agrees with the idea that building an elite team requires a group of three stars. Using all 30 NBA teams, we compute correlations of $r=0.51$, $r=0.58$ and $r=0.55$, respectively, between win-percentage and the projected mass $f$ in the first, second, and third-order spaces. Win-percentage and fifth-order projection have correlation coefficient $r=-0.54$. As pointed out in \cite{Diaconis:1989}, however, care must be taken when looking at deviation from the null distribution if the projections are highly structured and lie close to a few of the interpretable vectors. This is a direction for further inquiry. \section{Conclusion} Spectral analysis proposes a new approach to understanding and quantifying group effects in basketball. By thinking of the success of a team as function on lineups, we can exploit the structure of functions on permutations to decompose the team success function. The resulting Fourier expansion is naturally interpreted as quantifying the group effects to overall team success. The resulting analysis brings insight into important and difficult questions like which groups of players work effectively together, and which do not. Furthermore, the spectral analysis approach is unique in addressing questions of lineup synergies by presenting an EDA summary of the actual team data without making the kind of modeling or skill-based assumptions of other methods. There are several directions for future work. First, the analysis presented used raw lineup level plus-minus to measure success. This approach has the advantage of keeping the analysis tethered to data that is intuitive, and helps avoid pitfalls arising from low-possession lineups. Still, adjusting the lineup level plus-minus to account for quality of opponent, for example, seems like a valuable next step. Another straight forward adjustment to raw plus-minus data would involve devaluing so-called garbage time possessions when the outcome of the game is not in question. As presented here, spectral analysis provides an in-depth exploratory analysis of a team's lineups. Still, the results of spectral analysis could also add valuable inputs to more traditional predictive models or machine learning approaches to projecting group effects. Similarly, it would be interesting to use spectral analysis as a practical tool for lineup suggestions. While the orthogonality of the spectral decomposition facilitates valuation of pure player-groups, the question of lineup construction realistically begins at the level of individuals and works up, hopefully stacking the contributions of individuals with strong pairs, triples, and so-on. A strong group of three, for instance, without any strong individual players may be interesting from an internal development perspective, or at the edges of personnel utility, but may also be of limited practical value from the perspective of constructing a strong lineup. Development of a practical tool would likely require further analysis of the ideas in sections \ref{sec: Stability} and \ref{sec: importance} based on ideas in \cite{Diaconis:1998}. For example, given data (a function on lineups), we might fix the projection of that data onto certain spaces (like the first or second order), and then generate new sample data conditional on that fixed projection. The resulting projections in the higher-order spaces would give some evidence for how the fixed lower-order projections affect the mass of $f$ in the higher-order effects spaces. This would help give a more detailed sense of variability of projections, and a more definitive answer to the question of which spaces are most important, and how the spectral signature of a team correlates with team success. With that information in place, however, one can build tools to suggest lineup replacements that maximize the stacking of a team's most important groups. \bibliographystyle{plainnat}
{ "attr-fineweb-edu": 1.603516, "attr-cc_en_topic": 0, "domain": "arxiv" }
BkiUfCE4uzlh9qpEi7pj
\section{Introduction} \label{sec:introduction} Analytics have become mainstream in almost every industry, and sports are no exception. In particular, sports betting has historically relied heavily on analytics. A conservative estimate of money bet on college football alone is in excess of one billion dollars annually~\citep{purdam2020}. And so while much work has been done in this area, most of it is proprietary due to the competitive profit-driven nature of this big business. The primary objective of this paper is to provide a method for sports bettors to determine if they have a positive expected value bet based on the betting lines available to them and how they think the game will play out. Methods and a publicly available online tool are provided to answer questions like, ``If a model, or aggregate of models, says Team A wins by 7.9 points, should I bet that team to win by seven or more?'' The outline of the remainder of this paper is as follows. A basic overview of sports betting is provided in Section~\ref{sec:basics}. Following that, Section~\ref{sec:approaches} describes one approach to creating a projected spread before discussing multiple techniques and their shortcomings for translating this to a betting edge. In Section~\ref{sec:new}, a method is presented that combines modifications of those techniques to arrive at an improved solution for determining the probability a bettor will win a specific bet based on his belief about that game. In Section~\ref{sec:future}, some potential extensions are briefly discussed. \section{The Basics of Sports Betting} \label{sec:basics} A successful sports gambler will consistently make positive expected value plays, making picks that will yield a profit over time. Betting on a team to win a game, known as betting a moneyline, is popular in baseball and hockey, though less so in basketball and American football, referred to here as ``football.'' When betting moneyline markets, to achieve positive expected value, a bettor needs to have an objective, consistent, and repeatable approach to assessing the probability that each team wins. When betting moneylines, a relatively simple math problem can be solved to determine a threshold for which advantages are worthy of an investment. So the bettor needs only to decide their personal threshold and how much money they are willing to bet. For example, consider a $-120$ bet using American odds. In this schema, an individual bets \$120 dollars on a team to win. If the team wins, the bettor receives back the \$120 and an additional \$100. American odds can be converted to what is a ``break-even percentage'' $p$ using \begin{equation} \label{eq:odds} p = 100\left(\frac{|\min(100,\textrm{odds})|}{100 + |\textrm{odds}|}\right)\%. \end{equation} Another percentage is the ``cover percentage,'' which is the percentage of times a bet is won. This is easy to describe and possible to quantify after the game is played. However, a priori, finding the cover percentage is one of the most challenging problems in sports betting. The ``betting edge,'' or simply ``edge,'' is defined as the difference between the cover percentage and the break-even percentage. Negative edge values imply that the model or belief system used indicates no wager should be made. All bettors bet under the paradigm that the positive edge must be large enough to justify the bet over the risk. However, the threshold for how large the positive edge should be to place a bet is subjective. If the bettor bets the moneyline in this same scenario over a long period of time, to achieve a positive return using equation~\eqref{eq:odds}, that side must have at least a 54.5\% chance of winning over that period of time. For a single game, if the bettor believes the team has a greater than 54.5\% chance of winning, betting the moneyline is a wise bet. The bet is a positive expected value play under the bettor's belief system. The sportsbooks, which are establishments that accept bets, charge a commission referred to as the ``juice'' or ``vig,'' short for vigorish. The juice varies based on the odds. Aside from promotions, the juice is always there, even when it's not apparent. Suppose in the previous example that the opposing team's moneyline is $+110$, meaning that a \$100 bet would win \$110. Using equation~\eqref{eq:odds}, the bet needs to hit 52.4\% of the time for a bettor to be profitable. This creates a situation where some games provide little or no value to the bettor. If the belief is that the favored team will win 54\% of the time, neither side is profitable in a long-run situation. This is because the 54\% win percentage falls below the break-even threshold of 54.5\% for the favored team and short of the 47.6\% break-even threshold for the underdog. Further, if the bettor believes the team will win 55\% of the time, it is arguable that the expected return on a winning bet is so small, \$5.50 from \$120 investment, that it is not worth the capital invested. Achieving positive expected value play becomes much more complicated when a bet is based on a point spread. A point spread bettor concerns himself with the margin of victory or defeat. To win a bet in the point spread market, the team that is bet on must win by at least a specified number of points, or lose by less than that. This is the most common type of bet in football. Converting a point spread into a simple win probability is a relatively simple task (see, for example, {\tt boydsbets.com/nfl-spread-to-moneyline-conversion/}). However, translating the difference between a point spread and a bettor's projected point spread is a much more complicated problem. As previously mentioned, most of the work that has been done to assess the value of these differences is proprietary. Further complicating this problem is that football games tend to end with common point differentials, like three points or seven points. On the other hand, it is unusual for a football game to end with a point differential of five points. Consequently, a bettor who believes a team will win by six points, but can bet a win by four or more, has a smaller advantage than one who believes a team will win by four points, but can bet them to win by two or more. Thus, how does a sports bettor take a specific point spread situation and determine whether betting is in their favor? In other words, ``For the game of interest, does the bettor have a positive expected value play based on the available betting lines and the bettor's belief about how the game will play out?'' One technique that attempts to answer this is the Pythagorean expectation. Originally developed by Bill James for season-long win percentages in baseball, the technique has been modified and applied to other sports. For football, the modified version is often referred to as the Pythagorean wins and is defined as a fraction multiplied by $N =$ number of games played. For football, \begin{equation*} {\mbox{Pythagorean wins}} = N\left( \frac{{\mbox{points for}}^r}{{\mbox{points for}}^r + {\mbox{points against}}^r}\right). \end{equation*} For the National Football League (NFL), for $N = 17$ games played, the suggested value of the exponent $r$ is 2.37. One of the problems with using Pythagorean wins is the subjective determination of the exponent. Another problem is that the method depends on points scored by both teams and is a long-run expectation. Consequently, it is better suited for a full season rather than a single game, as discussed by \citet{FootballOutsiders}, among others. The standard approach for determining if a single game bet is advantageous when betting with a point spread was developed by \citet{Stern}. \citeauthor{Stern} uses a normal distribution with a standard deviation of 13.86\footnote{More recent values of the standard deviation are 13.5 points.} points for NFL games to find the probability that a team favored by a specific amount will win by a certain margin. The potential flaw with this approach is that point differentials for football are discrete, and the normal distribution determines probabilities of continuous-valued variables. A more serious problem with \citeauthor{Stern}'s approach is the assumption that a team favored by five will win by exactly five at the same rate that a team favored by seven wins by exactly seven. This assumption is a direct contradiction to reality; a score differential of five points is much less common than a score differential of seven. While \citeauthor{Stern}'s approach is the basis for many pregame win probability models, including those at \citet{ProFootballReference}, it can be improved upon by incorporating the higher likelihood of certain score differentials in football games. And while there are many efforts to assess the win probability given a specific point spread, see for example \citet{Huggins}, efforts to quantify the betting edge between a projected point spread and one available to the gambler have not advanced to the public beyond the use of a normal distribution. In what follows, a method is provided for sports bettors to determine if they have a positive expected value bet based on the betting lines available to them and how they think the game will play out. Methods and a publicly available online tool are provided to answer questions like, ``If a model, or an aggregate of models, says Team A wins by 7.9 points, should I bet that team to win by seven or more?'' \section{Investigating Simple Approaches for College Football Betting Edges} \label{sec:approaches} One of the keys to profitability when betting a point spread over the long term is having a reliable projection system. The basic idea is for the gambler to assess what he thinks the point spread should be for a given game. A bettor's point spread for a given game can be based on a power rating system, his own qualitative research, or some combination of these. There are many popular publicly available power ratings. The work in this paper adopts Bill Connelly's SP+, currently available via ESPN+, to illustrate the approaches presented. Connelly's system follows a type of Bayesian paradigm, beginning with some set of data based on at least one previous season's games, updating throughout the current season to produce a power rating for every team. Home-field advantage may be accounted for, which is usually agreed to be somewhere around 2 to 2.5 points in favor of the home team. After accounting for home-field advantage, the difference between the two teams' ratings results in a projected point spread. Hereafter, this is referred to as the ``system projected point spread,'' or ``system point spread.'' Serious bettors consider the information in a projection system like SP+ when deciding when to place a wager. However, they also take into account their own research and knowledge, which may include things like injuries, transfers, or suspensions. Most projection systems may not fully capture such team-specific real-time information. Combining all this, bettors determine their projected point spread, referred to as the ``bettor's projected point spread'' or ``bettor's point spread.'' Every college football game has a ``betting point spread'' as determined by the bookmakers. This betting point spread is often seen in any mention in the media for a game. Attached to that betting point spread are odds, as discussed in Section~\ref{sec:basics}. One common approach is to bet on any team that shows at least a two-point differential from the bettor's projected point spread and the betting point spread. This approach is simple. However, this to has the potential to be misleading since all final score differences in football aren't the same, which was also discussed in Section~\ref{sec:basics}. Because points happen in three and usually seven point chunks, teams make decisions in the second half of play in order to get ahead of, or behind by, certain key numbers. Thus, there is a need to properly assess whether the difference between the bettor's point spread and betting point spread matters, as well as if the edge is large enough to be worthy of an investment. \subsection{Using a Normal Distribution to Quantify Betting Edge} \label{sec:normal} For all college football games, the standard deviation of the score differential is slightly greater than 20 points. However, for games with similar point spreads, the conditional standard deviation decreases to around 15. For the 2021 season, the standard deviation of the score differential for all games was 21.01; for games with similar point spreads, the standard deviation was 15.35. One way to interpret the conditional standard deviation of 15 is as follows: for all teams projected to win by 6.5 points, only a few of them would win by more than $2 \times 15 + 6.5 = 36.5$ or lose by more than $2 \times 15 - 6.5 = 23.5$. During the 2021 season, there were 82 games with a point spread of six to seven. For these games, only three (3.7\%) had a win of more than 36 points or a loss of less than 23. Figure~\ref{fig:plot} is a bubble plot of the score differential versus the point spread for all games of the 2021 season. Each point represents the score differential for at least one game with a specific point spread. Points with a larger physical size represent more than one game. The plot's legend indicates how many games are represented by the various plotting symbol sizes. \begin{figure}[!ht] \caption{Bubble plot of point spread vs.~score differential for 2021 college football games.} \includegraphics[width=0.9\textwidth]{Figures/bubbles.png} \label{fig:plot} \end{figure} A basic approach for finding the probability of covering the projected point spread for a specific game was discussed in Section~\ref{sec:basics}. Because the projected point spreads from sportsbooks or models can be believed to be relatively accurate, an often-suggested starting point for finding the probability is to use a normal distribution with the mean being the projected point spread and a standard deviation of 15.\footnote{The Skellam distribution is sometimes used for finding these probabilities, as it deals with the difference in Poisson distributions. However, this distribution is only appropriate for sports like hockey and baseball where scoring happens one point at a time.} To illustrate, consider the game on October 30, 2021, when the University of Texas traveled to Waco, Texas to play Baylor University. The closing line had Baylor as a 2.5-point favorite. The projected point spread coming from SP$+$ was Baylor $-2.9$. If $\Phi$ represents the normal probability, then $$ P({\mbox{Baylor covers}}) = \Phi\left(\frac{-2.5-(-2.9)}{15}\right) = 0.5106; $$ that is, using the normal distribution with a mean equal to the projected point spread of $-2.9$ and a standard deviation of 15, the probability of Baylor covering the 2.5 point spread is 0.5106. Using this method, SP$+$ suggests that neither side is a profitable wager. Assuming the standard $-110$ wager, neither team has a cover probability larger than 52.4\%. However, because the most common football point differential is three and a tie is impossible, it is clear that outcomes around zero are too heavily weighted, and outcomes around three points are not weighted heavily enough. Consider, for example, the following two scenarios. The first is a bettor's believed spread of 2.5 versus a betting spread of 3.5. The second is a bettor's believed spread of 4.5 versus a betting spread of 5.5. Under the methodology just described, these two scenarios are treated as equivalent, even though they are not. A college football game is much more likely to end with a point differential of three instead of five, making the former a larger betting edge. This equal weighting of nonequivalent events is a flaw that will be addressed. When the point spread is zero or is not an integer, there are two results that can occur; one team covers or the other team covers. In the Texas vs.~Baylor example, the point spread was Baylor by 2.5. The probability that Texas covers is one less than the probability Baylor covers. On the other hand, if the point spread is an integer, there exists the possibility of a ``push''; that is, the score differential ends exactly on the spread. In this case, three possible outcomes and a wide variety of methods and philosophies for dealing with a push. These are not included in this paper. \subsection{Using Historical Data to Quantify Betting Edge} \label{sec:historical} Historical data can be used to weight each point differential to overcome equal weighting of nonequivalent events. The weights are assigned according to how frequently they occur; numbers that occur more frequently, like three, are assigned a greater probability than more unusual numbers, like five. Using data from \citet{JimmyBoyd} that spans from 1980 to 2014, Table~\ref{tab:historical} shows historical college football data for selected point differentials. It is possible that rule changes over time could warrant an approach that uses historical results from only certain years. However, if the number of games is not too small, the trade-offs in which years to include will produce very similar results. Further, a rule change to overtime added in 2021 will likely increase the probability that a game will finish with a point differential of two, but the increase should be relatively small. \begin{table}[!ht] \caption{Historical probabilities of selected point differentials.} \begin{tabular}{r|c||r|c} Point & & Point & \\ Differential & Percentage & Differential & Percentage \\ \hline\hline 0 & 0\% & 8 & 2.4\% \\ 1 & 3.4\% & 9 & 1.2\% \\ 2 & 2.7\% & 10 & 4.3\% \\ 3 & 9.6\% & 11 & 2.3\% \\ 4 & 3.9\% & 12 & 1.8\% \\ 5 & 2.6\% & 13 & 1.8\% \\ 6 & 2.9\% & 14 & 4.3\% \\ 7 & 7.3\% & 15 & 1.1\% \\ \end{tabular} \label{tab:historical} \end{table} An approach centered on historical data of all games works fairly well for National Football League (NFL) games mainly because the point spreads across all games do not have a large range. For example, the likelihood that a team favored to win by seven points actually wins by three is still relevant to that same question for a team favored by six points. According to {\tt sportsbettingdime.com}~\citep{spread}, there have only been nine NFL games since 1976 with point spreads larger than 20. While it could be argued that information about how 20-point favorites perform is not relevant to how teams favored by 3 points will perform, this problem is exacerbated in college football, where spreads as large as 30 or 40 points are not uncommon. In fact, in the 2021 college football season, more than 17\% (176) of the games had spreads larger than 20. Thus, rather than taking an aggregated approach, games that are similar to the game at hand provide the bettor with more relevant information for decision-making. One solution to addressing the large range of point spreads in college football is using historical data to create conditional probabilities. Recall that for a standard $-110$ wager, the break-even point for bets is 52.4\%. In the 2021 season, teams favored by between two and four points (inclusive) won games by more than one point 53.1\% of the time. Thus, if a bettor believes that a team should be favored by three points, but the team is favored by only 1.5 points, a 53.1\% win expectancy translates to a 0.7\% edge over the house. Classifying games into bins creates some challenges, including determining (1) the break-points to create the classifications or ``bins'' and (2) whether bins should overlap or be smoothed to account for a limited number of games in each bin. \section{A New Approach for College Football Betting Edges} \label{sec:new} To address these challenges, a new method is proposed that is a hybrid of the methods in Sections~\ref{sec:basics} and~\ref{sec:historical}. This new technique addresses the inclusion of non-relevant games in the aggregated historical data by using the data to optimally weight a normal distribution, which can then be centered at the bettor's point spread. The use of this new distribution alleviates the problems discussed when only binning the historical data. The cumulative normal distribution with zero mean was applied to the historical data to find a probability distribution that best fits the historical probabilities. Values half a point above and below a specific number are combined to represent that score differential. Using a standard deviation of 21, the actual standard deviation of the games in the 2021 season, as discussed in Section~\ref{sec:normal}, fits the historical data well. However, a standard deviation of 22, which allows for extra variability, resulted in a closer match of the estimated probabilities to the historical data. Incremental changes to the standard deviation beyond 21 or 22 resulted in minimal changes to the final probabilities. \begin{table}[H] \begin{tabular}{r|cc||r|cc} Point & Historical & Normal & Point & Historical & Normal \\ Differential & Probability & Probability & Differential & Probability & Probability \\ \hline\hline 0 & 0\% & 3.6\% & 6 & 2.9\% & 3.5\% \\ 1 & 3.4\% & 3.6\% & 7 & 7.3\% & 3.4\% \\ 2 & 2.7\% & 3.6\% & 8 & 2.4\% & 3.4\% \\ 3 & 9.6\% & 3.6\% & 9 & 1.2\% & 3.3\% \\ 4 & 3.9\% & 3.6\% & 10 & 4.3\% & 3.3\% \\ 5 & 2.6\% & 3.5\% & 11 & 2.3\% & 3.2\% \\ \end{tabular} \caption{Historical and fitted probabilities for selected point differentials in college football.} \label{tab:historical2} \end{table} Next, for each point differential, a constant is found that scales the respective probability, translating it from the normal bell curve to a distribution where outcomes like three and seven have a larger probability, as the historical data suggest. The reasoning behind this step is that while a large favorite isn't likely to win by three, they are still more likely to win by three than by five. The constant is found by dividing the normal probabilities in the third column of Table~\ref{tab:historical2} by the corresponding historical probabilities in column two. For example, the area under a normal curve between 2.5 and 3.5 is 0.036. However, the historical data suggests it should be 0.096. Thus, the probability of a team winning by three when derived from a normal distribution should be multiplied by 2.7. This provides a framework to use the first method presented in Section~\ref{sec:normal}, but modifies that method, using the adjusted probabilities using historical information to take into account which football outcomes are more likely than others. Table~\ref{tab:mult} displays some of the multipliers so the reader can understand each number's relative importance compared to the unadjusted normal probability. \begin{table}[H] \begin{tabular}{r|c||r|c} Point & & Point & \\ Differential & Multiplier & Differential & Multiplier \\ \hline\hline 0 & 0 & 6 & 0.8 \\ 1 & 0.9 & 7 & 2.1 \\ 2 & 0.7 & 8 & 0.7 \\ 3 & 2.7 & 9 & 0.4 \\ 4 & 1.1 & 10 & 1.3 \\ 5 & 0.7 & 11 & 0.7 \\ \end{tabular} \caption{Multipliers for selected point differentials.} \label{tab:mult} \end{table} Next, a matrix is constructed for all differentials of 60 points or less. Each row represents a game's score differential from $-60, -59, \ldots, -1, 0, 1, \ldots 59, 60$, where a negative value means the home team wins, and a positive value means the away team wins. Each column represents the bettor's point spread from $-39$ to 39. If $s$ is the score differential in a particular row, then a cell is the probability of being in the interval $(s - 0.5, s + 0.5)$, where the mean of the normal distribution is the bettor's point spread, and the standard deviation is 15. Each cell is then multiplied by the appropriate weight for that score differential; see Table~\ref{tab:mult} for multipliers of select point differentials. The final step for each column is to sum the numbers in that column and then divide each cell by the sum. The result is that each column is now conditional probability distribution. Specifically, each cell represents the conditional probability that the home team wins by exactly that score differential, given the bettor's point spread. Because most systems project non-whole number results, the final cover probability can be computed via interpolation. For example, if a system indicates a team will win by 2.3, the method takes 30\% of the cover probability using the team projected as a two-point favorite and 70\% of the cover probability using the team projected as a three-point favorite. For the Baylor/Texas game introduced in Section~\ref{sec:normal}, interpolation returns the probability of Baylor covering the 2.5-point spread to be 53.2\%, illustrating the importance of key numbers in wagering on football. In other words, the new method accounts for the reality that Baylor is more likely to win by three than by two. Because this new methodology says there's an 0.8\% edge betting Baylor $-2.5$, this could be considered a worthwhile investment. Baylor covered the spread in a 7-point victory. Finally, for projected spreads above 40 points, there is very little historical data. Consequently, there is relatively little confidence in the resulting probabilities. Projected spreads of greater than 40 are fairly rare, though the exact frequency depends on the bettor's projection system. Games in this realm tend to be avoided by most sports bettors unless there is some qualitative information available as to how such a blowout might be handled by each coaching staff. \section{Online App for Computing the Edge} An online tool that uses these probabilities to compute the edge is available online at {\tt www.pickswiththeprofessor.com/edge/cfb}. A screenshot of the app is seen in Figure~\ref{fig:picks-default}. The user supplies four inputs, where all references are with respect to the team the bettor is interested in wagering on. \begin{figure}[h] \centering \includegraphics[width=0.65\textwidth]{Figures/picks-default.pdf} \caption{Screenshot of ``Picks with the Professor'' online app for computing the edge of a specific bet.} \label{fig:picks-default} \end{figure} \begin{enumerate} \item In the text box beneath {\tt Projected Spread}, the bettor's projected point spread is supplied, which can be determined from any source, like a complex mathematical model, or simply the bettor's belief. The default value is $-3$. \item In the text box beneath {\tt Sportsbook Spread}, the user supplies the betting point spread as provided by some sportsbook. \item The third text box is where the {\tt Odds Format} is selected. The default selection is {\tt American} odds, discussed in Section~\ref{sec:basics}. \item Under {\tt Odds}, the user supplies the odds attached to the wager from Step 2. The default is $-110$. \end{enumerate} Under the default scenario, the bettor has a 0.9\% edge. It should be noted that there are some results that might seem counterintuitive. Most of them align with perception, but because the multiplier is so high for both three and seven, and to some extent, 10, a team that is believed to, on average, lose by eight actually has a 1.2\% edge if being bet at $+7.5$. However, for each bettor's projected point spread, it is confirmed mathematically that the expected value of each conditional distribution is very close to the projection, usually within one- or two-tenths of a point. This phenomenon occurs because of both a lack of symmetry and a lack of smoothness when imposing the multipliers to increase or decrease common and uncommon final score outcomes. \section{Discussion} \label{sec:future} Some sports bettors place bets based on their instinct. However, using some mathematical modeling, perhaps in combination with other research, improves betting outcomes. While transforming the results of a model into whether a wager is warranted is easy in moneyline sports, point spreads add a complication. Point spreads force the bettor to decide the threshold for a profitable wager in a manner that is not straightforward. The work here details a new approach to converting point differentials between a projected point spread and the actual spread into probabilities that can be very useful to a sports bettor. A publicly available online tool on the ``Picks with the Professor'' website provides the percent edge a bettor has by using the spread and odds available along with the projected spread that the bettor believes to be the truth. Further research is needed to understand how these probabilities perform with actual game data. Further, it is believed that probabilities over a certain threshold could produce misleading results because the projection model may not be able to account for every facet of an upcoming game, e.g., key player injuries. While the 2021 college football season provides some insight into these issues, the lingering effects of the COVID-19 pandemic and the unique aspect of sixth-year seniors make the results less than ideal as a benchmark. Data from the 2022 season should provide more clarity and a better evaluation of the projection systems. \newpage \bibliographystyle{apalike}
{ "attr-fineweb-edu": 1.946289, "attr-cc_en_topic": 0, "domain": "arxiv" }
BkiUbtLxK6mkyCfOAv3b
\section{Introduction} \label{sec:intro} In order to organise her staying in Montreal, Alice books an apartment from Bob via the online platform AIPbnb. AIPbnb policy states that owners cannot interact with each other, users can interact with owners only via the platform, and if a user finds a better solution for her accommodation, she must cancel the previous one {\em before} she makes a new reservation for the same dates, otherwise she will be charged for one night there. When Alice discovers that Carol rents a cheaper and larger apartment, she decides to cancel the reservation of Bob's apartment and book Carol's one. This situation can be represented by the global Agent Interaction Protocol $\mathit{modifyRes} = Alice \transmsg{Canc} Bob ~\cdot~ Alice \transmsg{Res} Carol$ where $a1 \transmsg{M} a2$ models the interaction between $a1$ and $a2$ for exchanging message $M$, ``$\cdot$'' models interaction concatenation, and $Canc$ and $Res$ are sent to the recipients by using the AIPbnb platform as required. Alice believes that the above protocol correctly meets AIPbnb policy, but she is charged for one night in Bob's apartment by AIPbnb: Carol received Alice's request before Bob received the cancellation, and this violates the policy. What went wrong is the {\em interpretation} of ``before''. To Alice, it meant that she should send $Canc$ before she sent $Res$, while for AIPbnb it (also) meant that Bob should receive $Canc$ before Carol received $Res$. This ambiguity would have had no impact on Alice if the physical {\em communication model} underlying AIPbnb guaranteed that between the sending and receiving stages of an interaction, nothing could happen. However, if the communication model provides weaker or no guarantees, it may happen that a message sent before another, is delivered after. This simple example shows that enacting the respect of a global protocol without a clear semantics of the ``before'' meaning, without guarantees from the platform implementation on message delivery order, and without hidden communications between the participants (``covert channels''), may not be possible. Many real situations can be resorted to this one: for example, a citizen must wait for the bank to have received (and processed) the request for adding some money to a new, empty account, before sending a request to move that amount to another account, otherwise he can go in debt. Global protocols are modelled using many different formalisms including global types \cite{DBLP:conf/forte/CastagnaDP11}, Petri Nets \cite{Peterson:1977:PN:356698.356702}, WS-CDL \cite{wscdl}, AUML~\cite{Huget05}, Statecharts~\cite{Harel1987231}, and causal logic \cite{DBLP:journals/ai/GiunchigliaLLMT04}. In each of these formalisms the enactability problem, that we define as ``by executing the localised versions of the protocol implemented by each participant, the global protocol behaviour is obtained, with no additional communication'', has been addressed in some form. Despite their diversity, however, most of these formalisms do not support protocol concatenation and recursion, which are needed to achieve a high expressivity: their expressive power is limited to regular languages. Moreover, although -- from an operational point of view -- these approaches agree on the intuition that a global protocol is enactable if the composition of the local protocols, obtained by projecting the global one onto each participant, behaves exactly in the same way as the global protocol, the {\em semantic definition} of enactability is far from being standard and sometimes is also more restrictive than necessary: some protocols will be classified as not enactable, while (under suitable conditions) they could be enacted. The intended {\em message ordering} and the {\em communication model} of the infrastructure in which the agents will be implemented and run are never taken into consideration together. As shown in the example above these two elements are effectively two sides of the same coin which must be both modeled for providing a precise and generally applicable definition of enactability. In a similar way, the need to associate the protocol with a {\em decision structure} to enforce consistent choices, is recognised as a necessity and suitably addressed by \cite{DBLP:conf/www/QiuZCY07} only, and not in conjunction with the other issues that affect enactability. Finally, the availability of a {\em working prototype} to check the enactability of global protocols under message ordering and communication models is usually disregarded in the literature. In this paper we provide a semantic characterisation of enactability which integrates {\em message ordering} and {\em communication model} in a unified framework, along with {\em decision structures}. This combination prevents unnecessary restrictions from the definition, which is as general as possible and suitable for highly expressive protocol representation languages like Trace Expressions \cite{frankDeBoer2015}. We also developed a working prototype in Haskell for enactability checks, which is one key benefit of out approach. \section{Background} \label{sec:backRel} \paragraph{Trace Expressions.}\label{sec:traceexpressions} Trace expressions \cite{frankDeBoer2015} are a compact and expressive formalism inspired by global types \cite{DBLP:conf/dalt/AnconaDM12} and then extended and exploited in different application domains \cite{DBLP:conf/atal/AnconaFM17,DBLP:conf/atal/FerrandoAM17,DBLP:conf/atal/FerrandoDA0M18,DBLP:conf/ecoop/AnconaFFM17,DBLP:conf/atal/FerrandoAM16}. Trace Expressions, initially devised for runtime verification of multiagent systems, are able to define languages that are more than context free. A trace expression $\tau$ denotes a set of possibly infinite event traces, and is defined on top of the following operators:\footnote{Binary operators associate from left, and are listed in decreasing order of precedence, that is, the first operator has the highest precedence.} \begin{itemize} \item $\epsilon$ (empty trace), denoting the singleton set $\{\langle \rangle\}$ containing the empty event trace $\langle \rangle$. \item $M$ (event), denoting a singleton set $\{\langle M \rangle\}$ containing the event trace $\langle M \rangle$. \item $\tau_1{\cdot}\tau_2$ (\emph{concatenation}), denoting the set of all traces obtained by concatenating the traces of $\tau_1$ with those of $\tau_2$. \item $\tau_1{\wedge} \tau_2$ (\emph{intersection}), denoting the intersection of the traces of $\tau_1$ and $\tau_2$. \item $\tau_1{\vee} \tau_2$ (\emph{union}), denoting the union of the traces of $\tau_1$ and $\tau_2$. \item $\tau_1{|} \tau_2$ (\emph{shuffle}), denoting the union of the sets obtained by shuffling each trace of $\tau_1$ with each trace of $\tau_2$ (see \cite{DBLP:journals/iandc/BrodaMMR18} for a more precise definition). \end{itemize} Trace expressions are cyclic terms, thus they can support recursion without introducing an explicit construct. As customary, the operational semantics of trace expressions, defined in \cite{DBLP:conf/birthday/AnconaFM16}, is specified by a transition relation $\delta\subseteq\mathcal{T}\times\mathcal{E}\times\mathcal{T}$, where $\mathcal{T}$ and $\mathcal{E}$ denote the set of trace expressions and of events, respectively. We do not present all the transition rules for space constraints. They are standard ones which state, for example, that $\delta(ev \cdot \tau, ev, \tau)$ (the protocol whose state is modelled by $ev \cdot \tau$ can move to state $\tau$ if $ev$ occurs), and that $\delta(\tau_1 \lor \tau_2, ev, \tau)$ if $\delta(\tau_1, ev, \tau)$ (if the protocol whose state is modelled by $\tau_1$ can move to state $\tau$ if $ev$ occurs, then also the protocol whose state is modelled by $\tau_1 \lor \tau_2$ can). The denotational semantics is defined as follows: \label{sec:standardsemantics} \begin{eqnarray*} \sem{\epsilon} &=& \{ \langle \rangle \} \\ \sem{M} &=& \{\langle M \rangle \} \\ \sem{\tau_1 \cdot \tau_2} &=& \{ t_1 \circ t_2 | t_1 \in \sem{\tau_1} \land t_2 \in \sem{\tau_2} \} \\ \sem{\tau_1 \land \tau_2} &=& \sem{\tau_1} \cap \sem{\tau_2} \\ \sem{\tau_1 \lor \tau_2} &=& \sem{\tau_1} \cup \sem{\tau_2} \\ \sem{\tau_1 | \tau_2} &=& \{ z \; |\; t_1 \in \sem{\tau_1} \land t_2 \in \sem{\tau_2} \land z \in t_1 \bowtie t_2\} \\ \end{eqnarray*} Where $t_1 \bowtie t_2$ is the set of all interleavings of $t_1$ and $t_2$, and $\circ$ is concatenation over sequences. Events can be in principle of any kind. In this paper, we will limit ourselves to consider \emph{interaction} and \emph{message} events. An interaction has the form $a\transmsg{M}b$ and gives information on the protocol from the global perspective, collapsing sending and receiving. We say that $\tau$ is an interaction protocol if all the events therein are interactions. Interaction protocols take other names in other communities, such as Interaction Oriented Choreography \cite{DBLP:conf/sefm/LaneseGMZ08} in the Service Oriented Community, and global type in the community working on process calculi and types \cite{DBLP:conf/forte/CastagnaDP11}. Message events have the form $\send{a}{M}$ ($a$ sends $M$) and $\recv{b}{M}$ ($b$ receives $M$). They model actions that one agent can execute, hence taking a local perspective. A trace expression where all events are messages will be named a message protocol throughout the paper. Message protocols have different names in different communities, such as Process Oriented Choreography \cite{DBLP:conf/sefm/LaneseGMZ08} and ``local type'' or ``session type'' in the global type community \cite{DBLP:conf/esop/HondaVK98,DBLP:conf/parle/TakeuchiHK94}. \paragraph{Communication Models.} \label{sec:commMod} Given that in our proposal we explicitly take the communication model supported by the MAS infrastructure into account, we provide a summary of communication models based on \cite{DBLP:journals/fac/ChevrouHQ16}. We use CM0 to CM6 to identify them in a compact way. \noindent {\bf CM0: Synchronous Communication}. Sending and receiving are synchronised: the sender cannot send if the receiver is not ready to receive. \noindent {\bf CM1: Realisable with Synchronous Communication (RSC)}. After a communication transition consisting of a send event of a message, the only possible communication transition is the receive event of this message. This asynchronous model is the closest one to synchronous communication and can be implemented with a 1-slot unique buffer shared by all agents. \noindent {\bf CM2: FIFO n-n communication}. Messages are globally ordered and are delivered in their emission order: if sending of $M_1$ takes place before sending of $M_2$, then reception of $M_1$ must take place before reception of $M_2$. This model can be implemented by means of a shared centralised object, such as unique queue. \noindent {\bf CM3: FIFO 1-n communication}. Messages from the same sender are delivered in the order in which they were sent. It can be implemented by giving each agent a unique queue where it puts its outgoing messages. Destination peers fetch messages from this queue. \noindent {\bf CM4: FIFO n-1 communication}. A send event is implicitly and globally ordered with regard to all other sending actions toward the same agent. This means that if agent $b$ receives $M_1$ (sent by agent $a$) and later it receives $M_2$ (sent by agent $c$), $b$ knows that the sending of $M_1$ occurred before the sending of $M_2$ in the global execution order, even if there is no causal path between the two sending actions. The implementation of this model can, similarly to FIFO 1-n, be done by providing each agent with a queue: messages are sent by putting them into the queue of the recipient agent. \noindent {\bf CM5: Causal}. Messages are delivered according to the causality of their emissions \cite{Lamport:1978:TCO:359545.359563}: if a message $M_1$ is causally sent before a message $M_2$ then an agent cannot get $M_2$ before $M_1$. An implementation of this model requires the sharing of the causality relation. \noindent {\bf CM6: Fully Asynchronous}. No order on message delivery is imposed. Messages can overtake others or be arbitrarily delayed. The implementation is usually modelled by a bag. \paragraph{Message Ordering.} \label{sec:moiMod} The statement ``one interaction comes before another'' is ambiguous, as exemplified in Section \ref{sec:intro}. This ambiguity has been recognised by some authors who suggested how to interpret message ordering, when moving from the interaction (global) level to the message (local) level. In this section we summarise and compare the proposals by Lanese, Guidi, Montesi and Zavattaro \cite{DBLP:conf/sefm/LaneseGMZ08} and that by Desai and Singh \cite{DBLP:conf/aaai/DesaiS08}. To identify the interpretations, we will use the acronyms used in \cite{DBLP:conf/aaai/DesaiS08} when available, and our own acronyms otherwise. The starting point for interpreting message ordering is the interaction protocol $\tau = a\transmsg{M_1}b {\cdot} c\transmsg{M_2}d$. For the sake of clarity, we denote $\send{a}{M_1}$ with $s1$, $\recv{b}{M_1}$ with $r1$, $\send{c}{M_2}$ with $s2$, and $\recv{d}{M_2}$ with $r2$; we characterise the message ordering interpretations by the traces of messages that respect them. \noindent {\bf RS}: a message send must be followed immediately by the corresponding receive, so w.r.t. $\tau$, $M_1$ must be received before $M_2$ is sent. The set of traces that respect this model is $\{ s1~r1~s2~r2 \}$. This interpretation is named {\em RS (receive before send)} in \cite{DBLP:conf/aaai/DesaiS08} and {\em disjoint semantics} in \cite{DBLP:conf/sefm/LaneseGMZ08}. \noindent {\bf SS}: $M_1$ is sent before $M_2$ is, and there are no constraints on the delivery order. The set of traces that respect this model is $\{ s1~r1~s2~r2,$ $s1~s2~r1~r2,$ $s1~s2~r2~r1 \}$. This interpretation is named {\em SS (send before send)} in \cite{DBLP:conf/aaai/DesaiS08} and {\em sender semantics} in \cite{DBLP:conf/sefm/LaneseGMZ08}. \noindent {\bf RR}: $M_1$ is received before $M_2$ is, and there are no constraints on the sending order. The set of traces that respect this model is $\{ s1~r1~s2~r2, s1~s2~r1~r2, s2~s1~r1~r2 \}$. This interpretation is named {\em RR (receive before receive)} in \cite{DBLP:conf/aaai/DesaiS08} and {\em receiver semantics} in \cite{DBLP:conf/sefm/LaneseGMZ08}. \noindent {\bf RR \& SS}: this combines the requirements of {\bf RR} and of {\bf SS}: $M_1$ is sent before $M_2$ is sent and also $M_1$ is received before $M_2$ is received. The set of traces that respect this model is $\{ s1~r1~s2~r2, s1~s2~r1~r2 \}$: both $s1$ comes before $s2$ (``coming before'' according to the senders), and $r1$ comes before $r2$ (``coming before'' according to the receivers). This interpretation is named {\em sender-receiver semantics} in \cite{DBLP:conf/sefm/LaneseGMZ08}. \noindent {\bf SR}: $M_1$ is sent before $M_2$ is received. The set of traces that respect this model is $\{ s1~r1~s2~r2,$ $s1~s2~r1~r2,$ $s1~s2~r2~r1,$ $s2~s1~r1~r2,$ $s2~s1~r2~r1 \}$. This interpretation is named {\em SR (send before receive)} in \cite{DBLP:conf/aaai/DesaiS08}. It is easy to see that the following inclusions among asynchronous models hold: {\bf RS} $\subset$ {\bf RR \& SS} $\subset$ {\bf SS} $\subset$ {\bf SR} and {\bf RS} $\subset$ {\bf RR \& SS} $\subset$ {\bf RR} $\subset$ {\bf SR}. The {\bf SS} and {\bf RR} interpretations are not comparable. In the remainder of this paper we consider only the four interpretations defined by Desai \& Singh, i.e.~we do not consider ``RR \& SS''. \section{Defining Enactability using a Semantic Approach}\label{sec:semantics} \paragraph{Basic Notation.}\label{sec:sembg} In the following let $\emph{ComModel}= \{CM1, CM2, CM3, CM4, CM5,$ $ CM6\}$ be the set of possible (asynchronous) communication models, and $\emph{MOISet}=$ $\{\emph{SS}$, $\emph{SR}$, $\emph{RS}$, $\emph{RR}$ $\}$ the set of possible message order interpretations that can be imposed. We also define $\mathcal{A}=\{a, b, c, d, a_1, a_2, \ldots , a_n\}$ to be the set of agents involved in the interaction protocol. Recall that we consider both interaction and message protocols. When we say that $\tau$ is an \emph{interaction} protocol, we mean that the protocol represents sequences of \emph{interactions}. The set of traces recognized is obtained following the semantics defined in Section~\ref{sec:traceexpressions}, and for an interaction protocol $\tau$ we have that\footnote{We use ``$\in$'' to also denote membership of an item in a sequence.} ${I} \in \sem{\tau} \implies \forall_{{i} \in {I}}.{i}\in\interactionsin{\tau}$, where we define $\interactionsin{\tau}$ to be the set of interactions involved in the interaction protocol $\tau$. We also define $\mathcal{I}$ to be the set of all possible interactions events. Similarly, when $\tau$ is a \emph{message} protocol (rather than an interaction protocol), it represents sequences of send and receive events of the form $\send{a}{M}$ (send event) and $\recv{b}{M}$ (receive event), and given a particular set of possible interactions $\mathcal{I}$, we define $\mathcal{E}_\interactions$ to be the corresponding set of events: $$\mathcal{E}_\interactions = \{ \send{a}{M} | \exists_{b\in\mathcal{A}}.a\transmsg{M}b\in\mathcal{I} \} \cup \{ \recv{b}{M} | \exists_{a\in\mathcal{A}}.a\transmsg{M}b\in\mathcal{I} \} $$ In a message protocol $\tau$ we have that ${E} \in \sem{\tau} \implies \forall_{{e}\in{E}}.{e} \in \eventsin{\interactionsin{\tau}}$. Given a message protocol $\tau$ we also define $\eventsin{}(\tau)$ to be the set of events that occur in the protocol. Next, we define the language of traces for interaction protocols and message protocols. For interaction protocols, the set of all possible traces is defined to be: $\lang{\interactions}{} = \mathcal{I}^*\cup\mathcal{I}^\omega$. For message protocols the definition is somewhat more complex, since there is a relationship between a send and a receive event. Specifically, the set of all possible traces of events is constrained so that a message being received must be preceded by that message having been sent. We also constrain the set so that each message can be sent at most once, and received at most once (i.e.~message names are unique). The assumption is made by most authors, see \cite{DBLP:journals/fac/ChevrouHQ16} for example, and it is considered as a harmless one; we can integrate many elements to the notion of ``message name'', such as content, protocol id, conversation id, etc, to discriminate between messages at design time. Formally: \begin{eqnarray*} \lang{\events}{} &=& \{ {E} \in \mathcal{E}_\interactions^*\cup\mathcal{E}_\interactions^\omega \; | \\ & & \hspace*{-12mm} (\forall_{i,j\in dom({E})}.{E}[i] = \send{a}{M} \wedge {E}[j] = \send{a}{M} \implies i = j) \wedge \\ & & \hspace*{-12mm}(\forall_{i,j\in dom({E})}.{E}[i] = \recv{b}{M} \wedge {E}[j] = \recv{b}{M} \implies i = j) \wedge \\ & & \hspace*{-12mm} (\forall_{i\in dom({E})}.{E}[i] = \recv{b}{M} \implies (\exists_{j\in dom({E})}.{E}[j] = \send{a}{M} \wedge j < i)) \end{eqnarray*} \paragraph{Message Order Interpretation (MOI).} An interaction protocol $\tau$ defines orderings between messages $M_i$, whereas a message protocol deals in \emph{events} (sending and receiving). If a protocol says that $M_1$ comes before $M_2$, how should we interpret this in terms of events? Should sending $M_1$ come before sending $M_2$, or does it mean that receiving $M_1$ should occur before receiving $M_2$? The \emph{message ordering interpretation} (MOI) specifies this. As discussed earlier, we follow prior work in considering four (natural) interpretations ($\emph{SS}$, $\emph{SR}$, $\emph{RS}$, and $\emph{RR}$). We formalise this by defining a variant semantics that takes an \emph{interaction} protocol $\tau$ and returns its semantics in terms of \emph{events} rather than interactions. The possible sequences of events are constrained: given a situation where $\tau$ specifies that $M_1$ must occur before $M_2$, we constrain the possible sequence of events with the appropriate constraint on events corresponding to the selected MOI. \begin{definition}[Order on interactions in a trace] Let ${I}\in\lang{\interactions}{}$ be a trace of interaction events, ${E}\in\lang{\events}{}$ be a trace of send and receive events, $\moi\in\emph{MOISet}$ a message ordering interpretation, and $a \transmsg{M_1} b \in \mathcal{I}$, $c \transmsg{M_2} d \in \mathcal{I}$ two interactions. Abbreviating $a \transmsg{M_1} b$ as $I_1$ and $c \transmsg{M_2} d$ as $I_2$, we define an order on $M_1$ and $M_2$ for $\moi$ in ${E}$ as follows:\\ \noindent$\beforems{\emph{SS}}{{E}}{I_1}{I_2} \triangleq \before{\send{a}{M_1}}{{E}}{\send{b}{M_2}}$\\ \noindent$\beforems{\emph{SR}}{{E}}{I_1}{I_2} \triangleq \before{\send{a}{M_1}}{{E}}{\recv{d}{M_2}}$\\ \noindent$\beforems{\emph{RS}}{{E}}{I_1}{I_2} \triangleq \before{\recv{b}{M_1}}{{E}}{\send{b}{M_2}}$\\ \noindent$\beforems{\emph{RR}}{{E}}{I_1}{I_2} \triangleq \before{\recv{b}{M_1}}{{E}}{\recv{d}{M_2}}$ \\ where $\before{e_1}{{E}}{e_2}\triangleq \exists_{i,j\in dom({E})}.{E}[i] = e_1 \wedge {E}[j] = e_2 \wedge i \leq j$ \end{definition} Formalising the MOI is not as simple as it might seem. An obvious approach that does not work is to compute the semantics of the interaction protocol $\tau$, and then map each sequence ${I} \in \sem{\tau}$ to a set of message event traces. This does not work because the trace is linear, and therefore a total order, whereas a protocol can specify a partial order. An illustrative example is $\tau = (M_1 \cdot M_2) \; | \; M_3$. This simple protocol has three sequences of interactions: $\{ \langle M_1, M_2, M_3 \rangle, \langle M_1, M_3, M_2 \rangle, \langle M_3, M_1, M_2 \rangle \}$. Assume an RS message ordering interpretation, then each of the message sequences corresponds to exactly one sequence of events, giving\footnote{For readability we use $s(M)$ and $r(M)$ to abbreviate sending and receiving message $M$, eliding the identity of the agents involved.} $\{ \langle s(M_1), r(M_1), s(M_2), r(M_2), s(M_3), r(M_3)\rangle,$ $\langle s(M_1), r(M_1), $ $ s(M_3), r(M_3),$ $ s(M_2), r(M_2) \rangle, \langle s(M_3), r(M_3), s(M_1), r(M_1), s(M_2), $ $ r(M_2) \rangle \}$. However, the protocol does not specify any constraint on $M_3$, so should also allow other interpretations where the occurrences of $s(M_3)$ and $r(M_3)$ are not constrained relative to the other events, for example $ \langle s(M_1), r(M_1), s(M_3), s(M_2), r(M_2),$ $ r(M_3) \rangle $. Instead, we define a variant semantics, which is compositional. The semantics follow the standard semantics (Section~\ref{sec:standardsemantics}) with a few exceptions. Firstly, the semantics of an interaction $I$ is given as the sequence of sending the message, followed by receiving it (denoted respectively $s(I)$ and $r(I)$). Secondly, the semantics for a sequence $\tau_1 \cdot \tau_2$ is given by taking the semantics of $\tau_1$ and of $\tau_2$. These are then combined by interleaving them (rather than simply concatenating them), but with the constraint that the result must satisfy the appropriate MOI constraint ($\beforems{\emph{SS}}{{E}}{I_1}{I_2}$) for all possible final messages of $\tau_1$ ($I_1$) and all possible initial messages of $\tau_2$ ($I_2$). Determining initial and final messages is itself somewhat complex, and is done using partially ordered sets. A partially ordered set (poset) is a pair $(E, <)$ where $E$ is the set of elements (in this case send and receive events) and $<$ is a binary relation on $E$. We define the union operator to act piecewise on posets, and to take the transitive closure of the resulting relation, i.e. $(E_1, <_1) \cup (E_2, <_2) = (E_1 \cup E_2 , (<_1 \cup <_2)^*)$. We can then define the $\mathrm{poset}$ of an interaction protocol as follows \begin{eqnarray*} \mathrm{poset}(\epsilon) &=& (\varnothing, \varnothing)\\ \mathrm{poset}(I) &=& (\{I\},\varnothing)\\ \mathrm{poset}(\tau_1 \land \tau_2) &=& \mathrm{poset}(\tau_1) \cup \mathrm{poset}(\tau_2) \\ \mathrm{poset}(\tau_1 \mathop{|} \tau_2) &=& \mathrm{poset}(\tau_1) \cup \mathrm{poset}(\tau_2) \\ \mathrm{poset}(\tau_1 \lor \tau_2) &=& \mathrm{poset}(\tau_1) \cup \mathrm{poset}(\tau_2) \\ \mathrm{poset}(\tau_1 \cdot \tau_2) &=& \mathrm{poset}(\tau_1) \cdot \mathrm{poset}(\tau_2) \\ (E_1, <_1) \cdot (E_2, <_2) &=& (E_1 \cup E_2 , <_1 \cup <_2 \cup \{ (x,y) \; | \\ & & x \in \max(E_1,<_1) \land y \in \min(E_2,<_2) \}) \end{eqnarray*} Where we define a sequence of two posets $(E_1, <_1) \cdot (E_2, <_2)$ by collecting the orderings of each of $E_1$ and $E_2$, and adding additional ordering constraints between the maximal elements of $E_1$ and the minimal elements of $E_2$. We can now proceed to define $\sem{\tau}_{\moi}$. \begin{eqnarray*} \semAsync{\epsilon}{\moi}{} &=& \{ \epsilon \}\\ \semAsync{I}{\moi}{} &=& \{ \langle s(I), r(I) \rangle \} \\ \semAsync{\tau_1 \land \tau_2}{\moi}{} &=& \semAsync{\tau_1}{\moi}{} \cap \semAsync{\tau_1}{\moi}{} \\ \semAsync{\tau_1 \cdot \tau_2}{\moi}{} &=& \{ t \,|\, t_1 \in \semAsync{\tau_1}{\moi}{} \land t_2 \in \semAsync{\tau_2}{\moi}{} \land t \in t_1 \bowtie t_2 \land {} \\ & & \hspace*{3mm} \forall I_1 \in \mathrm{max}(\mathrm{poset}^{}(\tau_1)), \\ & & \hspace*{6mm} \forall I_2 \in \mathrm{min}(\mathrm{poset}^{}(\tau_2)): \beforems{\moi}{t}{I_1}{I_2} \} \\ \semAsync{\tau_1 \lor \tau_2}{\moi}{} &=& \semAsync{\tau_1}{\moi}{} \cup \semAsync{\tau_1}{\moi}{} \\ \semAsync{\tau_1 | \tau_2}{\moi}{} &=& \{ z \; | \; t_1 \in \semAsync{\tau_1}{\moi}{} \land t_2 \in \semAsync{\tau_2}{\moi}{} \land z \in t_1 \bowtie t_2\} \end{eqnarray*} Where $t_1 \bowtie t_2$ is the set of all interleavings of $t_1$ and $t_2$. \paragraph{Communication Model Semantics.}\label{sec:communicationmodelsemantics} We formalise the defined communication model semantics by defining for each communication model $CMi$ a corresponding language of event traces that incorporates the appropriate restriction, ruling out event sequences that violate the communication model. The definitions below are those already provided in Section \ref{sec:backRel}. For example, for $CM1$ the constraint is that immediately after each sending event in $u$ we have its corresponding receiving event, with nothing in the middle; etc. \begin{eqnarray*} \lang{CM1}{\mathcal{E}_\interactions} &=& \{{E} \in \lang{\events}{} | \forall_{a\transmsg{M_1}b\in\mathcal{I}}.\forall_{k\in dom({E})}. \send{a}{M_1} = {E}[k-1] \implies \\ & & \recv{b}{M_1} = {E}[k] \} \\ \lang{CM2}{\mathcal{E}_\interactions} &=& \{{E} \in \lang{\events}{} | \forall_{a\transmsg{M_1}b\in\mathcal{I}}.\forall_{c\transmsg{M_2}d\in\mathcal{I}}.\forall_{i,j,k,l\in dom({E})}. \\ & & \recv{b}{M_1} = {E}[i] \wedge \recv{d}{M_2} = {E}[j] \wedge \send{a}{M_1} = {E}[k] \wedge \\ & & \send{c}{M_2} = {E}[l] \wedge k < l \implies i < j\} \\ \lang{CM3}{\mathcal{E}_\interactions} &=& \{{E} \in \lang{\events}{} | \forall_{a\transmsg{M_1}b\in\mathcal{I}}.\forall_{a\transmsg{M_2}d\in\mathcal{I}}.\forall_{i,j,k,l\in dom({E})}. \\ & & \recv{b}{M_1} = {E}[i] \wedge \recv{d}{M_2} = {E}[j] \wedge \\ & & \send{a}{M_1} = {E}[k] \wedge \send{a}{M_2} = {E}[l] \wedge k < l \implies i < j\} \end{eqnarray*} \begin{eqnarray*} \lang{CM4}{\mathcal{E}_\interactions} &=& \{{E} \in \lang{\events}{} | \forall_{a\transmsg{M_1}b\in\mathcal{I}}.\forall_{c\transmsg{M_2}b\in\mathcal{I}}.\forall_{i,j,k,l\in dom({E})}. \\ & & \recv{b}{M_1} = {E}[i] \wedge \recv{b}{M_2} = {E}[j] \wedge \send{a}{M_1} = {E}[k] \wedge \\ & & \send{c}{M_2} = {E}[l] \wedge k < l \implies i < j\} \\ \lang{CM5}{\mathcal{E}_\interactions} &=& \{{E} \in \lang{\events}{} | \forall_{a\transmsg{M_1}b\in\mathcal{I}}.\forall_{a\transmsg{M_2}b\in\mathcal{I}}.\forall_{i,j,k,l\in dom({E})}. \\ & & \recv{b}{M_1} = {E}[i] \wedge \recv{b}{M_2} = {E}[j] \wedge \send{a}{M_1} \causalrel{{E}} \send{a}{M_2} \\ & & \implies i < j\} \\ & & \mbox{where } \send{a}{M_1}\causalrel{u}\send{b}{M_2} \iff\\ & & \hspace*{1cm}( (a = b \lor M_1=M_2) \wedge \\ & & \hspace*{1.2cm} \exists_{i,j \in dom(u)}.(u[i]=\send{a}{M_1} \wedge \send{b}{M_2}=u[j] \wedge i < j)) \\ & & \hspace*{1cm} \vee \hspace*{0.2cm}(\exists_{ev \in {E}}.\send{a}{M_1}\causalrel{u}ev\wedge ev\causalrel{u}\send{b}{M_2}) \\ \lang{CM6}{\mathcal{E}_\interactions} &=& \lang{\events}{} \end{eqnarray*} We can then apply a particular communication model to an \emph{interaction} protocol $\tau_i$ using $\semAsync{\tau_i}{\moi}{\emph{CM}}$, and to a \emph{message} protocol $\tau_m$ using $\semAsync{\tau_m}{}{\emph{CM}}$, which are defined as follows: \begin{eqnarray*} \semAsync{\tau_i}{\moi}{\emph{CM}} &=& \semAsync{\tau_i}{\moi}{}\cap\lang{\emph{CM}}{\eventsin{\interactionsin{\tau}}} \\ \semAsync{\tau_m}{}{\emph{CM}} &=& \semAsync{\tau_m}{}{}\cap\lang{\emph{CM}}{\eventsin{}(\tau)} \end{eqnarray*} \paragraph{Projection.}\label{sec:projection} \newcommand{\distrib}[1]{\ulcorner{#1}\urcorner} Projection is defined, intuitively, as focussing on the aspects of the protocol that are relevant for a given role. It is defined as follows, where we write $\tau^A$ to denote projecting trace $\tau$ for role $A$. \begin{eqnarray*} (\epsilon)^A &=& \epsilon \\ (\atomicInt{a}{M}{b})^A &=& \send{a}{M}, \mbox{if } a=A \\ &=& \recv{b}{M}, \mbox{if } b=A\\ &=& \epsilon, \mbox{otherwise} \\ (\send{a}{M})^A &=& \mbox{if } a=A \mbox{ then } \send{a}{M} \mbox{ else } \epsilon \\ (\recv{a}{M})^A &=& \mbox{if } a=A \mbox{ then } \recv{a}{M} \mbox{ else } \epsilon \\ (\tau_1 \otimes \tau_2)^A &=& (\tau_1)^A \otimes (\tau_2)^A \\ & & \mbox{Where $\otimes$ is any operator.} \end{eqnarray*} We then define the \emph{distribution} of $\tau$, denoted $\distrib{\tau}$, where $\tau$ involves roles $a_1 \ldots a_n$ as\footnote{We use $\|$ to distinguish between parallel composition of different agents, and parallel composition within a protocol. This distinction is used later in this section.}: \begin{eqnarray*} \distrib{\tau} &=& \tau^{a_1} \| \ldots \| \tau^{a_n} \end{eqnarray*} To make an example, let us consider again the scenario proposed in Section \ref{sec:intro}. Alice decided to book Carol's apartment and now Carol needs some pieces of information from Alice in order to complete the reservation. This information can be wrong or incomplete, and Carol might need to ask Alice twice or more times. This can be represented using a cyclic specification $$\mathit{reqInfo} = Alice \transmsg{Info} Carol ~\cdot~ $$ $$(Carol \transmsg{Wrong} Alice ~\cdot~ \mathit{reqInfo} ~\lor~ Carol\transmsg{Booked}Alice)$$ where if the information provided by Alice is not satisfactory, Carol tells Alice and asks for new one (recursion on $\mathit{reqInfo}$). Once Carol will be satisfied with Alice' answer, she will confirm the booking. Thanks to cyclic specifications, we can represent protocols with infinite behaviours. Let us consider $\mathit{main}$ as the combination of the two protocols: $\mathit{main} = \mathit{modifyRes} ~\cdot~ \mathit{reqInfo}$. The projection of $\mathit{main}$ on each single agent would generate \begin{eqnarray*} \distrib{\mathit{main}} &=& \mathit{main}^{Alice} \;\|\; \mathit{main}^{Bob} \;\|\; \mathit{main}^{Carol} \end{eqnarray*} \begin{eqnarray*} \mathit{main}^{Alice} &=& \mathit{modifyRes}^{Alice} ~\cdot~ \mathit{reqInfo}^{Alice}\\ \mathit{modifyRes}^{Alice} &=& \send{Alice}{Canc} ~\cdot~ \send{Alice}{Res}\\ \mathit{reqInfo}^{Alice} &=& \send{Alice}{Info} ~\cdot~ \\ & & (\recv{Alice}{Wrong} ~\cdot~ \mathit{reqInfo}^{Alice} ~\lor~ \recv{Alice}{Booked}) \end{eqnarray*} \begin{eqnarray*} \mathit{main}^{Bob} &=& \mathit{modifyRes}^{Bob} ~\cdot~ \mathit{reqInfo}^{Bob}\\ \mathit{modifyRes}^{Bob} &=& \recv{Bob}{Canc}\\ \mathit{reqInfo}^{Bob} &=& \epsilon \end{eqnarray*} \begin{eqnarray*} \mathit{main}^{Carol} &=& \mathit{modifyRes}^{Carol} ~\cdot~ \mathit{reqInfo}^{Carol}\\ \mathit{modifyRes}^{Carol} &=& \recv{Carol}{Res}\\ \mathit{reqInfo}^{Carol} &=& \recv{Carol}{Info} ~\cdot~ \\ & & (\send{Carol}{Wrong} ~\cdot~ \mathit{reqInfo}^{Carol} ~\lor~ \send{Carol}{Booked}) \end{eqnarray*} In order to define the semantics of a projected protocol we need to first define what we term a \emph{decision structure}. This is needed in the semantics in order to deal correctly with projected protocols. Specifically, the intuition for enactability (see Section~\ref{sec:enactability}) is that an interaction protocol $\tau$ involving, say, three roles $a$, $b$ and $c$ is enactable iff there exist three protocols $\tau^a$, $\tau^b$ and $\tau^c$ such that their concurrent interleaving results in the same behaviour as the original protocol. However, when a protocol contains choices ($\lor$) we need to ensure that the occurrences of $\lor$ in each of $\tau^a$, $\tau^b$ and $\tau^c$ arising from the same $\lor$ in $\tau$ are treated consistently. For example, consider the protocol $\tau = a\transmsg{M_1}b \lor a\transmsg{M_2}c$. This protocol is simple: it specifies that agent $a$ can either send a message (``$M_1$'') to $b$, or it can send a different message (``$M_2$'') to agent $c$. When we distribute the protocol by projecting it (see Section~\ref{sec:projection}) and forming $\tau^a \| \tau^b \| \tau^c$ we obtain the distributed protocol $ (\send{a}{M_1} \lor \send{a}{M_2}) \| (\recv{b}{M_1} \lor \varepsilon) \| (\varepsilon \lor \recv{c}{M_2}) $. However, if we interpret each $\lor$ independently (as the semantics would naturally do) then we can have \emph{inconsistent} choices. For example, we could have $ (\send{a}{M_1}) \| (\varepsilon) \| (\varepsilon) $ where the message is sent by $a$, but $b$ does not elect to receive it. So what we need to do is ensure that each of the three occurrences of ``$\lor$'' represent the \emph{same} choice, and that the choice should be made consistently. The heart of the issue is that the trace expression notation offers a choice operator ($\lor$), which is adequate for global protocols. However, for local protocols it is important to be able to distinguish between a choice that represents a free (local) choice, and a choice that is forced by earlier choices. In this example, $a$ can freely choose whether to send $M_1$ or $M_2$. However, the choice of $b$ whether to receive $M_1$ or not is not a free choice, but is forced by $a$'s earlier choice. Our semantics handles this by defining a \emph{decision structure} which is used to enforce consistent choices. Formally, given a protocol $\tau$ we define $d(\tau)$ as a set of \emph{decision structures} (formal definition below). A decision structure is a syntactic structure that mirrors the structure of $\tau$, except that each $\lor$ is annotated with a decision (e.g.~$L$ or $R$). We define three operations defined on a decision structure: to get the sub-decision structure corresponding to the left part (denoted $d.L$), to get the right part ($d.R$) and to get the decision (L or R) associated with the current $\lor$ node (denoted $d.D$). We define $d(\tau)$ to create a set of decision structures, each of which corresponds to the structure of $\tau$, but where all possible assignments of decisions are made. Observe that If $\tau$ contains $N$ occurrences of $\lor$ then the set $d(\tau)$ contains $2^N$ elements. For example, given $\tau = \atomicInt{a}{M_1}{b} \lor \atomicInt{a}{M_2}{b}$ we have that $ d(\tau) = \{ \_ \overset{L}{\lor} \_, \_ \overset{R}{\lor} \_ \} $ where we use $\_$ to indicate an irrelevant part of a decision structure, and $\overset{L}{\lor}$ to denote a node tagged with a decision $L$. In addition to decisions of $L$ and $R$, the definition of $d(\tau_1 \lor \tau_2)$ has a second case ($\ldots \cup \{ t_1 \overset{LR}{\lor} t_2 \ldots$). The reason is that it is only possible to enforce consistent choice if the choice is made by a single agent. If this is not the case, then we annotate with ``$LR$'' to indicate that a mixed choice is possible. For example, given $\tau = \atomicInt{b}{M_1}{a} \lor \atomicInt{a}{M_2}{b}$ we have that $ d(\tau) = \{ \_ \overset{LR}{\lor} \_ \} $ because $\ags{\tau_1} = \{b\} \neq \ags{\tau_2} = \{a\}$. \begin{eqnarray*} d(\varepsilon) &=& \{\varepsilon\} \\ d(I) &=& \{I\} \\ d(\tau_1 \lor \tau_2) &=& \{t_1 \overset{x}{\lor} t_2 \,|\, t_1 \in d(\tau_1) \land t_2 \in d(\tau_2) \\ & & \hspace*{3mm} {} \land x \in \{R,L\} \land \ags{\tau_1} = \ags{\tau_2} \land |\ags{\tau_1}|=1 \} \\ & & {} \cup \{t_1 \overset{LR}{\lor} t_2 \,|\, t_1 \in d(\tau_1) \land t_2 \in d(\tau_2) \\ & & \hspace*{3mm} {} \land ( (\ags{\tau_1} \neq \ags{\tau_2}) \lor (|\ags{\tau_1}| \neq 1)) \} \\ & & \mbox{where } \ags{\tau} = \{p \; | \; p \transmsg{M} r \in \min(\mathrm{poset}(\tau)) \} \\ d(\tau_1 \oplus \tau_2) &=& \{t_1 \oplus t_2 \; | \; t_1 \in d(\tau_1) \land t_2 \in d(\tau_2)\} \\ (\tau_L \otimes \tau_R).L &=& \tau_L \hspace*{0.85cm} (\tau_L \otimes \tau_R).R \; \; = \; \; \tau_R \\ (\tau_L \overset{X}{\lor} \tau_R).D &=& X \end{eqnarray*} Where $\otimes$ is any operator, and $\oplus$ is any operator other than $\lor$. We now specify the semantics of a distributed protocol, denoted $\sem{\tau}_{\mathrm{dist}}$. The semantics is defined in terms of a union over possible decision structures (first line). The remaining of the equations for the semantics carry along the decision structure, and follow it in recursive calls, and for the semantics of $\lor$ it enacts the decision specified in the structure, rather than considering both sub-protocols. Note that projection is defined using $\|$ rather than the usual $|$ - this differs in the semantics below, in that $\|$ passes the \emph{same} decision structure to both arguments. This ensures consistency between agents, but not within agents. \begin{eqnarray*} \sem{\tau}_{\mathrm{dist}} &=& \bigcup_{dt \in d(\tau)} \sem{\tau^{a_1} \| \ldots \| \tau^{a_n}}^{dt} \\ \sem{M}^{dt} &=& \{\langle M \rangle \} \\ \sem{\varepsilon}^{dt} &=& \{ \langle \rangle \} \\ \sem{\tau_1 \cdot \tau_2}^{dt} &=& \{ t_1 \circ t_2 | t_1 \in \sem{\tau_1}^{dt.L} \land t_2 \in \sem{\tau_2}^{dt.R} \} \\ \sem{\tau_1 \land \tau_2}^{dt} &=& \sem{\tau_1}^{dt.L} \cap \sem{\tau_2}^{dt.R} \\ \sem{\tau_1 \lor \tau_2}^{dt} &=& \mbox{if } dt.D=R \mbox{ then } \sem{\tau_2}^{dt.R} \\ & & \mbox{ elseif } dt.D=L \mbox{ then } \sem{\tau_1}^{dt.L}\\ & & \mbox{ else } \sem{\tau_2}^{dt.R} \cup \sem{\tau_1}^{dt.L} \\ \sem{\tau_1 | \tau_2}^{dt} &=& \{ z | t_1 \in \sem{\tau_1}^{dt.L} \land t_2 \in \sem{\tau_2}^{dt.R} \land z \in t_1 \bowtie t_2\} \\ \sem{\tau_1 \| \tau_2}^{dt} &=& \{ z | t_1 \in \sem{\tau_1}^{dt} \land t_2 \in \sem{\tau_2}^{dt} \land z \in t_1 \bowtie t_2\} \end{eqnarray*} Where $t_1 \bowtie t_2$ is the set of all interleavings of $t_1$ and $t_2$, and $\circ$ is concatenation over sequences. Note that if $\tau$ does not contain any occurrences of $\lor$ then the semantics above reduce to the standard semantics. Finally, we define $\semAsync{\tau_i}{\mathrm{dist}}{\emph{CM}}$, which computes the semantics of an interaction protocol $\tau_i$ by distributing it, and also applies a particular communication model $\emph{CM}$. \begin{eqnarray*} \semAsync{\tau_i}{\mathrm{dist}}{\emph{CM}} &=& \semAsync{\tau_i}{\mathrm{dist}}{}\cap\lang{\emph{CM}}{\eventsin{\interactionsin{\tau}}} \end{eqnarray*} \paragraph{Enactability.}\label{sec:enactability} \begin{figure*}[!t] \begin{scriptsize} \begin{tabular}{|l|l|l|l|l|} \hline \multicolumn{5}{|c|}{$\atomicInt{a}{M_1}{b} ~\cdot~ \atomicInt{b}{M_5}{c}$} \\\hline CM&RS&RR&SS&SR\\\hline CM1 & \ding{52} & \ding{52} & \ding{52} & \ding{52}\\ CM2 & \ding{52} & (\ding{52}) & (\ding{52}) & (\ding{52})\\ CM3 & \ding{52} & (\ding{52}) & (\ding{52}) & (\ding{52})\\ CM4 & \ding{52} & (\ding{52}) & (\ding{52}) & (\ding{52})\\ CM5 & \ding{52} & (\ding{52}) & (\ding{52}) & (\ding{52})\\ CM6 & \ding{52} & (\ding{52}) & (\ding{52}) & (\ding{52})\\ \hline \end{tabular} \enskip \begin{tabular}{|l|l|l|l|l|} \hline \multicolumn{5}{|c|}{$\atomicInt{a}{M_1}{b} ~\cdot~ \atomicInt{a}{M_2}{c}$} \\\hline CM&RS&RR&SS&SR\\\hline CM1 & \ding{52} & \ding{52} & \ding{52} & \ding{52}\\ CM2 & \ding{56} & \ding{52} & \ding{52} & (\ding{52})\\ CM3 & \ding{56} & \ding{52} & \ding{52} & (\ding{52})\\ CM4 & \ding{56} & \ding{56} & \ding{52} & (\ding{52})\\ CM5 & \ding{56} & \ding{56} & \ding{52} & (\ding{52})\\ CM6 & \ding{56} & \ding{56} & \ding{52} & (\ding{52})\\ \hline \end{tabular} \enskip \begin{tabular}{|l|l|l|l|l|} \hline \multicolumn{5}{|c|}{$\atomicInt{a}{M_1}{b} ~\cdot~ \atomicInt{c}{M_6}{b}$} \\\hline CM&RS&RR&SS&SR\\\hline CM1 & \ding{52} & \ding{52} & \ding{52} & \ding{52}\\ CM2 & \ding{56} & \ding{52} & \ding{52} & (\ding{52})\\ CM3 & \ding{56} & \ding{52} & \ding{56} & (\ding{52})\\ CM4 & \ding{56} & \ding{52} & \ding{52} & (\ding{52})\\ CM5 & \ding{56} & \ding{52} & \ding{56} & (\ding{52})\\ CM6 & \ding{56} & \ding{52} & \ding{56} & (\ding{52})\\ \hline \end{tabular} \enskip \begin{tabular}{|l|l|l|l|l|} \hline \multicolumn{5}{|c|}{$\atomicInt{a}{M_1}{b} ~\cdot~ \atomicInt{c}{M_4}{a}$} \\\hline CM&RS&RR&SS&SR\\\hline CM1 & \ding{52} & \ding{52} & \ding{52} & \ding{52}\\ CM2 & \ding{56} & \ding{56} & \ding{56} & \ding{52}\\ CM3 & \ding{56} & \ding{56} & \ding{56} & \ding{52}\\ CM4 & \ding{56} & \ding{56} & \ding{56} & \ding{52}\\ CM5 & \ding{56} & \ding{56} & \ding{56} & \ding{52}\\ CM6 & \ding{56} & \ding{56} & \ding{56} & \ding{52}\\ \hline \end{tabular} \\[3mm] \begin{tabular}{|l|l|l|l|l|} \hline \multicolumn{5}{|c|}{$\atomicInt{a}{M_1}{b} ~\cdot~ \atomicInt{a}{M_2}{b}$} \\\hline CM&RS&RR&SS&SR\\\hline CM1 & \ding{52} & \ding{52} & \ding{52} & \ding{52}\\ CM2 & \ding{56} & \ding{52} & \ding{52} & (\ding{52})\\ CM3 & \ding{56} & \ding{52} & \ding{52} & (\ding{52})\\ CM4 & \ding{56} & \ding{52} & \ding{52} & (\ding{52})\\ CM5 & \ding{56} & \ding{52} & \ding{52} & (\ding{52})\\ CM6 & \ding{56} & (\ding{52}) & (\ding{52}) & (\ding{52})\\ \hline \end{tabular} \enskip \begin{tabular}{|l|l|l|l|l|} \hline \multicolumn{5}{|c|}{$\atomicInt{a}{M_1}{b} ~\cdot~ \atomicInt{b}{M_3}{a}$} \\\hline CM&RS&RR&SS&SR\\\hline CM1 & \ding{52} & \ding{52} & \ding{52} & \ding{52}\\ CM2 & \ding{52} & (\ding{52}) & (\ding{52}) & (\ding{52})\\ CM3 & \ding{52} & (\ding{52}) & (\ding{52}) & (\ding{52})\\ CM4 & \ding{52} & (\ding{52}) & (\ding{52}) & (\ding{52})\\ CM5 & \ding{52} & (\ding{52}) & (\ding{52}) & (\ding{52})\\ CM6 & \ding{52} & (\ding{52}) & (\ding{52}) & (\ding{52})\\ \hline \end{tabular} \enskip \begin{tabular}{|l|l|l|l|l|} \hline \multicolumn{5}{|c|}{$\atomicInt{a}{M_1}{b} ~\lor~ \atomicInt{a}{M_2}{c}$} \\\hline CM&RS&RR&SS&SR\\\hline CM1 & \ding{52} & \ding{52} & \ding{52} & \ding{52}\\ CM2 & \ding{52} & \ding{52} & \ding{52} & \ding{52}\\ CM3 & \ding{52} & \ding{52} & \ding{52} & \ding{52}\\ CM4 & \ding{52} & \ding{52} & \ding{52} & \ding{52}\\ CM5 & \ding{52} & \ding{52} & \ding{52} & \ding{52}\\ CM6 & \ding{52} & \ding{52} & \ding{52} & \ding{52}\\ \hline \end{tabular} \enskip \begin{tabular}{|l|l|l|l|l|} \hline \multicolumn{5}{|c|}{$\atomicInt{a}{M_1}{b} ~\lor~ \atomicInt{b}{M_3}{a}$} \\\hline CM&RS&RR&SS&SR\\\hline CM1 & \ding{52} & \ding{52} & \ding{52} & \ding{52}\\ CM2 & \ding{56} & \ding{56} & \ding{56} & \ding{56}\\ CM3 & \ding{56} & \ding{56} & \ding{56} & \ding{56}\\ CM4 & \ding{56} & \ding{56} & \ding{56} & \ding{56}\\ CM5 & \ding{56} & \ding{56} & \ding{56} & \ding{56}\\ CM6 & \ding{56} & \ding{56} & \ding{56} & \ding{56}\\ \hline \end{tabular} \caption{Automatically generated analyses of enactability}\label{figtable2}\label{figtable1} \end{scriptsize} \end{figure*} We are now finally in a position to define enactability. The intuition is that an interaction protocol $\tau$ is enactable iff the semantics of $\tau$, with respect to a selected message ordering interpretation and communication model, can be realised by a distributed version of the protocol. In other words, if there exists for each role $r$ a corresponding message protocol $\tau_r$ such that the combination of these protocols realises the same behaviour as $\tau$. However, instead of considering whether there exists some $\tau_r$, we let $\tau_r = \tau^r$, i.e.~we take for each role the projected protocol as its protocol. We also consider a notion of \emph{weak} enactability. This applies in a situation where the a distributed enactment is able to avoid violating the behaviour specified by $\tau$, but is not able to recreate all of the behaviours that $\tau$ specifies. This situation can arise with weaker message ordering interpretations (see below for examples). Weak enactability can also arise in situations where two ordered messages have two overlapping roles (e.g. $\tau = \atomicInt{a}{M_1}{b} \cdot \atomicInt{b}{M_2}{a}$). In this situation the projection operator is too strict: it has $\tau^b = r(M_1) \cdot s(M_2)$, but if we adopt an SR message ordering interpretation, then we do not need to ensure that $M_2$ is sent after $M_1$ is received, only that $M_1$ is sent before $M_2$ is received, which role $a$ can ensure on its own. \begin{definition}[Strongly/Weakly Enactable] \label{sw-enact-async-def} Let $\tau$ be an interaction protocol, $\{ a_1, a_2,$ $...,$ $a_n \}$ the set of agents involved in $\tau$, $\moi\in\emph{MOISet}$ a message order interpretation and $\emph{CM}\in\emph{ComModel}$ a communication model. We say that, $\tau$ is strongly (weakly) enactable, for $\moi$ semantics in $\emph{CM}$ model iff the decomposition of $\tau$ through projection on its agents $\{ a_1,a_2,...,a_n \}$ recognizes the same (a subset of) traces recognized by $\tau$. Formally: \begin{eqnarray*} \mathit{enact}(\tau)^{\emph{CM}}_{\moi} & \mbox{iff} & \semAsync{\tau}{\mathrm{dist}}{\emph{CM}} = \semAsync{\tau}{\moi}{\emph{CM}} \\ \mathit{weak\_enact}(\tau)^{\emph{CM}}_{\moi} & \mbox{iff} & \semAsync{\tau}{\mathrm{dist}}{\emph{CM}} \subseteq \semAsync{\tau}{\moi}{\emph{CM}} \end{eqnarray*} \end{definition} If a protocol is weak enactable, the interleaving of the corresponding local protocols generates a subset of its traces (with a fixed moi and communication model). In practice, this means that our implementation is sound (generates only valid traces), but it is not complete (not all the traces are generated). Consequently, our system will be more restrictive than we wanted. Figure \ref{figtable2} show the results of applying this definition to a number of cases, with different message ordering interpretation, and different communication models. These tables were all generated by the Haskell implementation of the definitions in this paper, in which \ding{52} and (\ding{52}) denote \emph{strongly} and \emph{weakly} enactable, respectively. The prototype counts \textasciitilde300 LOC. It implements the trace expression standard semantics, message order interpretation, communication model semantics and enactability check\footnote{The code is available on the web at: \url{http://enactability.altervista.org/}}. Looking at the tables in Figure~\ref{figtable1}, we make the following observations. Firstly, CM1 is quite strict: all the cases considered are enactable under CM1, regardless of the selected message ordering interpretation. This is expected: we know that CM1 is quite strong. Secondly, for many examples there is not a difference in enactability with the different communication models (other than CM1), except where the communication model corresponds to the combination of MOI and the pattern in the protocol. For example, in the top row, second table from the right, the simple protocol is enactable given SS message ordering interpretation only with CM2 and CM4 (and, of course, CM1). This is because for this protocol both messages are received by the same agent but sent by different agents, and, given an RR MOI, the desired constraint that agent $B$ receives the first message before the second, can only be enforced using a communication model that guarantees delivery of messages to the same recipient in the order in which messages were sent. Both CM2 and CM4 provide this guarantee (in fact CM4 provides exactly this, and CM2 is stronger). Thirdly, RS appears to be a good choice for message ordering interpretation, since it is the only MOI where protocols are never weakly enactable. For the other message ordering interpretations, there are protocols that are only weakly enactable (for communication models other than CM1). A protocol being weakly enactable indicates that the desired behaviour specified by the MOI is too loose: it permits behaviours that the distributed realisation cannot realise. On the other hand, in the case of the left-most table on the bottom row (protocol $\atomicInt{a}{M_1}{b} ~\cdot~ \atomicInt{a}{M_2}{b}$), the protocol is not enactable under RS (except for CM1), but is enactable under SS and under RR. Turning to SR, we observe that it seems to be too weak: almost all the protocols in the figure are enactable (although in most cases only weakly enactable). Returning to the example from the introduction: $$\mathit{modifyRes} = Alice \transmsg{Canc} Bob ~\cdot~ Alice \transmsg{Res} Carol$$ where $a1 \transmsg{M} a2$ this example corresponds to the second table from the left in the top row of Figure~\ref{figtable1}. This shows that, if one desires an $RR$ MOI, i.e.~that what is meant by $Canc$ coming before $Res$ is that Bob receives the $Canc$ message before Carol receives the $Res$ message, then the underlying message communication must be $CM1$, $CM2$ or $CM3$, in order for the protocol to be enactable. \section{Discussion}\label{sec:relatedwork} Despite the large amount of work on enactability, very few approaches consider how message ordering and decision structures affect its definition, very few come with an implemented prototype, and none considers the issues raised by the communication model. Although one motivation might be that it is generally desirable to have robust protocol specifications that are independent of the underlying platform implementation, also ensuring separation of concerns, we observe that robustness could make the protocol too complex, or harder to maintain. Considering what the underlying implementation guarantees w.r.t. communication model, we can relax our specifications, and above all, a protocol that is not enactable in some platform, can be in some other. This makes our work relevant to platform designers, and protocol designers. Taking all these features into account in a unified semantic-driven way, and demonstrating the potential of the approach on a highly expressive protocol language, are the innovative and original features of this contribution. Desai and Singh \cite{DBLP:conf/aaai/DesaiS08} limit their investigation to the RS message ordering interpretation, that they consider the standard of correctness. Hence, despite the nice introduction they provide to other message orderings and to the problems they might raise, the definition of enactability they provide is not parametric in the MOI. Lanese et al. \cite{DBLP:conf/sefm/LaneseGMZ08} move a step further, but the generality of their approach is still limited. They define three different notions of enactability, that they name conformance: sender conformance, receiver conformance, and disjoint conformance. That approach is more flexible that the one by Desai and Singh, but less general than ours, where the definition of enactability is parametric in the MOI and does not require different cases. Also, they only consider how sequence and choice are affected by MOIs, leaving the study of other operators for the future. Moreover, when discussing interaction protocols whose most external operator is a choice, they put a very strong constraint for enactability, namely that the agents involved in the two branches of the choice (excluding the agents involved in the choice itself) are the same. We added decision structures to overcome this restriction, and provide a notion of enactability that can succeed even when that constraint is not met. Neither Desai and Singh, nor Lanese et al., use formalisms for protocol representation as expressive as trace expressions, and neither of them presents experiments obtained from a working prototype, as we do. With respect to the introduction of decision structures to remove unnecessary restrictions on enactability of protocols when choice is involved, our proposal is similar to that by Qiu et al., \cite{DBLP:conf/www/QiuZCY07}, as for the other works we have discussed in this section, we implemented our enactability checker, whereas their work only provides definitions. Additionally, our approach is simpler in that we do not need to label the choice operator with agents as they do. In the future, we will address both theoretical and practical issues. On the theoretical side, we will carry out a systematic analysis of the relationships between Communication Model and Message Ordering Interpretation, to identify those combinations which provide some guarantees by design. We will also consider the relationships between enactability and distributed monitorability \cite{DBLP:conf/atal/FerrandoAM17}, as they might turn out to resort to the same definition. On the practical part, we plan to improve our working prototype to provide a useful tool to assess protocols for enactability. Apart from providing a user-friendly interface, a key issue to address will be to provide a way to isolate the part of a non-enactable protocol that makes it non-enactable. Also, trace expressions are interpreted in a coinductive way \cite{Sangiorgi:2009:OBC:1516507.1516510} to represent infinite traces of events. Since Haskell does not support coinduction, the existing prototype can be only used on acyclic message and interactions protocols. Haskell has been chosen because the implementation mimics the semantics requiring next to no effort. In order to fully implement the proposed features we are planning to develop the enactability check using SWI-Prolog\footnote{\url{http://www.swi-prolog.org}}, which natively supports coinduction. To stress-test the prototype and assess its performance from a qualitative and quantitative viewpoint we plan to create a library of interaction protocols known to be ``problematic'' w.r.t. enactability, and perform systematic experiments. Finally, this work highlighted the need of characterising the existing agent infrastructures like Jade \cite{JadeBook}, Jason \cite{Bordini:2007:PMS:1197104}, Jadex \cite{Pokahr2005}, etc, in terms of the communication model they support. This would allow us to state if a protocol is enactable on a given infrastructure, strengthening the potential of our proposal to be exploited in real applications. \bibliographystyle{unsrt}
{ "attr-fineweb-edu": 1.491211, "attr-cc_en_topic": 0, "domain": "arxiv" }
BkiUbAQ4ubnjoqgDUed8
\section{Introduction} \label{sec:intro} Extractive summarization remains a simple and fast approach to produce summaries which are grammatical and accurately represent the source text. In the news domain, these systems are able to use a dominant signal: the position of a sentence in the source document. Due to journalistic conventions which place important information early in the articles, the lead sentences often contain key information. In this paper, we explore how systems can look beyond this simple trend. Naturally, automatic systems have all along exploited position cues in news as key indicators of important content \citep{Schiffman:2002:EMS,hong2014improving,ext_bert}. The `lead' baseline is rather strong in single-document news summarization \citep{brandow1995automatic,nenkova2005automatic}, with automatic systems only modestly improving the results. Nevertheless, more than 20-30\% of summary-worthy sentences come from the second half of news documents \citep{data2_nallapati2016abstractive,kedzie2018content}, and the lead baseline, as shown in Table \ref{tab:lead_ex}, does not always produce convincing summaries. So, systems must balance the position bias with representations of the semantic content throughout the document. Alas, preliminary studies \citep{kedzie2018content} suggest that even the most recent neural methods predominantly pick sentences from the lead, and that their content selection performance drops greatly when the position cues are withheld. \begin{table}[t] \centering \small \begin{tabular}{|p{0.94\linewidth}|} \hline \textbf{Lead-3:} Bangladesh beat fellow World Cup quarter-finalists Pakistan by 79 runs in the first one-day international in Dhaka. Tamim Iqbal and Mushfiqur Rahim scored centuries as Bangladesh made 329 for six and Pakistan could only muster 250 in reply. Pakistan will have the chance to level the three-match series on Sunday when the second odi takes place in Mirpur. \\ \hline \textbf{Reference:} Bangladesh beat fellow World Cup quarter-finalists Pakistan by 79 runs. Tamim Iqbal and Mushfiqur Rahim scored centuries for Bangladesh. Bangladesh made 329 for six and Pakistan could only muster 250 in reply. Pakistan will have the chance to level the three-match series on Sunday. \\ \hline \hline \textbf{Lead-3}: Standing up for what you believe. What does it cost you? What do you gain? \\ \hline \textbf{Reference:} Indiana town's Memories Pizza is shut down after online threat. Its owners say they'd refuse to cater a same-sex couple's wedding. \\ \hline \end{tabular} \caption{`Lead' (first 3 sentences of source) can produce extremely faithful (top) to disastrously inaccurate (bottom) summaries. Gold standard summaries are also shown.} \label{tab:lead_ex} \end{table} In this paper, we verify that sentence position and lead bias dominate the learning signal for state-of-the-art neural extractive summarizers in the news domain. We then present techniques to improve content selection in the face of this bias. The first technique makes use of `unbiased data' created by permuting the order of sentences in the training articles. We use this shuffled dataset for pre-training, followed by training on the original (unshuffled) articles. The second method introduces an auxiliary loss which encourages the model's scores for sentences to mimic an estimated score distribution over the sentences, the latter computed using ROUGE overlap with the gold standard. We implement these techniques for two recent reinforcement learning based systems, RNES \citep{DBLP:conf/aaai/WuH18} and BanditSum \citep{dong2018banditsum}, and evaluate them on the CNN/Daily Mail dataset \citep{hermann2015teaching}. We find that our auxiliary loss achieves significantly better ROUGE scores compared to the base systems, and that the improvement is even more pronounced when the true best sentences appear later in the article. On the other hand, the pretraining approach produces mixed results. We also confirm that when summary-worthy sentences appear late, there is a large performance discrepancy between the oracle summary and state-of-the-art summarizers, indicating that learning to balance lead bias with other features of news text is a noteworthy issue to tackle. \section{Related Work}\label{sec:related_work} Modern summarization methods for news are typically based on neural network-based sequence-to-sequence learning \cite{cnn1_kalchbrenner2014convolutional,cnn2_kim2014convolutional,rnn2_chung2014gru,ext2_2015Yin,ext3_cao2015learning,ext4_cheng2016neural,ext5_summarunner,narayan2018don,neusum}. In MLE-based training, extractive summarizers are trained with gradient ascent to maximize the likelihood of heuristically-generated ground-truth binary labels \citep{ext5_summarunner}. Many MLE-based models do not perform as well as their reinforcement learning-based (RL) competitors that directly optimize ROUGE \cite{abs5_paulus2017deep,DBLP:Narayan/2018,dong2018banditsum,DBLP:conf/aaai/WuH18}. As RL-based models represent the state of the art for extractive summarization, we analyze them in this paper. The closest work to ours is a recent study by \citet{kedzie2018content} which showed that MLE-based models learn a significant bias for selecting early sentences when trained on news articles as opposed to other domains. As much as 58\% of selected summary sentences come directly from the lead. Moreover, when these models are trained on articles whose sentences are randomly shuffled, the performance drops considerably for news domain only. While this drop could be due to the destruction of position cues, it may also arise because the article's coherence and context were lost. In this paper, we employ finer control on the distortion of sentence position, coherence, and context, and confirm that performance drops are mainly due to the lack of position cues. We also propose the first techniques to counter the effects of lead bias in neural extractive systems. \section{Base Models for Extractive Summarization} In supervised systems, given a document $D = \{ s_1, . . . , s_n \}$ with $n$ sentences, a summary can be seen as set of binary labels $y_1,\dots, y_n \in \{0, 1\}$, where $y_i = 1$ indicates that the $i$-th sentence is included in the summary. We choose to experiment with two state-of-the-art RL-based extractive models: {\bf RNES} \cite{DBLP:conf/aaai/WuH18} and {\bf BanditSum} \citep{dong2018banditsum}. Both employ an encoder-decoder structure, where the encoder extracts sentence features into fixed-dimensional vector representations $h_1, \dots,h_n$, and a decoder produces the labels $y_1,\dots, y_n$ based on these sentence representations. RNES uses a CNN+bi-GRU encoder, and BanditSum a hierarchical bi-LSTM. RNES's decoder is \textit{auto-regressive}, meaning it predicts the current sentence's label based on decisions made on previous sentences; i.e., $y_t = f(D,h_t, y_{1:t-1})$. In BanditSum, there is no such dependence: it produces affinity scores for each sentence and the top scoring sentences are then selected. \section{Lead Bias of News Systems}\label{sec:lead_bias} \begin{table*}[h] \centering \small \begin{tabular}{c|ccccc|cc} \toprule \diagbox{train setting}{test setting} &original & random & reverse & insert-lead & insert-lead3 & Mean & Std. Dev. \\ \hline Lead-3 baseline &32.68 & 22.81&17.94&27.67&27.68 &25.76 &5.00\\ \hline original & \textbf{33.85} & 26.18 &20.71 &31.71 & 31.11 &28.71 & 4.72\\ random & 30.88 & \textbf{29.70} & 29.79 & 29.97 & 30.09 &\textbf{30.09}& \textbf{0.42}\\ reverse & 21.35 & 26.32 & \textbf{33.59} & 21.63 & 21.65 &24.91 & 4.72 \\ insert-lead & 33.21 & 26.07 & 20.70 & \textbf{33.41} & 31.59 &29.00 &4.93 \\ insert-lead3 & 32.29 & 25.57 & 20.22 & 32.92 & \textbf{32.15} &28.63&4.98 \\ \bottomrule \end{tabular} \caption{BanditSum's performance---calculated as the average between ROUGE-1,-2, and -L F1---on the validation set of the CNN/Daily Mail corpus. The sentence position information is perturbed at different levels, as explained in Section \ref{sec:lead_bias}.} \label{tab:data_manipulation} \end{table*} First, we investigate the impact of sentence position on our models. We manipulate the \textbf{original} CNN/Daily Mail dataset to preserve sentence position information at different levels. In the \textbf{random} setting, sentences are shuffled randomly; in \textbf{reverse}, they are in reverse order; in \textbf{insert-lead} and \textbf{insert-lead3}, we insert an out-of-document sentence (chosen randomly from the corpus) as the first sentence or randomly as one of the first three sentences, respectively. In Table \ref{tab:data_manipulation}, we show BanditSum's performance,\footnote{We notice the same trends on RNES. } when trained and tested on the various datasets. All models (except random) perform worse when tested on a mismatched data perturbation. Even when the distortion is at a single lead position in \textbf{insert-lead} and \textbf{insert-lead3}, the performance on the original data is significantly lower than when trained without the distortion. These results corroborate \citet{kedzie2018content}'s findings for RL-based systems. Interestingly, the \textbf{random} model has the best mean performance and the lowest variation indicating that completely removing the position bias may allow a model to focus on learning robust sentence semantics. \section{Learning to Counter Position Bias} We present two methods which encourage models to locate key phrases at diverse parts of the article. \subsection{Multi-Stage Training} This technique is inspired by the robust results from the \textbf{random} model in section \ref{sec:lead_bias}. We implement a multi-stage training method for both BanditSum and RNES where in the first few epochs, we train on an `unbiased' dataset where the sentences in every training document are randomly shuffled. We then fine-tune the models by training on the original training articles. The goal is to prime the model to learn sentence semantics independently of position, and then introduce the task of balancing semantics and positional cues. \subsection{ROUGE-based Auxiliary Loss} We observed that BanditSum tends to converge to a low-entropy policy, in the sense that the model's affinity scores are either 1 or 0 at the end of training. Furthermore, over 68\% of its selections are from the three leading sentences of the source. Regularizing low-entropy policies can increase a model's propensity to explore potentially good states or stay close to a known good policy \citep{nachum2017improving,galashov2019information}. We extend this idea to summarization by introducing a ROUGE-based loss which regularizes the model policy using an estimate of the value of individual sentences. These sentence-level estimates are computed as a \textit{distribution} $P_R$: \begin{equation} P_R( x = i ) = \frac{r(s_i, \mathcal{G})}{\sum_{j=1}^{n}{r(s_j, \mathcal{G})}}, \end{equation} \noindent where $r$ is the average of ROUGE-1, -2 and \mbox{-L} F\textsubscript{1} scores between sentence $s_i$ in the article and the reference summary $\mathcal{G}$. We would like the model's predictive distribution $P_\mathcal{M}$ to approximately match $P_R$. To compute $P_\mathcal{M}$, we normalize the predicted scores from a non-auto-regressive model. In an auto-regressive model such as RNES, the decision of including a sentence depends on those selected so far. So a straightforward KL objective is hard to implement, and we use this technique for BanditSum only. Our auxiliary loss is defined as the KL divergence: $\mathcal{L}_{\KL} = D_{\KL} \infdivx{P_R}{P_\mathcal{M}}$. The update rule then becomes: \begin{equation} \label{eq:kl_loss} \theta^{(t+1)} = \theta^{(t)} + \alpha \left( \nabla \mathcal{L}_{\mathcal{M}}(\theta^{(t)}) + \beta \nabla \mathcal{L}_{\KL}(\theta^{(t)}) \right) \end{equation} where $\theta^{(t)}$ represents the model's parameters at time step $t$, $\mathcal{L}_{\mathcal{M}}$ is the original model's loss function, and $\beta$ is a hyperparameter. \section{Experimental Setup} We use the CNN/Daily Mail dataset \citep{hermann2015teaching} with the standard train/dev/test splits of 287,227/13,368/11,490. To avoid inconsistencies, we built on top of the author-provided implementations for BanditSum and our faithful reimplementation of RNES. To reduce training time, we pre-compute and store the average of ROUGE-1, -2, and -L for every sentence triplet of each article, using a HDF5 table and PyTables \cite{pytables, hdf5}. This allows for a considerable increase in training speed. We limit the maximum number of sentences considered in an article to the first 100. All the models were trained for 4 epochs. For the multi-stage training, we pretrain for 2 epochs, then train on the original articles for 2 epochs. We set the auxiliary loss hyperparameters $\alpha=1e-4$ and $\beta=0.0095$ in eq. \ref{eq:kl_loss} based on a grid search using the Tune library \cite{tune}. We also train a baseline {\bf entropy} model by replacing $\mathcal{L}_{\KL}$ with the negated entropy of $P_\mathcal{M}$ in eq. \ref{eq:kl_loss}. This loss penalizes low entropy, helping the model explore, but it is `undirected' compared to our proposed method. We present the results of Lead-3 baseline (first 3 sentences), and two other competitive models---Refresh\footnote{We are unable to evaluate this model on the lead overlap measure due to lack of access to the model outputs.} \citep{narayan2018don} and NeuSum \citep{neusum}. Lastly, we include results from an \textit{oracle} summarizer, computed as the triplet of source sentences with the highest average of ROUGE-1, -2 and -L scores against the abstractive gold standard. \section{Results and Discussion} \label{sec:result} Table \ref{tab:results} reports the F1 scores for ROUGE-1,-2 and -L \cite{eva1_lin:2004:ACLsummarization}. We use the \emph{pyrouge}\footnote{\url{www.github.com/bheinzerling/pyrouge}} wrapper library to evaluate the final models, while training with a faster Python-only implementation\footnote{\url{www.github.com/Diego999/py-rouge}}. \begin{table}[t] \centering \small \begin{tabular}{l|ccc|c} \toprule Model & \multicolumn{3}{c}{ROUGE} & Overlp \\ & 1 & 2 & L & \% \\ \hline Lead-3 & 40.06 & 17.53 & 36.18 & 100.0 \\ Oracle & 56.53 & 32.65 & 53.12 & 27.24 \\ Refresh & 40.0 & 18.2 & 36.6 & -- \\ NeuSum & 40.15 & 17.80 & 36.63 & 58.24 \\\hline RNES & 41.15 & 18.81 & 37.75 & 68.44 \\ RNES+pretrain & 41.29 & 18.85 & 37.79 & 68.22 \\ \hline BanditSum & 41.68 & 18.78 & 38.00 & 69.87 \\ B.Sum+pretrain & 41.68 & 18.79 & 37.99 & 70.77 \\ B.Sum+entropy & 41.71 & 18.87 & 38.04 & 64.83 \\ BanditSum+KL & \textbf{41.81*} & \textbf{18.96*} & \textbf{38.16*} & 65.13 \\ \bottomrule \end{tabular} \caption{ROUGE scores for systems. `Overlp' denotes the model's overlap in extraction choices with the lead-3 baseline. Scores significantly higher than BanditSum with $p<0.001$ (bootstrap resampling test) are marked with *.} \label{tab:results} \end{table} We test for significance between the baseline models and our proposed techniques using the bootstrap method. This method was first recommended for testing significance in ROUGE scores by \citet{eva1_lin:2004:ACLsummarization}, and has subsequently been advocated as an appropriate measure in works such as \citet{dror-hitchhikers} and \citet{berg-kirkpatrick}. The simple entropy regularizer has a small but not significant improvement, and pretraining has a similar improvement only for RNES. But the auxiliary ROUGE loss significantly ($p<0.001$) improves over BanditSum, obtaining an extra 0.15 ROUGE points on average. The last column reports the percentage of summary sentences which overlap with the lead. The auxiliary loss leads to a 4.7\% absolute decrease in such selections compared to the base system, while also reaching a better ROUGE score. Figure \ref{fig:train_curves} shows that the reward (average ROUGE-1,-2,-L) for the auxiliary loss model is consistently above the base. \begin{figure} \centering \includegraphics[width=\linewidth]{images/final_dev_fig.png} \caption{Training curves for BanditSum based models. Average ROUGE is the average of ROUGE-1, -2 and -L F1.} \label{fig:train_curves} \vspace{2mm} \end{figure} We also examined the auxiliary loss model on documents where the summary is mostly comprised of lead sentences $D_{\mathrm{early}}${}, mostly sentences much later in the article $D_{\mathrm{late}}${}, and a dataset at the midway point, $D_{\mathrm{med}}${}. To create these sets, we rank test articles using the average index of its summary sentences in the source document. The 100 test articles with lowest average index are $D_{\mathrm{early}}${}, the 100 with highest value are $D_{\mathrm{late}}${} and the 100 closest to the median are $D_{\mathrm{med}}${}. In Table~\ref{d_early}, we can see that the auxiliary loss model's improvements are even more amplified on $D_{\mathrm{med}}${} and $D_{\mathrm{late}}${}. On the other hand, our pretraining results are mixed. We hope to employ more controlled multi-tasking methods \citep{kiperwasser2018scheduled} in the future to deal with the issue. The second line in Table \ref{d_early} reports the oracle ROUGE scores of the best possible extractive summary. While all systems are quite close to the oracle on $D_{\mathrm{early}}${} they only reach half the performance on $D_{\mathrm{late}}${}. This gap indicates that our improvements only scratch the surface, but also that this problem is worthy and challenging to explore. It is worth noting that we have attempted to build a single model which can summarize both lead-biased articles and those whose information is spread throughout. Our aim was to encourage the model to explore useful regions as a way of learning better document semantics. But we hypothesize that our models can be further improved by learning to automatically predict when the lead paragraph suffices as a summary, and when the model should look further in the document. \begin{table} \centering \small \begin{tabular}{l|l|l|l} \toprule Model & $D_{\mathrm{early}}$ & $D_{\mathrm{med}}$ & $D_{\mathrm{late}}$ \\ \hline Lead-3 & 46.17 & 30.90 & 20.18 \\ Oracle & 50.52 & 47.92 & 42.21 \\ RNES & 41.76 &32.11 &20.62 \\ RNES+pretrain & 41.66 & 32.38 & 20.64 \\ \hline BanditSum & 43.10 & 32.65 & 21.63 \\ BanditSum+entropy & 41.96 & 32.59 & 22.12 \\ BanditSum+KL & 42.63 & 33.05 & 21.96 \\ \bottomrule \end{tabular} \caption{Average ROUGE-1, -2 and -L F1 scores on $D_{\mathrm{early}}$, and $D_{\mathrm{med}}$, $D_{\mathrm{late}}$. Each set contains 100 documents.} \label{d_early} \end{table} \vspace{2mm} \section{Conclusion} In this paper, we have presented the first approaches for learning a summarization system by countering the strong effect of summary-worthy lead sentences. We demonstrate that recent summarization systems over-exploit the inherent lead bias present in news articles, to the detriment of their summarization capabilities. We explore two techniques aimed at learning to better balance positional cues with semantic ones. While our auxiliary loss method achieves significant improvement, we note that there is a large gap which better methods can hope to bridge in the future. One approach, building on ours, is to examine other ways to combine loss signals \cite{maml}, and to encourage exploration \cite{sac}. We will also carry out deeper study of the properties of $D_{\mathrm{early}}${} and $D_{\mathrm{late}}${} type documents and use them to inform new solutions. On cursory analysis, the most frequent terms in $D_{\mathrm{early}}${} tend to be about UK politics, while in $D_{\mathrm{late}}${} they are often related to British soccer. \section*{Acknowledgments} This work is supported by the Natural Sciences and Engineering Research Council of Canada, the Institute for Data Valorisation (IVADO), Compute Canada, and the CIFAR Canada AI Chair program.
{ "attr-fineweb-edu": 1.84082, "attr-cc_en_topic": 0, "domain": "arxiv" }
BkiUdB85qhDACtwISNBu
\section{Introduction}\label{intro} Hockey analysts have developed several metrics that attempt to quantify an NHL player's contribution to his team. Tom Awad's Goals Versus Threshold in \cite{awad}, Jim Corsi's Corsi rating as described in \cite{boersma}, Gabriel Desjardins' Behindthenet Rating, along with his on-ice/off-ice, strength of opponents, and strength of linemates statistics in \cite{gabe}, Iian Fyffe's Point Allocation in \cite{fyffe}, Ken Krzywicki's shot quality, as presented in \cite{ken1} and updated in \cite{ken2}, Alan Ryder's Player Contribution in \cite{ryder}, and Timo Seppa's Even-Strength Total Rating in \cite{seppa} are a few examples. In this paper, we propose a new metric, adjusted plus-minus ($APM$), that attempts to estimate a player's contribution to his team in even-strength (5-on-5, 4-on-4, and 3-on-3) situations, independent of that player's teammates and opponents, and in the units of goals per season. $APM$ can also be expressed in terms of goals per 60 minutes ($APM/60$). We find both an adjusted offensive plus-minus component ($OPM$) and an adjusted defensive plus-minus component ($DPM$), which estimate the offensive and defensive production of a player at even-strength, independent of teammates and opponents, and in the units of goals per season. Inspired by the work in basketball by \cite{rosenbaum}, \cite{ilardibarzilai}, and \cite{eli}, we use weighted multiple linear regression models to estimate $OPM$ per 60 minutes ($OPM/60$) and $DPM$ per 60 minutes ($DPM/60$). The estimates are a measure of the offensive and defensive production of a player in the units of goals per 60 minutes. These statistics, along with average minutes played per season, give us $OPM$ and $DPM$. Adding $OPM/60$ and $DPM/60$ gives $APM/60$, and adding $OPM$ and $DPM$ gives $APM$. We emphasize that we consider only even-strength situations. The main benefit of the weighted linear regression model is that the resulting adjusted plus-minus statistics for each player should in theory be independent of that player's teammates and opponents. The traditional plus-minus statistic in hockey is highly dependent on a player's teammates and opponents, and the use of the regression removes this dependence. One drawback of our model is statistical noise. In order to improve the estimates and reduce the standard errors in the estimates, we use data from three NHL seasons, and we combine the results of two different models, one inspired by \cite{ilardibarzilai}, and the other by \cite{rosenbaum}. \subsection{Example of the Results} Before we describe the models in detail, we give the reader an example of the results. The typical NHL fan has some idea of who the best offensive players in the league are, so we give the top 10 players in average $OPM$ during the 2007-08, 2008-09, and 2009-10 seasons, sorted by $OPM$, in Table \ref{opmp}. \begin{table}[h!] \begin{center} \caption{ Top 10 Players in OPM } \label{opmp} {\small \begin{tabular}{llrrrrrrrrr} \addlinespace[.3em] \toprule Rk & Player & Pos & OPM & OErr & DPM & APM & Mins & GF60 & OPM60 & GF \\ \midrule 1 & Pavel Datsyuk & C & 15.4 & 3.4 & 6.2 & 21.6 & 1186 & 3.39 & 0.777 & 67 \\ 2 & Alex Ovechkin & LW & 15.2 & 3.8 & 0.2 & 15.4 & 1262 & 3.69 & 0.723 & 78 \\ 3 & Sidney Crosby & C & 14.4 & 2.6 & $-$0.9 & 13.5 & 1059 & 3.59 & 0.818 & 63 \\ 4 & Henrik Sedin & C & 14.0 & 4.5 & $-$5.7 & 8.3 & 1169 & 3.35 & 0.718 & 65 \\ 5 & Evgeni Malkin & C & 13.2 & 2.7 & $-$3.0 & 10.2 & 1164 & 3.37 & 0.681 & 65 \\ 6 & Zach Parise & LW & 12.6 & 3.2 & 3.0 & 15.6 & 1164 & 2.94 & 0.652 & 57 \\ 7 & Joe Thornton & C & 12.0 & 3.2 & 3.6 & 15.6 & 1222 & 3.13 & 0.590 & 64 \\ 8 & Eric Staal & C & 11.5 & 3.1 & $-$0.5 & 11.0 & 1159 & 3.00 & 0.594 & 58 \\ 9 & Ilya Kovalchuk & LW & 10.9 & 2.8 & $-$4.2 & 6.7 & 1189 & 2.93 & 0.551 & 58 \\ 10 & Marian Gaborik & RW & 10.2 & 2.2 & 4.3 & 14.5 & 853 & 3.28 & 0.715 & 47 \\ \bottomrule \end{tabular} } \end{center} \end{table} Note that $Rk$ is the rank of that player in terms of $OPM$, $Pos$ is the player's position, $OErr$ is the standard error in the $OPM$ estimates, $Mins$ is the number of minutes that the player played on average during the 2007-08, 2008-09, and 2009-10 seasons, $GF60$ are the goals per 60 minutes that a player's team scored while he was the ice at even-strength, and $GF$ are the goals per season that a player's team scored while he was the ice at even-strength. The 10 players in this list are arguably the best offensive players in the game. Ovechkin, Crosby, Datsyuk, and Malkin, perhaps the league's most recognizable superstars, make the top 5 along with Henrik Sedin, who led the NHL in even-strength points during the 2009-2010 season with 83 points, which was 10 more points than the next leading scorer. We highlight two interesting numbers in this list. First, note that Pavel Datsyuk, who is regarded by many as the best two-way player in the game, has the highest defensive rating among these top offensive players. Datsyuk's excellent two-way play gives him the highest $APM$ estimate among forwards and defensemen. We give the list of top 10 forwards and defensemen in $APM$, and discuss several other top 10 lists for $OPM/60, OPM, DPM/60, DPM, APM/60,$ and $APM$, in Section \ref{resultssummary}. Second, note that Henrik Sedin has a much higher $OErr$ than the other players in the list. This increased error is likely due to the fact that Henrik plays most of his minutes with his brother Daniel, and the model has a difficult time separating the contributions of the twin brothers. The Sedin twins provide us with a great example to use when analyzing the errors, and we discuss the Sedins and the errors in more detail in Section \ref{errors}. \subsection{Complete Results} A \verb .csv file containing the complete results can be obtained by contacting the author. An interested reader may prefer to open these results in a spreadsheet program and filter by position or sort by a particular statistic. Also, the \verb .csv file contains more columns than the list given in Table \ref{opmp}. For example, the file includes the three most frequent linemates of each player along with the percentage of minutes played with each of those linemates during the 2007-08, 2008-09, and 2009-10 NHL regular seasons. An example of these additional columns is given in Table \ref{linemateexample}. \begin{table}[h!] \begin{center} \caption{Example of Linemate Details} \label{linemateexample} {\small \begin{tabular}{lrrrrrrrr} \addlinespace[.3em] \toprule Player & Pos & Mins & Teammate.1 & min1 & Teammate.2 & min2 & Teammate.3 & min3 \\ \midrule Henrik Sedin & C & 1169 & D.Sedin & 83\% & R.Luongo & 76\% & A.Edler & 35\% \\ Daniel Sedin & LW & 1057 & H.Sedin & 92\% & R.Luongo & 77\% & A.Edler & 35\% \\ \bottomrule \end{tabular} } \end{center} \end{table} \noindent Notice that, as suggested in the table, Henrik Sedin played 83\% of his minutes with brother Daniel, and Daniel played 92\% of his minutes with Henrik. Finally, the file includes columns for the goals a player's team scored ($GF$), the goals a player's team allowed ($GA$), and the net goals a player's team scored ($NG$), while he was on the ice at even-strength. These statistics are in terms of average goals per season during the 2007-08, 2008-09, and 2009-10 NHL regular seasons. We also give $GF/60, GA/60,$ and $NG/60$, which are $GF, GA,$ and $NG$ in terms of goals per 60 minutes. An example of this information is given in Table \ref{goalsexample}. \begin{table}[h!] \begin{center} \caption{Example of GF, GA, and NG statistics} \label{goalsexample} {\small \begin{tabular}{lrrrrrrr} \addlinespace[.3em] \toprule Player & Pos & GF60 & GA60 & NG60 & GF & GA & NG \\ \midrule Sidney Crosby & C & 3.59 & 2.55 & 1.04 & 63 & 45 & 18 \\ Pavel Datsyuk & C & 3.39 & 1.84 & 1.55 & 67 & 36 & 31 \\ \bottomrule \end{tabular} } \end{center} \end{table} These raw statistics, along with the linemate information, will be helpful in the analysis of the results of our model. The rest of this paper is organized as follows. In Section \ref{models}, we describe the two models we use to compute $OPM$, $DPM$, and $APM$. In Section \ref{resultssummary}, we summarize and discuss the results of these models by giving various top 10 lists, indicating the best forwards, defensemen, and goalies according to $OPM$, $DPM$, and $APM$, as well as their corresponding per 60 minute statistics. Section \ref{discussion} contains a discussion of the model. We summarize and discuss the advantages \eqref{adv} and disadvantages \eqref{disadv} of these statistics. Next, we give more details about the formation of the model, including the selection of the variables (\ref{variables}), and selection of the observations (\ref{observations}). Also, we discuss our assumptions (\ref{assumptions}) as well as the standard errors (\ref{errors}) in the estimates. We finish with ideas for future work and some conclusions \eqref{futurework}. \section{Two weighted least-squares models}\label{models} We now define our variables and state our models. In each model, we use players who have played a minimum of 4000 shifts over the course of the 2007-08, 2008-09, and 2009-10 seasons (see Section \ref{discussion} for a discussion). We define a shift to be a period of time during the game when no substitutions are made. The observations in each model are weighted by the duration of that observation in seconds. \subsection{Ilardi-Barzilai-type model}\label{model1} Inspired by \cite{ilardibarzilai}, we use the following linear model: \begin{align}\label{model1eq} y = \beta_0 + \beta_1 X_1 + \cdots + \beta_J X_J &+ \delta_1 D_1 + \cdots + \delta_J D_J + \gamma_1 G_1 + \cdots + \gamma_K G_K + \epsilon, \end{align} \noindent where $J$ is the number of skaters in the league, and $K$ is the number of goalies in the league. The variables in the model are defined as follows: \begin{align*} y &= \text{goals per 60 minutes during an observation} \\ X_j &= \left\{ \begin{array}{ll} 1, & \hbox{ skater $j$ is on offense during the observation;} \\ 0, & \hbox{ skater $j$ is not playing or is on defense during the observation;} \end{array} \right. \\ D_j &= \left\{ \begin{array}{ll} 1, & \hbox{ skater $j$ is on defense during the observation;} \\ 0, & \hbox{ skater $j$ is not playing or is on offense during the observation;} \end{array} \right. \\ G_k &= \left\{ \begin{array}{ll} 1, & \hbox{ goalie $k$ is on defense during the observation;} \\ 0, & \hbox{ goalie $k$ is not playing or is on offense during the observation;} \end{array} \right. \end{align*} where $1 \leq j \leq J,$ and $1\leq k \leq K$. Note that by ``skater" we mean a forward or a defensemen, but not a goalie. The coefficients in the model have the following interpretation: \begin{align}\label{coeffs} \beta_j &= \text{goals per 60 minutes contributed by skater $j$ on offense,} \notag\\ -\delta_j &= \text{goals per 60 minutes contributed by skater $j$ on defense,} \notag\\ -\gamma_k &= \text{goals per 60 minutes contributed by goalie $k$ on defense,} \\ \beta_0 &= \text{intercept,} \notag\\ \epsilon &= \text{error.}\notag \end{align} The coefficient $\beta_1$, for example, gives an estimate, in goals per 60 minutes, of how $y$ changes when Skater 1 is on the ice on offense ($X_1=1$) versus when Skater 1 is not on the ice on offense ($X_1=0$), independent of all other players on the ice. The coefficients $\beta_j, -\delta_j,$ and $-\gamma_k$ are estimates of $OPM/60$ for Skater $j$, $DPM/60$ for Skater $j$, and $DPM/60$ for Goalie $k$, respectively. They are playing-time-independent rate statistics, measuring the offensive and defensive value of a player in goals per 60 minutes. Notice the negative sign in front of $\delta_j$ and $\gamma_k$ in \eqref{coeffs}. Note that a negative value for one of these coefficients corresponds to a positive contribution. For example, if Skater 1 has a defensive coefficient of $\delta_1 = -0.8$, he prevents $0.8$ goals per 60 minutes when he is on defense. We could have chosen to define a skater's $DPM/60$ to be $+\delta_j$, in which case negative values for $DPM/60$ would be good. Instead, we prefer that positive contributions be represented by a positive number, so we define $DPM/60 = -\delta_j$ for skaters. Likewise, we define $DPM/60 = -\gamma_k$ for goalies. For Skater 1's $DPM/60$ in our example, we have $$ DPM/60 = -\delta_1 = -(-0.8) = +0.8, $$ which means that Player 1 has a positive contribution of +0.8 goals per 60 minutes on defense. Note that for the observations in this model, each shift is split into two lines of data: one line corresponding to the home team being on offense, and one line corresponding to the away team being on offense. It is assumed that in hockey, unlike in other sports, a team plays offense and defense concurrently, and the two observations for each shift are given equal weight. Also, note that we include separate defensive variables for goalies, but no offensive variables. Here we are assuming that goalies do not contribute on offense. See Section \ref{assumptions} for a discussion of these assumptions. Finally, we note that the data used for the model were obtained from the shift charts published in \cite{nhlcom} for games played in the 2007-08, 2008-09, and 2009-10 regular seasons. See Section \ref{observations} for more about the data used and to see how it was selected. \subsection{Calculating $OPM$, $DPM$, and $APM$}\label{apm} A player's contribution in terms of goals over an entire season is useful as well, and may be preferred by some NHL fans and analysts. We use the regression coefficients and minutes played to give playing-time-dependent counting statistic versions of the rate statistics from the regression model. These counting statistics are $OPM$, $DPM$, and $APM$, and they measure the offensive, defensive, and total value of a player, in goals per season. To get a skater's $OPM$, for example, we multiply a skater's offensive contribution per minute by the average number of minutes that the skater played per season from 2007-2010. The value for $DPM$ is found likewise, and $APM$ for a player is the sum of his $OPM$ and his $DPM$. Goalies have no $OPM$, so a goalie's $APM$ is simply his $DPM$. Let \begin{align*} MinO_j &= \text{minutes per season on offense for skater $j$,}\\ MinD_j &= \text{minutes per season on defense for skater $j$, and}\\ MinG_k &= \text{minutes per season on defense for goalie $k$.} \end{align*} Then, we can calculate $OPM, DPM$, and $APM$ for skaters and goalies as follows: \begin{align}\label{apmformulas} \hskip 1.25in OPM_j &= \,\,\,\,\,\,\, \beta_j \, MinO_j/60, \notag\\ \hskip 1.25in DPM_j &= -\delta_j \, MinD_j/60 &\text{ (for skaters), \hskip 1in }\notag\\ \hskip 1.25in DPM_k &= -\gamma_k \, MinG_k/60 &\text{ (for goalies), \hskip 1in } \\ \hskip 1.25in APM_j &= OPM_j + DPM_j &\text{ (for skaters), \hskip 1in }\notag\\ \hskip 1.25in APM_k &= DPM_k &\text{ (for goalies). \hskip 1in }\notag \end{align} In order to estimate $Err$, the standard errors for the $APM$ estimates, we assume that $OPM$ and $DPM$ are uncorrelated, and we have \begin{align*} {Err} = \sqrt{Var(APM)} = \sqrt{({OErr})^2 + ({DErr})^2}. \end{align*} where $OErr$ and $DErr$ are the standard errors in the $OPM$ and $DPM$ estimates, respectively, and $Var$ is variance. The assumption that offensive and defensive contributions of a player are uncorrelated is debatable. See, for example, \cite{corey}. \subsection{Rosenbaum-type model}\label{model2} In an effort to improve the estimates and their errors, we use a second linear model, this one inspired by \cite{rosenbaum}: \begin{align}\label{model2eq} y_{net} = \eta_0 + \eta_1 N_1 + \cdots + \eta_{J+K} N_{J+K} + \epsilon. \end{align} The variables in the model are defined as follows: \begin{align*} y_{net} &= \text{net goals per 60 minutes for the home team during an observation} \\ N_j &= \left\{ \begin{array}{ll} \,\,\,\,\,\,\, 1, & \hbox{ player $j$ is on the home team during the observation;} \\ -1, & \hbox{ player $j$ is on the away team during the observation;} \\ \,\,\,\,\,\,\, 0, & \hbox{ player $j$ is not playing during the observation.} \end{array} \right. \end{align*} The coefficients in the model have the following interpretation: \begin{align*} \eta_j &= \text{net goals per 60 minutes contributed by player $j,$} \\ \eta_0 &= \text{intercept}, \\ \epsilon &= \text{error}. \end{align*} The coefficients $\eta_1, \ldots, \eta_J$ of this model are estimates of each player's $APM/60$. By ``net goals" we mean the home team's Goals For ($GF$) minus the home team's Goals Against ($GA$). Also, note that by ``player" we mean forward, defensemen, or goalie. The data used for these variables were also obtained from the shift charts published on NHL.com for games played in the 2007-08, 2008-09, and 2009-10 regular seasons. In this model, unlike the Ilardi-Barzilai model, each observation in this model is simply one shift. We do not split each shift into two lines of data. In order to separate offense and defense, we follow Rosenbaum and form a second model: \begin{align}\label{model3eq} y_{tot} = \tau_0 + \tau_1 T_1 + \cdots + \tau_{J+K} T_{J+K} + \epsilon. \end{align} The variables in the model are defined as follows: \begin{align*} y_{tot} &= \text{total goals per 60 minutes scored by both teams during an observation} \\ T_j &= \left\{ \begin{array}{ll} 1, & \hbox{ player $j$ is on the ice (home or away) during the observation;} \\ 0, & \hbox{ player $j$ is not on the ice during the observation.} \end{array} \right. \end{align*} The coefficients in the model have the following interpretation: \begin{align*} \tau_j &= \text{total goals per 60 minutes contributed by skater $j$}, \\ \tau_0 &= \text{intercept}, \\ \epsilon &= \text{error}. \end{align*} By total goals, we mean $GF+GA$. Recall that the coefficients in \eqref{model2eq} were estimates of each player's $APM/60$, or net goals contributed per 60 minutes. Likewise, the coefficients in \eqref{model3eq} are estimates of each player's $TPM/60$, or total goals contributed per 60 minutes. In \eqref{apmformulas}, we used playing time to convert $APM/60$ to $APM$, and likewise, we can convert $TPM/60$ to $TPM.$ We know from before that \begin{align}\label{apmformula} APM/60 = OPM/60 + DPM/60, \end{align} and we also have that \begin{align} \label{tpmformula} {TPM/60} = {OPM/60 } - { DPM/60}. \end{align} Using equations \eqref{apmformula} and \eqref{tpmformula}, if we add a player's $TPM/60$ and $APM/60$, and divide by 2, the result is that player's $OPM/60$: \begin{align*} &\frac{1}{2}(APM/60 + TPM/60) = OPM/60. \end{align*} Likewise, \begin{align*} \frac{1}{2} (APM/60 - TPM/60) = DPM/60. \end{align*} Using playing time, we can convert $OPM/60$, $DPM/60$, and $APM/60$ to $OPM$, $DPM$, and $APM$ as we did with our first model in Section \ref{model1}. Note that in this model, unlike the model in Section \ref{model1}, all players are treated the same, which means that the model gives offensive estimates for goalies and skaters alike. While a goalie can impact a team's offensive production, we typically do not use these offensive estimates for goalies. Goalies and offense are discussed more in Section \ref{goalies}. \subsection{Averaging results from the two models} The estimates obtained from the models in Section \ref{model1} and \ref{model2} can be averaged, and the resulting estimates will have smaller standard errors than the individual estimates from either of the two models. Let $OPM^{ib}_j$, $DPM^{ib}_j$, and $APM^{ib}_j$ be the $OPM$, $DPM$, and $APM$ results for player $j$ from the Ilardi-Barzilai-type model (Section \ref{model1}), and likewise let $OPM^r_j$, $DPM^r_j$, and $APM^r_j$ be the corresponding results from the Rosenbaum-type model (Section \ref{model2}). We average the results from our two models to arrive at our final metrics $OPM$, $DPM$, and $APM$: \begin{align*} OPM_j &= \frac{1}{2} (OPM^{ib}_j + OPM^r_j), \\ DPM_j &= \frac{1}{2} (DPM^{ib}_j + DPM^r_j), \\ APM_j &= \frac{1}{2} (APM^{ib}_j + APM^r_j). \end{align*} Each model has its advantages and disadvantages, so we have chosen to weight the results from the two models equally. See Section \ref{goalies} for a discussion of the benefits and drawbacks of each model. Assuming the errors are uncorrelated we can estimate them as follows: \begin{align*} OErr_j &= \frac{1}{2} \sqrt{(OErr^{ib}_j)^2 + (OErr^r_j)^2 }, \\ DErr_j &= \frac{1}{2} \sqrt{(DErr^{ib}_j)^2 + (DErr^r_j)^2 }, \\ Err_j &= \frac{1}{2} \sqrt{( Err^{ib}_j)^2 + ( Err^r_j)^2 }. \end{align*} Note that the errors $OErr_j$ are smaller than the errors $OErr_j^{r}$ and $OErr_j^{ib}$. Likewise, $DErr_j$ and $Err_j$ are smaller than each of the components used to compute them. \section{Summary of Results}\label{resultssummary} In this section we will summarize the results of the model by giving various top 10 lists, indicating the best offensive, defensive, and overall players in the league according to the estimates found in the model. \subsection{$OPM/60$} Recall that $OPM/60$ is a measure of the offensive contribution of a player at even-strength in terms of goals per 60 minutes of playing time. Recall also that we assume that goalies do not contributed on offense, so we list only forwards and defensemen in this section. \begin{table}[h!] \begin{center} \caption{ Top 10 Players in OPM60 } \label{opm60p} {\small \begin{tabular}{llrrrrrrrrr} \addlinespace[.3em] \toprule Rk & Player & Pos & OPM60 & OErr & DPM60 & APM60 & Mins & GF60 & OPM & GF \\ \midrule 1 & Sidney Crosby & C & 0.818 & 0.148 & $-$0.052 & 0.766 & 1059 & 3.59 & 14.4 & 63 \\ 2 & Pavel Datsyuk & C & 0.777 & 0.174 & 0.314 & 1.091 & 1186 & 3.39 & 15.4 & 67 \\ 3 & Alex Radulov & RW & 0.758 & 0.222 & $-$0.248 & 0.510 & 343 & 3.67 & 4.3 & 21 \\ 4 & Alex Ovechkin & LW & 0.723 & 0.178 & 0.010 & 0.733 & 1262 & 3.69 & 15.2 & 78 \\ 5 & Henrik Sedin & C & 0.718 & 0.231 & $-$0.294 & 0.424 & 1169 & 3.35 & 14.0 & 65 \\ 6 & Marian Gaborik & RW & 0.715 & 0.155 & 0.303 & 1.018 & 853 & 3.28 & 10.2 & 47 \\ 7 & Evgeni Malkin & C & 0.681 & 0.141 & $-$0.156 & 0.525 & 1164 & 3.37 & 13.2 & 65 \\ 8 & Zach Parise & LW & 0.652 & 0.166 & 0.155 & 0.807 & 1164 & 2.94 & 12.6 & 57 \\ 9 & Jakub Voracek & RW & 0.642 & 0.186 & $-$0.045 & 0.597 & 621 & 3.03 & 6.6 & 31 \\ 10 & C. Gunnarsson & D & 0.608 & 0.245 & 0.233 & 0.841 & 240 & 2.91 & 2.4 & 12 \\ \bottomrule \end{tabular} } \end{center} \end{table} The list of top players in $OPM/60$ is given in Table \ref{opm60p}. The players in this list are are regarded by many as being among the best offensive players in the game, with the exception of a few players with low minutes played and higher errors: Alexander Radulov, Jakub Voracek, and Carl Gunnarsson. Those players have far fewer minutes than the other players, and their estimates are less reliable. Interestingly, Henrik Sedin actually has the second highest standard error in the list, even higher than Radulov and Voracek, despite the fact that he has much higher minutes played totals. This is likely due to the fact that he spends most of his time playing with his brother Daniel (see Section \ref{errors}). \begin{figure}[h!] \centering \caption[Kernel Density Estimation for $OPM/60$ and $OPM$] {Kernel Density Estimation for $OPM/60$ Estimates and $OPM$ Estimates.}\label{opmfig} \includegraphics[width=.9\textwidth]{opmfig} \end{figure} Forwards dominated the list in Table \ref{opm60p}. That forwards are more prevalent than defensemen on this list is not unexpected, as one would probably assume that forwards contribute to offense more than defensemen do. We can see this trend more clearly by plotting the kernel density estimation for $OPM/60$ for both forwards and defensemen. This plot gives us an approximation of the histogram of our $OPM/60$ estimates for forwards and defensemen. See Figure \ref{opmfig}. The curve for forwards lies to the right of the curve for defensemen, suggesting that the $OPM/60$ estimates for forwards are generally higher than the $OPM/60$ estimates for defensemen. Since forwards dominated the list of top players in $OPM/60$, we give the top defensemen in $OPM/60$ in Table \ref{opm60d}. \begin{table}[h!] \begin{center} \caption{ Top 10 Defensemen in OPM60 (minimum 700 minutes) } \label{opm60d} {\small \begin{tabular}{llrrrrrrrrr} \addlinespace[.3em] \toprule Rk & Player & Pos & OPM60 & OErr & DPM60 & APM60 & Mins & GF60 & OPM & GF \\ \midrule 10 & C. Gunnarsson & D & 0.608 & 0.245 & 0.233 & 0.841 & 240 & 2.91 & 2.4 & 12 \\ 46 & Ville Koistinen & D & 0.424 & 0.210 & 0.082 & 0.506 & 380 & 2.89 & 2.7 & 18 \\ 71 & Andrei Markov & D & 0.370 & 0.162 & 0.036 & 0.405 & 1114 & 2.69 & 6.9 & 50 \\ 79 & Mike Green & D & 0.357 & 0.146 & 0.195 & 0.552 & 1334 & 3.30 & 7.9 & 73 \\ 82 & Mark Streit & D & 0.353 & 0.136 & $-$0.060 & 0.293 & 1199 & 2.60 & 7.1 & 52 \\ 95 & Johnny Oduya & D & 0.342 & 0.143 & 0.116 & 0.458 & 1209 & 2.78 & 6.9 & 56 \\ 112 & S. Robidas & D & 0.313 & 0.147 & 0.115 & 0.428 & 1316 & 2.60 & 6.9 & 57 \\ 113 & Ian White & D & 0.311 & 0.127 & 0.028 & 0.338 & 1343 & 2.73 & 6.9 & 61 \\ 119 & S. Brookbank & D & 0.302 & 0.164 & 0.163 & 0.465 & 640 & 2.44 & 3.2 & 26 \\ 122 & Bret Hedican & D & 0.297 & 0.166 & $-$0.162 & 0.135 & 594 & 2.96 & 2.9 & 29 \\ \bottomrule \end{tabular} } \end{center} \end{table} There are some top offensive defensemen in this list, along with some players with low minutes, high errors, and skeptical ratings. One example is Carl Gunnarsson, who tops this list and is one of the players with low minutes. Gunnarsson did have a decent season offensively in 2009-10, scoring 12 even-strength points in 43 games, while playing just the 6th most minutes per game among defensemen on his team. Projecting his statistics over 82 games, Gunnarsson would have had 23 even-strength points, tying him with Tomas Kaberle for the team lead among defensemen, despite playing less minutes. So we do get an idea of why the model gave him this high estimate. We note that the lower end of the 95\% confidence interval for Gunnarsson's $OPM/60$ is 0.118, suggesting that, at worst, he was still an above average offensive defenseman at even-strength during the limited minutes that he played. \subsection{$OPM$} Recall that $OPM$ is a measure of the offensive contribution of a player at even-strength in terms of goals over an entire season. Once again, we list only forwards and defensemen in this section. The top 10 players in $OPM$ were already given and discussed in the introduction, Table \ref{opmp}. That list was dominated by forwards, a trend that can also be seen in Figure \ref{opmfig}, so we now discuss the top 10 defensemen in $OPM$ given in Table \ref{opmd}. \begin{table}[h!] \begin{center} \caption{ Top 10 Defensemen in OPM } \label{opmd} {\small \begin{tabular}{llrrrrrrrrr} \addlinespace[.3em] \toprule Rk & Player & Pos & OPM & OErr & DPM & APM & Mins & GF60 & OPM60 & GF \\ \midrule 22 & Mike Green & D & 7.9 & 3.2 & 4.3 & 12.3 & 1334 & 3.30 & 0.357 & 73 \\ 31 & Mark Streit & D & 7.1 & 2.7 & $-$1.2 & 5.9 & 1199 & 2.60 & 0.353 & 52 \\ 35 & Andrei Markov & D & 6.9 & 3.0 & 0.7 & 7.5 & 1114 & 2.69 & 0.370 & 50 \\ 37 & Ian White & D & 6.9 & 2.8 & 0.6 & 7.6 & 1343 & 2.73 & 0.311 & 61 \\ 39 & S. Robidas & D & 6.9 & 3.2 & 2.5 & 9.4 & 1316 & 2.60 & 0.313 & 57 \\ 40 & Johnny Oduya & D & 6.9 & 2.9 & 2.3 & 9.2 & 1209 & 2.78 & 0.342 & 56 \\ 44 & Zdeno Chara & D & 6.6 & 3.9 & 2.2 & 8.8 & 1441 & 2.64 & 0.276 & 63 \\ 48 & Dion Phaneuf & D & 6.4 & 3.2 & $-$0.7 & 5.6 & 1443 & 2.76 & 0.265 & 66 \\ 53 & Duncan Keith & D & 6.2 & 4.2 & 7.3 & 13.5 & 1532 & 2.94 & 0.245 & 75 \\ 67 & Dan Boyle & D & 5.6 & 2.7 & $-$1.6 & 4.0 & 1169 & 2.69 & 0.286 & 52 \\ \bottomrule \end{tabular} } \end{center} \end{table} Most of the players in Table \ref{opmd} are among the top offensive defensemen in the league at even-strength. Nicklas Lidstrom is one notable omission. Lidstrom is 11th among defensemen with an $OPM$ of 5.5. Interestingly, the Ilardi-Barzilai model estimates a 3.8 $OPM$ for Lidstrom, while the Rosenbaum-type model, with goalies included on offensive, estimates a 7.3 $OPM$. It seems that including, or not including, goalies on offense has a big effect on Lidstrom's estimate. It turns out that other Detroit Red Wings skaters are affected also. We discuss goalies and offense, and the effect it had on the Detroit Red Wings, as well as the New York Rangers, in Section \ref{goalies}. \subsection{$DPM/60$}\label{dpm60} Recall that $DPM/60$ is a measure of the defensive contribution of a player in terms of goals per 60 minutes of playing time at even-strength. \begin{table}[h!] \begin{center} \caption{ Top 10 Players in DPM60 } \label{dpm60-no-mins} {\small \begin{tabular}{llrrrrrrrrr} \addlinespace[.3em] \toprule Rk & Player & Pos & OPM60 & DPM60 & DErr & APM60 & Mins & GA60 & DPM & GA \\ \midrule 763 & Pekka Rinne & G & NA & 0.845 & 0.232 & 0.845 & 1680 & 2.12 & 23.7 & 59 \\ 717 & Dan Ellis & G & NA & 0.757 & 0.218 & 0.757 & 1509 & 2.32 & 19.0 & 58 \\ 433 & George Parros & RW & 0.035 & 0.576 & 0.220 & 0.611 & 387 & 0.98 & 3.7 & 6 \\ 424 & Derek Dorsett & RW & 0.040 & 0.571 & 0.225 & 0.611 & 317 & 1.45 & 3.0 & 8 \\ 166 & Peter Regin & C & 0.242 & 0.531 & 0.229 & 0.773 & 324 & 1.73 & 2.9 & 9 \\ 688 & Adam Hall & RW & $-$0.336 & 0.528 & 0.211 & 0.192 & 329 & 1.58 & 2.9 & 9 \\ 505 & Paul Martin & D & $-$0.026 & 0.526 & 0.170 & 0.500 & 916 & 1.55 & 8.0 & 24 \\ 336 & Mark Fistric & D & 0.107 & 0.510 & 0.173 & 0.617 & 594 & 1.48 & 5.1 & 15 \\ 456 & Drew Miller & LW & 0.021 & 0.490 & 0.205 & 0.510 & 370 & 1.62 & 3.0 & 10 \\ 156 & Josef Vasicek & C & 0.251 & 0.484 & 0.253 & 0.735 & 343 & 1.69 & 2.8 & 10 \\ \bottomrule \end{tabular} } \end{center} \end{table} Without specifying a minimum minutes played limit, we get two goalies, then several players with low minutes, in the list of top players in $DPM/60$ given in Table \ref{dpm60-no-mins}. In order to remove the players with low minutes from this list, we restrict the list to those players with more than 700 minutes played. The new list is given in Table \ref{dpm60p}. \begin{table}[h!] \begin{center} \caption{ Top 10 Players in DPM60 (minimum 700 minutes) } \label{dpm60p} {\small \begin{tabular}{llrrrrrrrrr} \addlinespace[.3em] \toprule Rk & Player & Pos & OPM60 & DPM60 & DErr & APM60 & Mins & GA60 & DPM & GA \\ \midrule 1 & Pekka Rinne & G & NA & 0.845 & 0.232 & 0.845 & 1680 & 2.12 & 23.7 & 59 \\ 2 & Dan Ellis & G & NA & 0.757 & 0.218 & 0.757 & 1509 & 2.32 & 19.0 & 58 \\ 7 & Paul Martin & D & $-$0.026 & 0.526 & 0.170 & 0.500 & 916 & 1.55 & 8.0 & 24 \\ 12 & Mikko Koivu & C & 0.383 & 0.469 & 0.176 & 0.852 & 1032 & 2.02 & 8.1 & 35 \\ 14 & Chris Mason & G & NA & 0.460 & 0.169 & 0.460 & 2384 & 2.31 & 18.3 & 92 \\ 15 & Ryan Callahan & RW & 0.022 & 0.453 & 0.153 & 0.475 & 878 & 1.78 & 6.6 & 26 \\ 18 & Marco Sturm & LW & 0.169 & 0.439 & 0.179 & 0.608 & 702 & 1.57 & 5.1 & 18 \\ 22 & Jason Pominville & RW & 0.309 & 0.424 & 0.176 & 0.733 & 1052 & 2.30 & 7.4 & 40 \\ 24 & Marty Turco & G & NA & 0.424 & 0.154 & 0.424 & 2787 & 2.27 & 19.7 & 105 \\ 27 & Tomas Plekanec & C & 0.004 & 0.411 & 0.168 & 0.414 & 1053 & 2.13 & 7.2 & 37 \\ \bottomrule \end{tabular} } \end{center} \end{table} In that list we get a mix of goalies, forwards and defensemen. To see if this trend continues outside the top 10, we can again plot a kernel density estimation of $DPM/60$ estimates for forwards, defensemen, and goalies. See Figure \ref{dpmfig}. \begin{figure}[h!] \centering \caption[Kernel Density Estimation for $DPM/60$ and $DPM$] {Kernel Density Estimation for $DPM/60$ Estimates and $DPM$ Estimates.}\label{dpmfig} \includegraphics[width=.9\textwidth]{dpmfig} \end{figure} Forwards, defensemen, and goalies seem to have a fairly similar distribution of $DPM/60$ estimates, though defensemen may be slightly behind forwards and goalies. It may seem counterintuitive that defensemen have lower ratings than forwards. The estimates seem to indicate that forwards contribute more to defense (per 60 minutes of ice time) than defensemen do, but the difference in estimates is so small that it could be simply due to noise. The trends are slightly different with $DPM$, which is playing-time dependent. The top goalies in $DPM/60$ typically play more minutes than forwards and defensemen, and their $DPM$'s are much higher. Top defensemen typically play more minutes than top forwards, so their ratings are helped when playing-time is considered as well. We now discuss the top 10 in $DPM/60$ for skaters, forwards, defensemen, and goalies, separately. \begin{table}[h!] \begin{center} \caption{ Top 10 Skaters in DPM60 (minimum 700 minutes) } \label{dpm60s} {\small \begin{tabular}{llrrrrrrrrr} \addlinespace[.3em] \toprule Rk & Player & Pos & OPM60 & DPM60 & DErr & APM60 & Mins & GA60 & DPM & GA \\ \midrule 7 & Paul Martin & D & $-$0.026 & 0.526 & 0.170 & 0.500 & 916 & 1.55 & 8.0 & 24 \\ 12 & Mikko Koivu & C & 0.383 & 0.469 & 0.176 & 0.852 & 1032 & 2.02 & 8.1 & 35 \\ 15 & Ryan Callahan & RW & 0.022 & 0.453 & 0.153 & 0.475 & 878 & 1.78 & 6.6 & 26 \\ 18 & Marco Sturm & LW & 0.169 & 0.439 & 0.179 & 0.608 & 702 & 1.57 & 5.1 & 18 \\ 22 & Jason Pominville & RW & 0.309 & 0.424 & 0.176 & 0.733 & 1052 & 2.30 & 7.4 & 40 \\ 27 & Tomas Plekanec & C & 0.004 & 0.411 & 0.168 & 0.414 & 1053 & 2.13 & 7.2 & 37 \\ 31 & Willie Mitchell & D & $-$0.122 & 0.404 & 0.143 & 0.281 & 1178 & 1.90 & 7.9 & 37 \\ 37 & Manny Malhotra & C & 0.167 & 0.380 & 0.142 & 0.547 & 922 & 1.82 & 5.8 & 28 \\ 38 & Andrew Greene & D & $-$0.118 & 0.379 & 0.161 & 0.262 & 1001 & 1.70 & 6.3 & 28 \\ 39 & Jan Hejda & D & 0.061 & 0.373 & 0.158 & 0.434 & 1269 & 2.24 & 7.9 & 47 \\ \bottomrule \end{tabular} } \end{center} \end{table} A mix of defensive forwards and defensive defensemen appear in the list of top skaters in Table \ref{dpm60s}. Paul Martin leads this list by a sizeable margin, partly because of his very low $GA/60$ of 1.55. Martin is followed by 5 forwards, including Marco Sturm. Sturm has just above the minimum for minutes played, but since he has a $GA/60$ of 1.57 during his limited playing time, his estimate is believable. \begin{table}[h!] \begin{center} \caption{ Top 10 Forwards in DPM60 (minimum 700 minutes) } \label{dpm60f} {\small \begin{tabular}{llrrrrrrrrr} \addlinespace[.3em] \toprule Rk & Player & Pos & OPM60 & DPM60 & DErr & APM60 & Mins & GA60 & DPM & GA \\ \midrule 12 & Mikko Koivu & C & 0.383 & 0.469 & 0.176 & 0.852 & 1032 & 2.02 & 8.1 & 35 \\ 15 & Ryan Callahan & RW & 0.022 & 0.453 & 0.153 & 0.475 & 878 & 1.78 & 6.6 & 26 \\ 18 & Marco Sturm & LW & 0.169 & 0.439 & 0.179 & 0.608 & 702 & 1.57 & 5.1 & 18 \\ 22 & Jason Pominville & RW & 0.309 & 0.424 & 0.176 & 0.733 & 1052 & 2.30 & 7.4 & 40 \\ 27 & Tomas Plekanec & C & 0.004 & 0.411 & 0.168 & 0.414 & 1053 & 2.13 & 7.2 & 37 \\ 37 & Manny Malhotra & C & 0.167 & 0.380 & 0.142 & 0.547 & 922 & 1.82 & 5.8 & 28 \\ 41 & Tyler Kennedy & C & 0.343 & 0.365 & 0.163 & 0.708 & 730 & 1.75 & 4.4 & 21 \\ 42 & David Krejci & C & 0.281 & 0.361 & 0.189 & 0.642 & 912 & 1.89 & 5.5 & 29 \\ 44 & Travis Moen & LW & $-$0.323 & 0.350 & 0.152 & 0.028 & 969 & 1.84 & 5.7 & 30 \\ 45 & Daniel Sedin & LW & 0.364 & 0.346 & 0.231 & 0.710 & 1057 & 2.03 & 6.1 & 36 \\ \bottomrule \end{tabular} } \end{center} \end{table} In the list of top forwards in $DPM/60$ given in Table \ref{dpm60f}, we get some well-known defensive forwards, and a couple interesting names. Once again a Sedin twin has the highest errors in a list. The model seems to be giving Daniel the credit for the Sedin line's success in $DPM/60$, whereas for $OPM/60$, Henrik gets the credit. Henrik's $DPM/60$ estimate is actually negative ($-0.294$), while his $OPM/60$ estimate (0.718) is much higher than Daniel's $OPM/60$ estimate (0.364). \begin{table}[b!] \begin{center} \caption{ Top 10 Defensemen in DPM60 (minimum 700 minutes) } \label{dpm60d} {\small \begin{tabular}{llrrrrrrrrr} \addlinespace[.3em] \toprule Rk & Player & Pos & OPM60 & DPM60 & DErr & APM60 & Mins & GA60 & DPM & GA \\ \midrule 7 & Paul Martin & D & $-$0.026 & 0.526 & 0.170 & 0.500 & 916 & 1.55 & 8.0 & 24 \\ 31 & Willie Mitchell & D & $-$0.122 & 0.404 & 0.143 & 0.281 & 1178 & 1.90 & 7.9 & 37 \\ 38 & Andrew Greene & D & $-$0.118 & 0.379 & 0.161 & 0.262 & 1001 & 1.70 & 6.3 & 28 \\ 39 & Jan Hejda & D & 0.061 & 0.373 & 0.158 & 0.434 & 1269 & 2.24 & 7.9 & 47 \\ 46 & Mike Weaver & D & $-$0.419 & 0.341 & 0.153 & $-$0.078 & 819 & 1.73 & 4.7 & 24 \\ 62 & Nicklas Lidstrom & D & 0.242 & 0.307 & 0.191 & 0.549 & 1374 & 1.82 & 7.0 & 42 \\ 67 & Tobias Enstrom & D & $-$0.005 & 0.301 & 0.166 & 0.296 & 1319 & 2.70 & 6.6 & 59 \\ 68 & Sean O'donnell & D & 0.027 & 0.298 & 0.133 & 0.325 & 1180 & 1.75 & 5.9 & 34 \\ 69 & Mike Lundin & D & $-$0.035 & 0.297 & 0.157 & 0.263 & 738 & 2.14 & 3.7 & 26 \\ 70 & Marc-E Vlasic & D & 0.030 & 0.296 & 0.150 & 0.326 & 1311 & 1.89 & 6.5 & 41 \\ \bottomrule \end{tabular} } \end{center} \end{table} Tyler Kennedy is part of what many hockey analysts consider to be one of the top defensive lines in hockey, and his $DPM/60$ estimate supports that belief. We note that Jordan Staal's $DPM/60$ estimate is $.207 \pm .151$, and he has a $GA/60$ of 1.95, so we can see one possible reason why the model gave Kennedy the higher estimate of the two linemates. Incidentally, if we weight the observations in the model so that the 2009-2010 season counts more heavily than the other two seasons, Staal makes the list of top 10 forwards in $DPM/60$. This estimate supports his nomination for the 2009-2010 Selke trophy, which is given each season to the top defensive forward in the game. Mike Weaver and Mike Lundin are members of the list of top defensemen in $DPM/60$, which is shown in Table \ref{dpm60d}. Weaver has the second lowest $GA/60$ in this list at 1.73, and his most common teammates, Chris Mason (2.31) and Carlo Colaiacovo (2.41) have a higher $GA/60,$ so we see one reason why the model gave him a high $DPM/60$ rating. Similarly, Lundin's $GA/60$, while not as low as Weaver's, is still fairly low, despite the fact that his most common teammates have a very high $GA/60$ (Mike Smith, 2.44; Vincent Lecavalier, 3.03; Martin St. Louis, 2.97). Also, according to Gabriel Desjardins' Quality of Competition (QualComp) statistic from \cite{gabe}, Lundin had the highest QualComp in 2009-2010 among players with at least 10 games played, indicating that he performed well against strong competition. \begin{table}[h!] \begin{center} \caption{ Top 10 Goalies in DPM60 (minimum 700 minutes) } \label{dpm60g} {\small \begin{tabular}{llrrrrrrrrr} \addlinespace[.3em] \toprule Rk & Player & Pos & OPM60 & DPM60 & DErr & APM60 & Mins & GA60 & DPM & GA \\ \midrule 1 & Pekka Rinne & G & NA & 0.845 & 0.232 & 0.845 & 1680 & 2.12 & 23.7 & 59 \\ 2 & Dan Ellis & G & NA & 0.757 & 0.218 & 0.757 & 1509 & 2.32 & 19.0 & 58 \\ 14 & Chris Mason & G & NA & 0.460 & 0.169 & 0.460 & 2384 & 2.31 & 18.3 & 92 \\ 24 & Marty Turco & G & NA & 0.424 & 0.154 & 0.424 & 2787 & 2.27 & 19.7 & 105 \\ 29 & Erik Ersberg & G & NA & 0.410 & 0.199 & 0.410 & 728 & 2.14 & 5.0 & 26 \\ 30 & H. Lundqvist & G & NA & 0.410 & 0.220 & 0.410 & 3223 & 2.10 & 22.0 & 113 \\ 34 & Jonathan Quick & G & NA & 0.394 & 0.187 & 0.394 & 1781 & 2.15 & 11.7 & 64 \\ 54 & Cam Ward & G & NA & 0.317 & 0.151 & 0.317 & 2655 & 2.23 & 14.0 & 99 \\ 75 & Tuukka Rask & G & NA & 0.293 & 0.259 & 0.293 & 763 & 1.70 & 3.7 & 22 \\ 76 & Ty Conklin & G & NA & 0.293 & 0.178 & 0.293 & 1392 & 2.13 & 6.8 & 49 \\ \bottomrule \end{tabular} } \end{center} \end{table} Interestingly, the top 3 goalies in $DPM/60$, as shown in Table \ref{dpm60g}, have played for the Nashville Predators at some point during the past three seasons. There are a couple possible reasons for this trend. One reason is that the estimates are noisy, so it could simply be a coincidence that those three goalies ended up at the top of this list. The estimates for Rinne and Ellis are significantly higher than those of the other goalies, but several other goalies are within one standard error of the top 3 in $DPM/60$, and a few are within two standard errors of the top spot. Note that the low end of the 95\% confidence intervals of the $DPM/60$ estimates for Rinne and Ellis are still in the top 7 in $DPM/60$, suggesting that, at worst, they were still very good. Even if the goalies' $DPM/60$ estimates were not noisy, $DPM/60$ would still not be the best way to isolate and measure goalie's individual ability. Recall that the interpretation of a goalie's $DPM/60$ is goals per 60 minutes contributed by the goalie on defense, or goals per 60 minutes prevented by the goalie. We could think of $DPM/60$ as measuring the difference between a goalie's goals against average at even strength and the league's goals against average at even strength, while adjusting for the strength of the teammates and opponents of the goalie. A goalie who has a relatively low goals against average at even strength should in general have a relatively high (good) $DPM/60$ estimate. In general, this relationship is true for our results. In Table \ref{dpm60g}, the 10 goalies with the highest $DPM/60$ estimates also have low $GA/60$ statistics. Unfortunately, goals against average is not the best measure of a goalie's ability. The number of goals per 60 minutes allowed by a team depends on not only the goalie's ability at stopping shots on goal, but also the frequency and quality of the shots on goal that his team allows. So goals against average is a measure not just of a goalie's ability, but also of his team's ability at preventing shots on goal. Ideally, the model would be able to correctly determine if the goalie or the team in front of him deserves credit for a low goals against average, but that does not seem to be happening. One reason could be the relatively low number of goalies on each team. Another reason could be that there is some team-level effect not accounted for. If we include team variables in the model, the results are even worse. The team estimates are very noisy (with errors around 0.50), the goalie estimates are even noisier with the team variables than without the team variables, and the model still does not isolate a goalie's ability. Different techniques for measuring a goalie's ability and contribution to his team would be preferred over $DPM/60$. Most methods would likely use different information, including the quality and frequency of the shots on goal that his team allows. See, for example, Ken Krzywicki's shot quality model in \cite{ken1} and \cite{ken2}. \subsection{$DPM$}\label{dpm} Recall that $DPM$ is a measure of the defensive contribution of a player at even-strength in terms of goals over an entire season. We now discuss the top 10 players, skaters, forwards, and defensemen in $DPM$. \begin{table}[h!] \begin{center} \caption{ Top 10 Players in DPM } \label{dpmp} {\small \begin{tabular}{llrrrrrrrrr} \addlinespace[.3em] \toprule Rk & Player & Pos & OPM & DPM & DErr & APM & Mins & GA60 & DPM60 & GA \\ \midrule 1 & Pekka Rinne & G & NA & 23.7 & 6.5 & 23.7 & 1680 & 2.12 & 0.845 & 59 \\ 2 & H. Lundqvist & G & NA & 22.0 & 11.8 & 22.0 & 3223 & 2.10 & 0.410 & 113 \\ 3 & Marty Turco & G & NA & 19.7 & 7.1 & 19.7 & 2787 & 2.27 & 0.424 & 105 \\ 4 & Dan Ellis & G & NA & 19.0 & 5.5 & 19.0 & 1509 & 2.32 & 0.757 & 58 \\ 5 & Chris Mason & G & NA & 18.3 & 6.7 & 18.3 & 2384 & 2.31 & 0.460 & 92 \\ 6 & Ryan Miller & G & NA & 14.2 & 9.6 & 14.2 & 3078 & 2.29 & 0.276 & 118 \\ 7 & Cam Ward & G & NA & 14.0 & 6.7 & 14.0 & 2655 & 2.23 & 0.317 & 99 \\ 8 & Jonathan Quick & G & NA & 11.7 & 5.6 & 11.7 & 1781 & 2.15 & 0.394 & 64 \\ 9 & Tomas Vokoun & G & NA & 10.2 & 8.8 & 10.2 & 2885 & 2.22 & 0.211 & 107 \\ 10 & Ilja Bryzgalov & G & NA & 10.0 & 7.6 & 10.0 & 2942 & 2.17 & 0.203 & 106 \\ \bottomrule \end{tabular} } \end{center} \end{table} The list of top players in $DPM$ given in Table \ref{dpmp} is entirely made up of goalies, which is not unexpected. Many people consider goalie the most important position in hockey, and this list seems to support that claim, at least for the defensive component of the game. While many goalies have a lower $DPM/60$ than many skaters, the comparatively high minutes played for goalies bump many of them to the top of the list in $DPM$. Another consequence of the high minutes played is that the standard errors for goalies are now very high for $DPM$. This fact makes the $DPM$ estimates for goalies less reliable than the $DPM$ estimates for skaters. We reiterate what we discussed at the end of Section \ref{dpm60}: other methods of rating goalies are preferred over $DPM/60$ and $DPM$. \begin{table}[h!] \begin{center} \caption{ Top 10 Skaters in DPM } \label{dpms} {\small \begin{tabular}{llrrrrrrrrr} \addlinespace[.3em] \toprule Rk & Player & Pos & OPM & DPM & DErr & APM & Mins & GA60 & DPM60 & GA \\ \midrule 13 & Mikko Koivu & C & 6.6 & 8.1 & 3.0 & 14.7 & 1032 & 2.02 & 0.469 & 35 \\ 14 & Paul Martin & D & $-$0.4 & 8.0 & 2.6 & 7.6 & 916 & 1.55 & 0.526 & 24 \\ 15 & Jan Hejda & D & 1.3 & 7.9 & 3.3 & 9.2 & 1269 & 2.24 & 0.373 & 47 \\ 16 & Willie Mitchell & D & $-$2.4 & 7.9 & 2.8 & 5.5 & 1178 & 1.90 & 0.404 & 37 \\ 18 & Jay Bouwmeester & D & $-$3.1 & 7.5 & 3.2 & 4.4 & 1532 & 2.17 & 0.292 & 55 \\ 19 & Jason Pominville & RW & 5.4 & 7.4 & 3.1 & 12.9 & 1052 & 2.30 & 0.424 & 40 \\ 20 & Duncan Keith & D & 6.2 & 7.3 & 4.2 & 13.5 & 1532 & 2.21 & 0.284 & 56 \\ 21 & Tomas Plekanec & C & 0.1 & 7.2 & 3.0 & 7.3 & 1053 & 2.13 & 0.411 & 37 \\ 23 & Nicklas Lidstrom & D & 5.5 & 7.0 & 4.4 & 12.6 & 1374 & 1.82 & 0.307 & 42 \\ 26 & Ryan Callahan & RW & 0.3 & 6.6 & 2.2 & 6.9 & 878 & 1.78 & 0.453 & 26 \\ \bottomrule \end{tabular} } \end{center} \end{table} Unlike the list of top skaters in $DPM/60$ (Table \ref{dpm60s}), defensemen are more prevalent than forwards on the list of top skaters in $DPM$ given in Table \ref{dpms}, due to their higher minutes played. Defensemen make up 5 of the first 7, and 9 of the first 13 skaters in $DPM$. Beyond the top 13, the distribution of $DPM$ estimates for forwards are actually very similar (see Figure \ref{dpmfig}). \begin{table}[h!] \begin{center} \caption{ Top 10 Forwards in DPM } \label{dpmf} {\small \begin{tabular}{llrrrrrrrrr} \addlinespace[.3em] \toprule Rk & Player & Pos & OPM & DPM & DErr & APM & Mins & GA60 & DPM60 & GA \\ \midrule 13 & Mikko Koivu & C & 6.6 & 8.1 & 3.0 & 14.7 & 1032 & 2.02 & 0.469 & 35 \\ 19 & Jason Pominville & RW & 5.4 & 7.4 & 3.1 & 12.9 & 1052 & 2.30 & 0.424 & 40 \\ 21 & Tomas Plekanec & C & 0.1 & 7.2 & 3.0 & 7.3 & 1053 & 2.13 & 0.411 & 37 \\ 26 & Ryan Callahan & RW & 0.3 & 6.6 & 2.2 & 6.9 & 878 & 1.78 & 0.453 & 26 \\ 33 & Pavel Datsyuk & C & 15.4 & 6.2 & 3.5 & 21.6 & 1186 & 1.84 & 0.314 & 36 \\ 35 & Daniel Sedin & LW & 6.4 & 6.1 & 4.1 & 12.5 & 1057 & 2.03 & 0.346 & 36 \\ 39 & Manny Malhotra & C & 2.6 & 5.8 & 2.2 & 8.4 & 922 & 1.82 & 0.380 & 28 \\ 41 & D. Langkow & C & $-$0.4 & 5.7 & 2.8 & 5.2 & 1010 & 2.02 & 0.336 & 34 \\ 43 & Travis Moen & LW & $-$5.2 & 5.7 & 2.5 & 0.4 & 969 & 1.84 & 0.350 & 30 \\ 45 & David Krejci & C & 4.3 & 5.5 & 2.9 & 9.8 & 912 & 1.89 & 0.361 & 29 \\ \bottomrule \end{tabular} } \end{center} \end{table} We now look at forwards and defensemen separately. Many of the top forwards in Table \ref{dpmf} are known to be very solid defensive forwards. Mikko Koivu is often praised for his work defensively, and Pavel Datsyuk is a two-time Selke Trophy winner for the best defensive forward in the league. Jason Pominville's ranking is surprising given that his $GA/60$ is the worst among players on this list. Checking his most common linemates, we find Ryan Miller (2.29 $GA/60$), Jochen Hecht (2.64), and Toni Lydman (2.36), whose $GA/60$ are not significantly different than Pominville's. Pominville did lead his team in traditional plus-minus in 2007-2008 (+16), and was tied for second in 2009-2010 (+13), which may have caused the high rating, but he was also a $-4$ in 2008-2009. \begin{table}[h!] \begin{center} \caption{ Top 10 Defensemen in DPM } \label{dpmd} {\small \begin{tabular}{llrrrrrrrrr} \addlinespace[.3em] \toprule Rk & Player & Pos & OPM & DPM & DErr & APM & Mins & GA60 & DPM60 & GA \\ \midrule 14 & Paul Martin & D & $-$0.4 & 8.0 & 2.6 & 7.6 & 916 & 1.55 & 0.526 & 24 \\ 15 & Jan Hejda & D & 1.3 & 7.9 & 3.3 & 9.2 & 1269 & 2.24 & 0.373 & 47 \\ 16 & Willie Mitchell & D & $-$2.4 & 7.9 & 2.8 & 5.5 & 1178 & 1.90 & 0.404 & 37 \\ 18 & Jay Bouwmeester & D & $-$3.1 & 7.5 & 3.2 & 4.4 & 1532 & 2.17 & 0.292 & 55 \\ 20 & Duncan Keith & D & 6.2 & 7.3 & 4.2 & 13.5 & 1532 & 2.21 & 0.284 & 56 \\ 23 & Nicklas Lidstrom & D & 5.5 & 7.0 & 4.4 & 12.6 & 1374 & 1.82 & 0.307 & 42 \\ 27 & Tobias Enstrom & D & $-$0.1 & 6.6 & 3.7 & 6.5 & 1319 & 2.70 & 0.301 & 59 \\ 28 & Marc-E Vlasic & D & 0.7 & 6.5 & 3.3 & 7.1 & 1311 & 1.89 & 0.296 & 41 \\ 30 & Andrew Greene & D & $-$2.0 & 6.3 & 2.7 & 4.4 & 1001 & 1.70 & 0.379 & 28 \\ 36 & Ron Hainsey & D & $-$2.9 & 6.1 & 2.9 & 3.2 & 1264 & 2.50 & 0.289 & 53 \\ \bottomrule \end{tabular} } \end{center} \end{table} Paul Martin, the leader among skaters in $DPM/60$, tops the list of best defensemen in $DPM$ given in Table \ref{dpmd}, despite much lower minutes played than the others. Martin has a nice list of most common teammates (Martin Brodeur, 1.92 $GA/60$; Johnny Oduya, 2.00; Zach Parise, 1.74) but his $GA/60$ is extremely low, which is probably the cause of his low $DPM/60$ and $DPM$ estimates. Tobias Enstrom's $DPM/60$ and $DPM$ estimates are high given that his 2.70 $GA/60$ and 59 $GA$ statistics are the worst in the list. Enstrom's $GA/60$ is not significantly different than the $GA/60$ of his most common linemates, Niclas Havelid (2.87 $GA/60$), Johan Hedberg (2.67), and Kari Lehtonen (2.63). Further down Enstrom's list of common linemates is Ilya Kovalchuk, whose $3.09$ $GA/60$ and $-4.2$ $DPM$ are among the worst in the league. All teammates (and opponents) affect the model's estimates, not just the three most common teammates, so Kovalchuk and some other teammates with low defensive abilities could be increasing Enstrom's defensive estimates. Another Atlanta defensemen, Ron Hainsey, also has a high $DPM$ given his raw statistics. Looking deeper, his 2.50 $GA/60$ is actually second best on his team among players with more than 700 minutes. Our model seems to be saying that Hainsey, like Enstrom, is better than his raw statistics suggest, mostly because of the quality of teammates that he plays with. \subsection{$APM/60$} We now begin to look at the top players in the league in terms of $APM/60$ and $APM$. Recall that $APM/60$ is a measure of the total (offensive and defensive) contribution of a player at even-strength in terms of net goals (goals for minus goals against) per 60 minutes of playing time. \begin{table}[h!] \begin{center} \caption{ Top 10 Players in APM60 (minimum 700 minutes) } \label{apm60p} {\small \begin{tabular}{llrrrrrrrrr} \addlinespace[.3em] \toprule Rk & Player & Pos & OPM60 & DPM60 & APM60 & Err & Mins & NG60 & APM & NG \\ \midrule 1 & Pavel Datsyuk & C & 0.777 & 0.314 & 1.091 & 0.247 & 1186 & 1.55 & 21.6 & 31 \\ 2 & Marian Gaborik & RW & 0.715 & 0.303 & 1.018 & 0.222 & 853 & 1.01 & 14.5 & 15 \\ 3 & Mikko Koivu & C & 0.383 & 0.469 & 0.852 & 0.246 & 1032 & 0.52 & 14.7 & 9 \\ 4 & Pekka Rinne & G & NA & 0.845 & 0.845 & 0.232 & 1680 & NA & 23.7 & NA \\ 7 & Zach Parise & LW & 0.652 & 0.155 & 0.807 & 0.236 & 1164 & 1.20 & 15.6 & 23 \\ 9 & Joe Thornton & C & 0.590 & 0.177 & 0.767 & 0.227 & 1222 & 1.08 & 15.6 & 22 \\ 10 & Sidney Crosby & C & 0.818 & $-$0.052 & 0.766 & 0.212 & 1059 & 1.04 & 13.5 & 18 \\ 11 & Dan Ellis & G & NA & 0.757 & 0.757 & 0.218 & 1509 & NA & 19.0 & NA \\ 12 & Tim Connolly & C & 0.501 & 0.244 & 0.745 & 0.247 & 710 & 0.90 & 8.8 & 11 \\ 15 & Alex Ovechkin & LW & 0.723 & 0.010 & 0.733 & 0.254 & 1262 & 1.42 & 15.4 & 30 \\ \bottomrule \end{tabular} } \end{center} \end{table} Datysuk, considered by many to be the best two-way player in the game, tops the list of best players in $APM/60$ given in Table \ref{apm60p}. Only two goalies make the top 10. We plot the kernel density estimate for $APM/60$ and $APM$ in Figure \ref{apmfig} to get an idea of whether this trend continues outside the top 10. \begin{figure}[h!] \centering \caption[Kernel Density Estimation for APM/60 and $APM$] {Kernel Density Estimation for APM/60 Estimates and $APM$ Estimates.}\label{apmfig} \includegraphics[width=.9\textwidth]{apmfig} \end{figure} Forwards seem to have higher estimates than goalies and defensemen. Note that the picture changes slightly for $APM$, but defensemen still seem to have the lowest estimates in general. Goalies seem to have the widest spread in $APM$, which is expected because of their high minutes played. \begin{table}[h!] \begin{center} \caption{ Top 10 Forwards in APM60 (minimum 700 minutes) } \label{apm60f} {\small \begin{tabular}{llrrrrrrrrr} \addlinespace[.3em] \toprule Rk & Player & Pos & OPM60 & DPM60 & APM60 & Err & Mins & NG60 & APM & NG \\ \midrule 1 & Pavel Datsyuk & C & 0.777 & 0.314 & 1.091 & 0.247 & 1186 & 1.55 & 21.6 & 31 \\ 2 & Marian Gaborik & RW & 0.715 & 0.303 & 1.018 & 0.222 & 853 & 1.01 & 14.5 & 15 \\ 3 & Mikko Koivu & C & 0.383 & 0.469 & 0.852 & 0.246 & 1032 & 0.52 & 14.7 & 9 \\ 7 & Zach Parise & LW & 0.652 & 0.155 & 0.807 & 0.236 & 1164 & 1.20 & 15.6 & 23 \\ 9 & Joe Thornton & C & 0.590 & 0.177 & 0.767 & 0.227 & 1222 & 1.08 & 15.6 & 22 \\ 10 & Sidney Crosby & C & 0.818 & $-$0.052 & 0.766 & 0.212 & 1059 & 1.04 & 13.5 & 18 \\ 12 & Tim Connolly & C & 0.501 & 0.244 & 0.745 & 0.247 & 710 & 0.90 & 8.8 & 11 \\ 15 & Alex Ovechkin & LW & 0.723 & 0.010 & 0.733 & 0.254 & 1262 & 1.42 & 15.4 & 30 \\ 16 & Jason Pominville & RW & 0.309 & 0.424 & 0.733 & 0.249 & 1052 & 0.55 & 12.9 & 10 \\ 17 & Alex Burrows & LW & 0.459 & 0.272 & 0.730 & 0.210 & 1023 & 0.94 & 12.5 & 16 \\ \bottomrule \end{tabular} } \end{center} \end{table} We now discuss the estimates for forwards and defensemen separately, starting with the top forwards in $APM/60$ given in Table \ref{apm60f}. Burrows' case is an interesting one. He did not appear in the top 10 lists for $OPM/60$ or $DPM/60$, but makes the top $APM/60$ list for forwards in Table \ref{apm60f} with balanced offensive and defensive estimates. He has been playing frequently with the Sedin twins this year, so one might think his rating would be difficult to separate from the twins' estimates. However, on average over the last three years, the Sedins are not among Burrows' three most frequent linemates, so his high estimates can not be attributed to statistical noise caused by frequently playing with the twins. Burrows actually has the lowest errors in this list of players, probably because of the varied linemates that he has had over the past three years. \begin{table}[h!] \begin{center} \caption{ Top 10 Defensemen in APM60 (minimum 700 minutes) } \label{apm60d} {\small \begin{tabular}{llrrrrrrrrr} \addlinespace[.3em] \toprule Rk & Player & Pos & OPM60 & DPM60 & APM60 & Err & Mins & NG60 & APM & NG \\ \midrule 58 & Mike Green & D & 0.357 & 0.195 & 0.552 & 0.208 & 1334 & 1.02 & 12.3 & 22 \\ 61 & Nicklas Lidstrom & D & 0.242 & 0.307 & 0.549 & 0.266 & 1374 & 1.09 & 12.6 & 25 \\ 67 & Duncan Keith & D & 0.245 & 0.284 & 0.529 & 0.235 & 1532 & 0.73 & 13.5 & 19 \\ 83 & Paul Martin & D & $-$0.026 & 0.526 & 0.500 & 0.239 & 916 & 0.92 & 7.6 & 14 \\ 91 & Kent Huskins & D & 0.278 & 0.198 & 0.476 & 0.210 & 915 & 0.72 & 7.3 & 11 \\ 105 & Johnny Oduya & D & 0.342 & 0.116 & 0.458 & 0.204 & 1209 & 0.78 & 9.2 & 16 \\ 106 & Jeff Schultz & D & 0.234 & 0.220 & 0.454 & 0.219 & 1094 & 1.15 & 8.3 & 21 \\ 117 & Jan Hejda & D & 0.061 & 0.373 & 0.434 & 0.222 & 1269 & 0.08 & 9.2 & 2 \\ 120 & S. Robidas & D & 0.313 & 0.115 & 0.428 & 0.210 & 1316 & 0.12 & 9.4 & 3 \\ 142 & Andrei Markov & D & 0.370 & 0.036 & 0.405 & 0.235 & 1114 & 0.28 & 7.5 & 5 \\ \bottomrule \end{tabular} } \end{center} \end{table} Huskins and Schultz sneak onto the list of top defensemen in $APM/60$ in Table \ref{apm60d} with balanced ratings, despite being held off of the $OPM/60$ and $DPM/60$ top 10 lists. We note that excluding goalies, Schultz's most common linemates are Mike Green, Alexander Ovechkin, Nicklas Backstrom, and Alexander Semin. Schultz has accumulated fairly low goal and assist totals over the past three seasons while playing with some of league's best offensive players, and yet his $OPM/60$ estimates are still high. In his case, the model may not be properly separating his offensive contribution from those of his teammates. It is also possible that Schultz does a lot of little things on the ice that do not appear in box score statistics, but that contribute to his team's offensive success nonetheless. In hockey, it is difficult to separate offense and defense. A good defensive team, which can clear the puck from the defensive zone quickly, can help its offense by increasing its time of possession. Likewise, a team with a good puck possession offense can help its defense by simply keeping the puck away from the opposition. Time of possession data could help in separating offense and defense, but such data is not readily available. The model may or may not be doing a very good job of separating the two in some cases. See \cite{corey} for a discussion by Corey Pronman about the connection between offense and defense. \subsection{$APM$} Recall that $APM$ is a measure of the total (offensive and defensive) contribution of a player at even-strength in terms of net goals over an entire season. A hockey fan familiar with the traditional plus-minus statistic can think of $APM$ in the same way, remembering that $APM$ has been adjusted for both the strength of a player's teammates and the strength of his opponents. \begin{table}[h!] \begin{center} \caption{ Top 10 Players in APM } \label{apmp} {\small \begin{tabular}{llrrrrrrrrr} \addlinespace[.3em] \toprule Rk & Player & Pos & OPM & DPM & APM & Err & Mins & NG60 & APM60 & NG \\ \midrule 1 & Pekka Rinne & G & NA & 23.7 & 23.7 & 6.5 & 1680 & NA & 0.845 & NA \\ 2 & Henrik Lundqvist & G & NA & 22.0 & 22.0 & 11.8 & 3223 & NA & 0.410 & NA \\ 3 & Pavel Datsyuk & C & 15.4 & 6.2 & 21.6 & 4.9 & 1186 & 1.55 & 1.091 & 31 \\ 4 & Marty Turco & G & NA & 19.7 & 19.7 & 7.1 & 2787 & NA & 0.424 & NA \\ 5 & Dan Ellis & G & NA & 19.0 & 19.0 & 5.5 & 1509 & NA & 0.757 & NA \\ 6 & Chris Mason & G & NA & 18.3 & 18.3 & 6.7 & 2384 & NA & 0.460 & NA \\ 7 & Zach Parise & LW & 12.6 & 3.0 & 15.6 & 4.6 & 1164 & 1.20 & 0.807 & 23 \\ 8 & Joe Thornton & C & 12.0 & 3.6 & 15.6 & 4.6 & 1222 & 1.08 & 0.767 & 22 \\ 9 & Alex Ovechkin & LW & 15.2 & 0.2 & 15.4 & 5.3 & 1262 & 1.42 & 0.733 & 30 \\ 10 & Mikko Koivu & C & 6.6 & 8.1 & 14.7 & 4.2 & 1032 & 0.52 & 0.852 & 9 \\ \bottomrule \end{tabular} } \end{center} \end{table} Goalies dominate the top of the list of best players in $APM$ in Table \ref{apmp}, as is common with many of the advanced metrics used by hockey analysts. This trend can also be seen in Figure \ref{apmfig}. We reiterate again that other statistics are preferred over $APM/60$ and $APM$ for estimating the contribution of goalies. \begin{table}[h!] \begin{center} \caption{ Top 10 Skaters in APM } \label{apms} {\small \begin{tabular}{llrrrrrrrrr} \addlinespace[.3em] \toprule Rk & Player & Pos & OPM & DPM & APM & Err & Mins & NG60 & APM60 & NG \\ \midrule 3 & Pavel Datsyuk & C & 15.4 & 6.2 & 21.6 & 4.9 & 1186 & 1.55 & 1.091 & 31 \\ 7 & Zach Parise & LW & 12.6 & 3.0 & 15.6 & 4.6 & 1164 & 1.20 & 0.807 & 23 \\ 8 & Joe Thornton & C & 12.0 & 3.6 & 15.6 & 4.6 & 1222 & 1.08 & 0.767 & 22 \\ 9 & Alex Ovechkin & LW & 15.2 & 0.2 & 15.4 & 5.3 & 1262 & 1.42 & 0.733 & 30 \\ 10 & Mikko Koivu & C & 6.6 & 8.1 & 14.7 & 4.2 & 1032 & 0.52 & 0.852 & 9 \\ 11 & Marian Gaborik & RW & 10.2 & 4.3 & 14.5 & 3.2 & 853 & 1.01 & 1.018 & 15 \\ 14 & Sidney Crosby & C & 14.4 & $-$0.9 & 13.5 & 3.7 & 1059 & 1.04 & 0.766 & 18 \\ 15 & Duncan Keith & D & 6.2 & 7.3 & 13.5 & 6.0 & 1532 & 0.73 & 0.529 & 19 \\ 16 & Jason Pominville & RW & 5.4 & 7.4 & 12.9 & 4.4 & 1052 & 0.55 & 0.733 & 10 \\ 17 & Nicklas Lidstrom & D & 5.5 & 7.0 & 12.6 & 6.1 & 1374 & 1.09 & 0.549 & 25 \\ \bottomrule \end{tabular} } \end{center} \end{table} Pavel Datysuk has won the Selke Trophy in the 2007-08 and 2008-09 seasons, and is widely regarded as one of the top two-way players in the game, at least among forwards. He is third among players in $APM$, and he leads the list of top skaters in Table \ref{apms} by a wide margin. It should also be pointed out that all of the other players on this list are still within two standard errors of the top spot. Interestingly, Crosby and Ovechkin are the players with the lowest defensive estimates on this list, which hurts their overall ratings. Since forwards dominated the list in Table \ref{apms}, we list the top 10 defensemen separately in Table \ref{apmd}. \begin{table}[h!] \begin{center} \caption{ Top 10 Defensemen in APM } \label{apmd} {\small \begin{tabular}{llrrrrrrrrr} \addlinespace[.3em] \toprule Rk & Player & Pos & OPM & DPM & APM & Err & Mins & NG60 & APM60 & NG \\ \midrule 15 & Duncan Keith & D & 6.2 & 7.3 & 13.5 & 6.0 & 1532 & 0.73 & 0.529 & 19 \\ 17 & Nicklas Lidstrom & D & 5.5 & 7.0 & 12.6 & 6.1 & 1374 & 1.09 & 0.549 & 25 \\ 20 & Mike Green & D & 7.9 & 4.3 & 12.3 & 4.6 & 1334 & 1.02 & 0.552 & 22 \\ 40 & Stephane Robidas & D & 6.9 & 2.5 & 9.4 & 4.6 & 1316 & 0.12 & 0.428 & 3 \\ 43 & Johnny Oduya & D & 6.9 & 2.3 & 9.2 & 4.1 & 1209 & 0.78 & 0.458 & 16 \\ 45 & Jan Hejda & D & 1.3 & 7.9 & 9.2 & 4.7 & 1269 & 0.08 & 0.434 & 2 \\ 50 & Zdeno Chara & D & 6.6 & 2.2 & 8.8 & 5.6 & 1441 & 0.68 & 0.367 & 16 \\ 63 & Jeff Schultz & D & 4.3 & 4.0 & 8.3 & 4.0 & 1094 & 1.15 & 0.454 & 21 \\ 64 & Keith Ballard & D & 3.8 & 4.5 & 8.3 & 4.3 & 1392 & 0.22 & 0.359 & 5 \\ 70 & Ian White & D & 6.9 & 0.6 & 7.6 & 4.1 & 1343 & 0.18 & 0.338 & 4 \\ \bottomrule \end{tabular} } \end{center} \end{table} The top 3 in $DPM/60$ (Table \ref{dpm60d}) are once again in the top 3 here, though in a different order. One player we have not discussed is Jan Hejda, who seems to be an underrated player according to his $DPM$ estimate. Looking at his traditional box score statistics, we see that during both the 2007-2008 season (+20, 13 more than the second highest Blue Jacket) and the 2008-2009 season (+23, 11 more than the second highest Blue Jacket) seasons, Hejda led his team in plus-minus by a wide margin. His numbers were much worse during his injury shortened 2009-2010 season in which the entire Blue Jackets team struggled defensively, but his performance in the previous two seasons is one reason the model could be giving him a high estimate for $DPM$. Also, his two most common linemates, Mike Commodore (2.71) and Rick Nash (2.69) have a higher $GA/60$ than does Hejda (2.24), which may be helping his defensive rating. \section{Discussion of the Model}\label{discussion} We now discuss several aspects of the formation and analysis of our model. In Section \ref{adv} we summarize some advantages of $APM$, including that $APM$ is independent of teammates, opponents, and box score statistics. We discuss disadvantages of $APM$ in Section \ref{disadv}, including statistical noise and difficulties in computing the estimates. We discuss the selection of the explanatory variables and response variables in Section \ref{variables}, and selection of the observations in Section \ref{observations}. Closely related to the selection of the components in the model are the assumptions that we made, and we discuss those in Section \ref{assumptions}. The main assumptions discussed in that section are that in hockey teams play offense and defense concurrently, that goalies do not contribute on offense, and that there are no interactions between players. One of the main disadvantages of $APM$ listed in Section \ref{disadv} is the errors associated with the estimates, and we discuss these errors in greater detail in Section \ref{errors}. Also in that section, we give top 10 lists of the players with the highest and lowest errors in $APM/60$ and $APM$. Finally, in Section \ref{futurework}, we finish with some concluding remarks and give two ideas for future work: modeling special teams situations, and accounting for the zone in which each shift starts. \subsection{Advantages of $APM$}\label{adv} As with any metric of its kind, $APM$ has its advantages and disadvantages. As we have mentioned previously, the most important benefit of $APM$ is that a player's $APM$ does not depend on the strength of that player's teammates or opponents. A major downside of the traditional plus-minus statistic is that it \textit{does} depend on both teammates and opponents, so it is not always a good measure of a player's individual contribution. For example, a player on a below-average team could have a traditional plus-minus that is lower than average simply because of the linemates he plays with on a regular basis. An average player on a hypothetical line with Wayne Gretzky and Mario Lemieux would probably have a traditional plus-minus that is very high, but that statistic would not necessarily be a good measure of his contribution to his team. On the other hand, the coefficients in our model, which we use to estimate a player's $APM$, are a measure of the contribution of a player when he is on the ice versus when he is off the ice, independent of all other players on the ice. Another benefit of $APM$ is that the estimate, in theory, incorporates all aspects of the game, not just those areas that happen to be measured by box score statistics. Box score statistics do not describe everything that happens on the ice. For example, screening the goalie on offense and maintaining good positioning on defense are two valuable skills, but they are not directly measured using box score statistics. $APM$ is like traditional plus-minus in that it attempts to measure how a player effects the outcome on the ice in terms of goals scored by his team on offense and goals allowed by his team on defense. A player's personal totals in goals, assists, points, hits, and blocked shots, for example, are never used in computing $APM$. Nothing is assumed about the value of these box score statistics and how they impact a player's and a team's performance. Another benefit of our model is that we make minimal \textit{ad hoc} assumptions about about which positions deserve the most credit for goal scoring or goal prevention. We do not assume, for example, that goalies or defensemen deserve more credit than forwards in goal prevention, or that forwards deserve more of the credit when a goal is scored. From Figure \ref{opmfig}, it seems that forwards contribute more than defensemen to goal scoring, but no such assumption was made during the formation of the model. The one assumption about position we did make in our first model in Section \ref{model1} was that goalies do not contribute on offense (see Section \ref{goalies}). \subsection{Disadvantages of $APM$}\label{disadv} One main drawback of the $APM$ estimates is statistical noise. In particular, the standard errors in the $APM$ estimates for goalies are currently high. A priority in future research is to take measures to reduce the errors. We discuss the errors in detail in Section \ref{errors}. Another drawback of $APM$ is that the estimates do not include shootouts, and do not include the value of either penalties drawn by a player or penalties taken by a player. A team's performance in shootouts has a big impact on their place in the standings. Shootout specialists can be very valuable to a team during the regular season, and ideally shootout performance would be accounted for in $APM$. Penalties drawn and taken also impact the outcome of a game. Penalties drawn by a player lead to more power plays for that player's team, which in turn leads to more goals for his team. Likewise, penalties taken by a player lead to more power plays for, and more goals for, the opposing team. If the value of shootout performance, penalties drawn, and penalties taken were estimated using another method that gives results the units of goals per game or goals per season, those values could easily be combined with $APM$. Another difficulty with $APM$ is that the data required to calculate it is large, difficult to obtain in a usable form, and difficult to work with. Collecting and managing the data was easily the most time-consuming aspect of this research. Also, the data required for this model is only available (at least publicly) for very recent seasons. This model could not be used to estimate the value of Wayne Gretzky, Mario Lemieux, and Bobby Orr, for example, independent of their teammates and opponents. The final downside to $APM$ is that the model requires knowledge of linear regression or linear algebra, and is not easily computed from traditional statistics. The mathematics required makes the calculation of $APM$ accessible to fewer hockey fans. It was a priority to ensure that at least the estimates themselves could be easily understood, even if the methods of calculating them are not. \subsection{Selection of the variables}\label{variables} We now make a few remarks on how we chose the explanatory variables and the response variable in the model. For the model, we included players who played more than 4000 shifts during the 2007-2008, 2008-2009, and 2009-2010 seasons. In terms of minutes played, this cutoff is roughly 200 minutes per season on average. Players with less than 4000 shifts during those seasons would have very noisy estimates which would not be very reliable. Increasing the 4000 shift cutoff would have reduced the errors slightly, but we would have also obtained estimates for fewer players. In our model, the units of the coefficients are the same as the units of $y$. We wanted to estimate a player's contribution to his team in terms of goals per 60 minutes, so we chose our response variable with those same units. The choice of goals per 60 minutes for the units of our estimates was important because we could rate players based on this statistic, and we could also convert this rate statistic to a counting statistic, total goals over an entire season, using the minutes played by each player. The resulting estimates have the units of goals, and they can be easily compared with traditional plus-minus as well as advanced metrics already in existence. Also, a priority was to ensure our estimates could be easily interpreted by the average hockey fan. Since the units of $APM$ are goals over an entire season, any hockey fan familiar with traditional plus-minus can understand the meaning of $APM$. \subsection{Selection of the observations}\label{observations} Recall that we define a shift to be a period of time during the game when there are no substitutions made. During the 2007-2008, 2008-2009, and 2009-2010 seasons, there were 990,861 shifts. We consider only shifts that take place at even-strength (5-on-5, 4-on-4, 3-on-3), and we also require that two goalies be on the ice. All power play and empty net situations were removed. We noticed some errors in the data. There is a minimum of four players (counting goalies) and a maximum of 6 players (counting goalies) that can be on the ice for a team at the same time. However, for some shifts, there are less than four players or more than six players on the ice for a team. These shifts may have occurred in the middle of a line change, during which it is difficult to record in real-time which players are on or off the ice. Such shifts were removed. We also note that five games had missing data, and a few more games had incomplete data, such as data for just one or two periods. The equivalent of about 10 games of data is missing out of a total of 3,690 games during the three seasons in question. After removing shifts corresponding to empty net situations and special teams situations, and shifts where errors were identified, 798,214 shifts remained. \subsection{Discussion of assumptions}\label{assumptions} Some of the assumptions used in the model require discussion. First, in our Ilardi-Barzilai-type model (Section \ref{model1}), we split each shift into two observations, one corresponding to the home team being on offense, and one corresponding to the away team being on offense. We assume that in hockey a team plays offense and defense concurrently during the entire shift, and we give the two observations equal weight. This assumption of concurrency was suggested by Alan Ryder and was used in \cite{ryder}. In other sports, offense and defense are more distinct and more easily defined. However, because of the chaotic nature of play in hockey, defining what it means for a team to be on offense is tricky. Even if one could define what it means to be ``on offense", the data needed to determine if a team is on offense might not be available. Alteratively, we could say that the split into to two observations with equal weights was made by assuming that for each shift, a team was playing offense for half the shift and playing defense for half the shift. One problem with this assumption is that there may be some teams that spend more time with the puck than without it. \subsubsection{Goalie contribution on offense}\label{goalies} In our first model (Section \ref{model1}), we ultimately decided to treat goalies differently than skaters by including only a defensive variable for each goalie. This decision was based on the the assumption that a goalie's contribution on offense is negligible. This assumption is debatable. There are some great puck-handling goalies, and some poor ones, and that could affect both the offensive and defensive performance of their team. Some analysts have attempted to quantify the effects of puck handling for goalies and have come up with some interesting results. See, for example, \cite{goalieassists}. While we ultimately decided against including offensive variables for goalies, we did try the model both ways, and compared the results. We compared the offensive results for skaters, and defensive results for both skaters and goalies. First, the defensive coefficients, the $DPM$ results, and the errors associated with them, stayed very similar for all skaters and goalies. The offensive coefficients, and the $OPM$ estimates, stayed similar for most skaters when goalies were included, but in some extreme cases, the results varied greatly. For example, Henrik Lundqvist's offensive rating was extremely high, with very high errors. As a result, the offensive results for several New York Rangers were significantly lower when goalies were included. It was as if the model was giving Lundqvist much of the credit for offensive production, while giving less credit to the skaters. The standard errors in $OPM$ for these Rangers also increased. Similarly, three Detroit Red Wings goalies, Dominic Hasek, Chris Osgood, and Jimmy Howard, had very low offensive ratings. Several Detroit players saw a significant boost in offensive production when goalies were included. Once again, the errors for these players saw a significant increase. One problem with these changes in offensive estimates is that the goalie ratings are extremely noisy and are not very reliable, so the effect that the goalie ratings had on the skater ratings cannot be considered reliable either. On the other hand, there could be some positives gained from including goalies. Recall that we do not consider empty net situations in our model, so anytime each skater is on the ice, he is on the ice with a goalie. For teams who rely very heavily on one goalie, that goalie could get 90\% of the playing time for his team during the season. That goalie's variable could be acting similar to an indicator variable for that team. So the goalie's offensive estimates could be a measure of some sort of team-level effect, or coaching effect, on offensive production. For example, a low estimate for a goalie's $OPM$ could be considered partially as an adjustment for an organizational philosophy, or a coaching system, that favors a more conservative, defensive-minded approach. In the end, we decided against including offensive variables for goalies in our first model because of the noisiness of the goalies' results, the effect that it had on the skaters' offensive ratings, the increase in interactions with other players, and the increase in errors that came with those changes. Note that in our second model we do not have separate offensive and defensive variables for any of the players, including goalies, and goalies are considered on offense. So we have one model that does not include goalies for offensive purposes, and one model that does. When we average the results of the two models, we balance the effects of including goalies in one model, and excluding them in the other model. \subsubsection{Interactions between players} By not including interaction terms in the model, we do not account for interactions between players. Chemistry between two particular teammates, for example, is ignored in the model. The inclusion of interaction terms could reduce the errors. The disadvantages of this type of regression would be that it is much more computationally intensive, and the results would be harder to interpret. \subsection{Discussion of Errors}\label{errors} In the introduction, and elsewhere in this paper, we noted that Henrik and Daniel Sedin have a much higher error than other players with a similar number of shifts. One reason for this high error could be that the twin brothers spend most of their time on the ice together. Daniel spent 92\% of his playing time with Henrik, the highest percentage of any other player combination where both players have played over 700 minutes. Because of this high colinearity between the twins, it is difficult to separate the individual effect that each player has on the net goals scored on the ice. It seems as though the model is giving Henrik the bulk of the credit for the offensive contributions, and Daniel most of the credit for defense. Henrik's defensive rating is strangely low given his low goals against while on the ice. Likewise, Daniel's offensive rating is unusually low. \begin{table}[h!] \begin{center} \caption{ Top 10 Players in Highest Err in APM60 (minimum 700 minutes) } \label{err60high} {\small \begin{tabular}{llrrrrrrrr} \addlinespace[.3em] \toprule Rk & Player & Pos & APM60 & Err & Mins & Teammate.1 & min1 & Teammate.2 & min2 \\ \midrule 69 & Henrik Sedin & C & 0.424 & 0.328 & 1169 & D.Sedin & 83\% & R.Luongo & 76\% \\ 73 & Daniel Sedin & LW & 0.710 & 0.326 & 1057 & H.Sedin & 92\% & R.Luongo & 77\% \\ 143 & Ryan Getzlaf & C & 0.501 & 0.288 & 1116 & C.Perry & 83\% & J.Hiller & 49\% \\ 157 & B. Morrow & LW & 0.141 & 0.283 & 805 & M.Ribeiro & 73\% & M.Turco & 71\% \\ 161 & Corey Perry & RW & 0.370 & 0.282 & 1130 & R.Getzlaf & 82\% & J.Hiller & 48\% \\ 199 & T. Holmstrom & LW & 0.175 & 0.269 & 724 & P.Datsyuk & 87\% & N.Lidstrom & 51\% \\ 205 & N. Lidstrom & D & 0.549 & 0.266 & 1374 & B.Rafalski & 70\% & P.Datsyuk & 49\% \\ 210 & David Krejci & C & 0.642 & 0.265 & 912 & T.Thomas & 60\% & B.Wheeler & 49\% \\ 218 & N. Kronwall & D & 0.405 & 0.264 & 1055 & B.Stuart & 46\% & C.Osgood & 46\% \\ 221 & Jason Spezza & C & 0.390 & 0.263 & 1075 & D.Alfredss & 60\% & D.Heatley & 59\% \\ \bottomrule \end{tabular} } \end{center} \end{table} The ten players with the highest error in $APM/60$ are shown in Table \ref{err60high}. Note that if we do not impose a minutes played minimum, the list is entirely made up of players who played less than 200 minutes, so we have restricted this list to players that have played more than 700 minutes on average over the last three seasons. The Sedins have significantly larger errors than the next players in the list, and all of the players in this list are ones who spent a large percent of their time on the ice with a particular teammate or two. In Table \ref{err60lowa}, we list the players with the lowest errors in $APM/60$. \begin{table}[h!] \begin{center} \caption{ Top 10 Players in Lowest Err in APM60 (minimum 700 minutes) } \label{err60lowa} {\small \begin{tabular}{llrrrrrrrr} \addlinespace[.3em] \toprule Rk & Player & Pos & APM60 & Err & Mins & Teammate.1 & min1 & Teammate.2 & min2 \\ \midrule 1 & Mike Smith & G & 0.222 & 0.144 & 1704 & M.St. Loui & 27\% & V.Lecavali & 26\% \\ 2 & D. Roloson & G & 0.141 & 0.145 & 2259 & T.Gilbert & 24\% & S.Staios & 24\% \\ 3 & Martin Biron & G & 0.012 & 0.149 & 2077 & B.Coburn & 28\% & K.Timonen & 25\% \\ 4 & J. Labarbera & G & 0.165 & 0.150 & 1205 & A.Kopitar & 22\% & P.O'Sulliv & 20\% \\ 5 & Cam Ward & G & 0.317 & 0.151 & 2655 & E.Staal & 32\% & T.Gleason & 31\% \\ 6 & Alex Auld & G & 0.245 & 0.152 & 1409 & C.Phillips & 17\% & D.Heatley & 15\% \\ 7 & A. Niittymaki & G & 0.177 & 0.154 & 1448 & B.Coburn & 19\% & K.Timonen & 17\% \\ 8 & Ilja Bryzgalov & G & 0.203 & 0.154 & 2942 & Z.Michalek & 35\% & E.Jovanovs & 32\% \\ 9 & Marty Turco & G & 0.424 & 0.154 & 2787 & S.Robidas & 35\% & T.Daley & 35\% \\ 10 & Manny Legace & G & 0.253 & 0.155 & 1680 & B.Jackman & 28\% & E.Brewer & 24\% \\ \bottomrule \end{tabular} } \end{center} \end{table} Goalies dominate this list, partially because of playing time, and partially because goalies share the ice with a wider variety of players than skaters do. Also, with the exception of Turco and Ward, all of the goalies in the list have the benefit of playing with more than one team, further diversifying the number of players that they have played with. While goalies have lower errors in $APM/60$ than skaters do, that changes with playing-time dependent $APM$ statistic (see Figure \ref{errorsfig}). If we remove goalies from consideration, we get the top 10 skaters in lowest standard errors as shown in Table \ref{err60lows}. \begin{table}[h!] \begin{center} \caption{ Top 10 Skaters in Lowest Err in APM60 (minimum 700 minutes) } \label{err60lows} {\small \begin{tabular}{llrrrrrrrr} \addlinespace[.3em] \toprule Rk & Player & Pos & APM60 & Err & Mins & Teammate.1 & min1 & Teammate.2 & min2 \\ \midrule 20 & J. Bouwmeester & D & 0.170 & 0.177 & 1532 & T.Vokoun & 50\% & M.Kiprusof & 29\% \\ 24 & Olli Jokinen & C & 0.172 & 0.178 & 1165 & T.Vokoun & 30\% & M.Kiprusof & 27\% \\ 28 & D. Seidenberg & D & 0.090 & 0.179 & 1067 & C.Ward & 45\% & T.Vokoun & 28\% \\ 29 & C. Ehrhoff & D & 0.296 & 0.180 & 1285 & E.Nabokov & 54\% & R.Luongo & 28\% \\ 30 & Ian White & D & 0.338 & 0.181 & 1343 & V.Toskala & 53\% & M.Stajan & 30\% \\ 31 & Bryan Mccabe & D & 0.193 & 0.181 & 1172 & T.Vokoun & 53\% & V.Toskala & 22\% \\ 35 & P. O'Sullivan & C & 0.074 & 0.183 & 1042 & A.Kopitar & 30\% & J.Labarber & 23\% \\ 36 & Greg Zanon & D & 0.003 & 0.183 & 1307 & D.Hamhuis & 30\% & N.Backstro & 27\% \\ 39 & Keith Ballard & D & 0.359 & 0.184 & 1392 & T.Vokoun & 48\% & D.Morris & 23\% \\ 40 & Lee Stempniak & RW & 0.463 & 0.184 & 982 & M.Legace & 27\% & V.Toskala & 25\% \\ \bottomrule \end{tabular} } \end{center} \end{table} Most of these players are defensemen and have been on the ice for a high number of minutes. Every player in Table \ref{err60lows} has played for two or more teams during the past three seasons. Stempniak, who has the lowest minutes played on the list, probably made the list because he has played for three different teams. Also, Stempniak shared the ice with his most common linemate, Manny Legace, for just 27\% of his time on the ice. We can look at the overall trend in $APM/60$ errors and $APM$ errors in Figure \ref{errorsfig}. \begin{figure}[h!] \centering \includegraphics[width=\textwidth]{errorsfig} \caption{Kernel Density Estimation for $APM/60$ Errors and $APM$ Errors.}\label{errorsfig} \end{figure} The trends we noticed in the top 10 lists continue outside of the top 10. In particular, it appears that goalies tend to have the lowest errors in $APM/60$. The downside is that since many goalies get much more playing time than skaters, those goalies have much noisier $APM$ estimates. \subsection{Future work and conclusions}\label{futurework} We highlight two improvements that could be made to our model. The most important addition to this work would probably be to include a player's offensive and defensive contributions in special teams situations. While performance at even-strength is a good indicator of a player's offensive and defensive value, some players seem to have much more value when they are on special teams. Teemu Selanne is an example of one player who has the reputation of being a power play goal-scoring specialist, and we could quantify his ability using an estimate that includes special teams contributions. Another improvement we could make is accounting for whether a shift starts in the offensive zone, defensive zone, or neutral zone, and accounting for which team has possession of the puck when the shift begins. The likelihood that a goal is scored during a shift is dependent on the zone in which a shift begins and is dependent on which team has possession of the puck when the shift begins. See, for example, \cite{thomaspossession}. This fact could be affecting the estimates of some players. Players who are relied upon for their defensive abilities, for example, may start many of their shifts in their own zone. This trend could result in more goals against for those players than if they had started most of their shifts in their offensive zone. In the current model, there is no adjustment for this bias. We believe that $APM$ is a useful addition to the pool of hockey metrics already in existence. The fact that $APM$ is independent of teammates and opponents is the main benefit of the metric. $APM$ can be improved by addressing special teams and initial zones, and reducing the statistical noise is a priority in future research. We hope that GM's, coaches, hockey analysts, and fans will find $APM$ a useful tool in their analysis of NHL players. \bibliographystyle{BEPress}
{ "attr-fineweb-edu": 1.554688, "attr-cc_en_topic": 0, "domain": "arxiv" }
BkiUddY5qhLA6_8s3HCx
\section{Introduction} SRM (single round matches) are regular programming contests organized by TopCoder \cite{TopCoder} since 2001. TopCoder has developed its own rating system, which has been used throughout its 19 year history \cite{TCrating}. Various shortcomings of SRM ratings have been documented on TopCoder forums and elsewhere. Our purpose here is not to discuss these issues, but to provide a concrete proposal to remedy them. Players have sometimes asked: What would SRM ratings be if they were Elo-based? We would like to obtain a reasonable answer to this question. There isn't a standard method for applying Elo ratings to rounds of more than two players. We could consider a ranking of players as the set of results between each pair of players. However, such results are not independent from each others: a ranked result is the product of a single performance by each player. Instead, we will consider the ranking as a tournament. From desirable properties, we will deduce a formula for performance in ranked games. We then use this formula to specifically rate SRM. Our goal is to more accurately predict the players' performances after each round. \section{Performance} \subsection{Rank performance} Let $\mathit{RP}(n,r)$ be the performance of a player ranked $r$ in a round of $n$ players. We consider the ranking as an elimination tournament, and count the number of wins. If the tournament has $2^n$ players, the winner must win $n$ 1-1 rounds, so we have: \begin{equation} \mathit{RP}(2^n,1) = n \end{equation} The player ranked $2$ has one less win than rank $1$: \begin{equation} \mathit{RP}(n,2) = \mathit{RP}(n,1) - 1 \end{equation} Multiplying both $r$ and $n$ by any $k > 0$ does not change the number of wins: \begin{equation} \mathit{RP}(n,r) = \mathit{RP}(kn,kr) \end{equation} With these constraints, we find the only solution is: \begin{equation} \mathit{RP}(n,r) = \log_2{n} - \log_2{r} \end{equation} \subsection{Expectations} We use the standard Elo formula for expectations \cite{WikiElo}. A player with rating $R_i$ is expected to outperform a player with rating $R_j$ with probability: \begin{equation} \mathit{WP}(R_i,R_j) = \frac{1}{1+10^{(R_j-R_i)/400}} \end{equation} As with SRM ratings, the rating of new players is initially 1200. \subsection{Ties} In programming contests like SRM, ties generally reflect a limitation of the problem set or scoring, rather than an unexpected performance of the tied players. We would like ties to not affect the ratings. We experimented with several accounting methods. We find the most accurate is to split the ties equally in actual and expected ranks, $.5$ in each. Slightly less accurate is to compute expected ranks regardless of ties, then split the tied ranks as is expected. We now split the ties equally. \subsection{Relative performance} We have the results of a round, which may include multiple divisions and ties. We consider the results of each division separately. The result of a round is a list of scores $S$, where $s_i$ is the score obtained by player $i$. We compute rank and expected rank for each player $i$: \begin{equation} \begin{gathered} r_\mathit{ties} = \frac{1}{2}|\{s_j \in S: j \neq i, s_j = s_i\}| \\ r = 1 + |\{s \in S: s > s_i\}| + r_\mathit{ties} \\ \hat{r} = 1 + \sum_{j: s_j \neq s_i}\mathit{WP}(R_j,R_i) + r_\mathit{ties} \end{gathered} \end{equation} The relative performance of a player in the round is the difference of actual and expected rank performance: \begin{equation} P = \mathit{RP}(n,r) - \mathit{RP}(n,\hat{r}) \end{equation} This can be written as: \begin{equation} P(\hat{r},r) = \log_2{\hat{r}} - \log_2{r} \end{equation} The performance $P$ equals a number of wins above or below expectations in a tournament of appropriately matched players. \subsection{Properties} \begin{figure}[h!] \centering\includegraphics[width=.7\textwidth]{RankPerf} \caption{Rank performance} \label{fig:RankPerf} \end{figure} Rank performance is convex (Figure~\ref{fig:RankPerf}). The sum of performances of a set of ranks is maximal if the ranks are distinct, and minimal if the ranks are uncertain or tied. Having split ties equally in actual and expected ranks, the expected ranks are at least as tied as the actual ranks. This ensures the sum of performances in a round is positive or zero. We have: \begin{equation} \sum{P} \ge 0 \end{equation} \begin{figure}[h!] \centering\includegraphics[width=.7\textwidth]{Perf_log} \caption{Relative performance} \label{fig:Perf_log} \end{figure} The rank is expected in logarithm (Figure~\ref{fig:Perf_log}). A player's performance may average $0$ in several ways: \begin{equation} \begin{gathered} P(a, b) + P(b, c) + P(c, a) = 0 \\ P(r, rx^{k}) = k.P(r, rx) \\ P(\hat{r}, r) = P(k\hat{r}, kr) \end{gathered} \end{equation} We will compute $\Delta R \propto P$, preserving these properties. \subsection{Accuracy} We define the prediction error for expected and actual ranks $(\hat{r}, r)$: \begin{equation} E(\hat{r},r) = |P| = |\log_2 \hat{r} - \log_2{r}| \end{equation} Our primary accuracy metric is the average error for all participants in all rated rounds. \section{Proposed rating system} We have the performances of each player in a SRM. We would like to compute rating changes $ΔR$ which better predict future performances. \subsection{Initial factor} With a prior of $1 : 1$, a performance $P$ is outperforming expectations by a factor $1 : 2^P$. A rating difference $\Delta R$ is expecting a better performance by a factor $1 : 10^{\Delta{R}/400}$. We can convert the performances in rating units: \begin{equation} \begin{gathered} \Delta R = \frac{400}{\log_2{10}} P = K_0 \cdot P \\ K_0 \approx 120 \end{gathered} \end{equation} If we expected the same performances the next round, and had no other information, this would be a reasonable $\Delta R$. Thus we consider $ΔR \propto P$ as Elo-based or 'Elo'. \subsection{Fixed K} \begin{figure}[h!] \centering\includegraphics[width=.7\textwidth]{fixed_K.png} \caption{Fixed $K$} \label{fig:fixed_K} \end{figure} Here we compute $\Delta R = K \cdot P$, with K minimizing the error. We find the most accurate choice is $K = 65$ (Figure~\ref{fig:fixed_K}). As we add factors in $\Delta R$, we automatically adjust $K$ to the most accurate. \subsection{Weight factor} \begin{figure}[h!] \centering\includegraphics[width=.7\textwidth]{alpha.png} \caption{Weight factor} \label{fig:W} \end{figure} A player's prior rating should weigh in $\Delta R$ according to the player's experience. Let $W$ be the weight, such that $\Delta R \propto 1/W$, and $\mathit{NR}$ the round number for the player, starting with $1$. We experimented with several possibilities, and find the best results with $W = \sqrt{\mathit{NR}}$. Figure~\ref{fig:W} shows choices of $W = \mathit{NR}^\alpha$. Thus we choose $W = \sqrt{\mathit{NR}}$. \begin{equation} ΔR = K \cdot \frac{P}{W} \end{equation} $K = 182$, the error is .7562. \subsection{Variance factor} A player's performance has variance for various reasons, not necessarily predicting future performance. We compute the derivative of a player's expected performance per change in rating: \begin{equation} \begin{gathered} P = - \log_2{\hat{r}} \\ \frac{dP}{dR} = - \frac{1}{\ln{2}.\hat{r}}.\frac{d\hat{r}}{dR} \\ \hat{r} = 1 + \sum_{j \neq i} w_j \\ w_j = \mathit{WP}(R_j,R) \end{gathered} \end{equation} We write the expected rank 1 as the loss:win ratio relative to the player's current rating: \begin{equation} \begin{gathered} \hat{r}(dR) = 10^{-dR/400} + \sum \frac{1}{1+10^{(R+dR-R_j)/400}} \\ \frac{d\hat{r}}{dR} = -\frac{\ln{10}}{400}(1 + \sum w_j(1-w_j)) \\ \frac{d{P}}{dR} = \frac{1}{K_0}.\frac{1 + \sum w_j(1-w_j)}{1 + \sum w_j} \end{gathered} \end{equation} In units of performance, we have: \begin{equation} P' = \frac{1 + \sum w_j(1-w_j)}{1 + \sum w_j} \end{equation} Extrapolating linearly, we can solve $P = 0$ with $ΔR = K_0 \cdot \frac{P}{\mathit{P'}}$. \begin{itemize} \item $\mathit{P'} \le 1$, so $ΔR = K_0 \cdot P$ may be conservative. \item $\mathit{P'} \to 1$ when $\hat{r} \to 1$. No extrapolation is possible. \item $\mathit{P'} \to \frac{1}{n}$ when $\hat{r} \to n$. Here $P \neq 0$ has bits of precision, hence more likely predicts future performance. \end{itemize} We compute $ΔR$ in a direction accounting for $P'$ and a multiplier $C$: \begin{equation} ΔR \propto \frac{P}{1 + C\cdot\mathit{P'}} \end{equation} \begin{figure}[h!] \centering\includegraphics[width=.7\textwidth]{C.png} \caption{Choice of $C$} \label{fig:C} \end{figure} We find the most accurate $C \approx 4$ (Figure~\ref{fig:C}). Thus we choose $C = 4$. We define the variance factor $V = 1 + C\cdot\mathit{P'}$. We now have: \begin{equation} ΔR = \frac{K}{V} \cdot \frac{P}{W} \end{equation} $K = 415$, the error is .7536. \subsection{Maximum factor} An unexpected performance predicts future performance less reliably than a consistent performance. The ratings gain accuracy if we limit the magnitude of $P$ to a maximum $M$ using a sigmoid. A sigmoid preserves symmetry, and exactly linearity of performance around $0$. We define the adjusted performance $\mathit{PA}$. We find the best results with: \begin{equation} \mathit{PA} = \frac{P}{1+\frac{|P|}{M}} \end{equation} \begin{figure}[h!] \centering\includegraphics[width=.7\textwidth]{M.png} \caption{Choice of $M$} \label{fig:M} \end{figure} We find the most accurate $M \approx 6.75$ (Figure~\ref{fig:M}). Thus we choose $M = 6.75$. The rating change for each player is now: \begin{equation} ΔR = \frac{K}{V} \cdot \frac{\mathit{PA}}{W} \end{equation} $K = 600$, the error is .7513. \subsection{Natural inflation} We have computed $ΔR \propto P$ which make the ratings more accurate after a round. Each player is expected a performance $0$, thus has an expected $ΔR = 0$. The ratings are stable in expectation. Because performances have a positive sum, more rating is won by the outperforming players than is lost by the underperforming players. The ratings have net inflation. Because the players gain experience during a round, the players on average have better performances after the round. Thus inflation better predicts future performance than deflation. Because we minimized the prediction error, the average $ΔR$ should approximately predict the next performances of participants relative to non-participants. We define this rate of inflation as natural inflation. For comparison with SRM ratings, we consider natural inflation as stable. We refer to this Elo-based implementation as 'Elo' in our results. \begin{center} 'Elo': $K = 600; C = 4; M = 6.75$ \end{center} \subsection{Stability} We have $ΔR \propto P$, predicting the players' relative performances. Now we would like to estimate the performances over time, such that players with stable performances have stable ratings. To maintain relative accuracy, we will not adjust our current parameters. \subsection{Performance bonus} As long as $ΔR \propto P$ exactly, the expected rating change of any player is zero. However, the expected performance in a round is a better performance than not participating. Players having practiced already have better performances before the round. Thus $ΔR = 0$ in expectation predicts future performance less accurately than $ΔR > 0$. We adjust the expectation to expect inflation, as if the ratings increased. We choose a parameter $B = ΔR$, then add the difference in expected performance to each player's performance: \begin{equation} ΔP = \frac{B}{K_0} \cdot P' \end{equation} \begin{figure}[h!] \centering\includegraphics[width=.7\textwidth]{B.png} \caption{Choice of $B$} \label{fig:B} \end{figure} We find $B \approx 6$ (Figure~\ref{fig:B}), and little accuracy can be gained from this parameter alone. \subsection{New players} So far we have a constant rating $R_0 = 1200$ for new players. However, the performance of new players is not constant. As the performance of existing players improves, SRMs become more difficult. This raises the barrier to entry. Before participating, potential players have opportunities to practice on recent rounds. Some may be experienced players coming from other platforms. Thus the performance of new players improves over time. Thus we adjust the initial rating for inflation. \begin{itemize} \item We choose a parameter $N$, the increase in $R_0$ per 100 rounds. \item After each round, adjust $R_0$ by $\frac{N}{100}$. \item Simultaneously, adjust $B$ for accuracy. \end{itemize} \begin{figure}[h!] \centering\includegraphics[width=.7\textwidth]{N.png} \caption{Choice of $N$} \label{fig:N} \end{figure} We find most accurately $N = 63, B = 27$ (Figure~\ref{fig:N}). Thus, our parameters estimating a stable performance are: \begin{center} 'Elo2': $N = 63; B = 27$ \end{center} \pagebreak \section{Results} We first compare our 'Elo' implementation to SRM ratings. Table~\ref{fig:results1} shows the average computed $ΔR$, performances, and prediction error, using our definitions. \begin{itemize} \item The first row is our primary metric. \item The players by experience. \item Existing players, in each division. \item In each division, the top and bottom half ranks. \end{itemize} \setlength{\tabcolsep}{5pt} \begin{table}[htbp] \begin{center} \begin{tabular}{c|c|c|c|c|c|c|c} \multicolumn{2}{c|}{} & \multicolumn{2}{c|}{ΔR} & \multicolumn{2}{c|}{perf} & \multicolumn{2}{c}{err} \\ \cline{3-8} Players & ratings & Elo & SRM & Elo & SRM & Elo & SRM \\ \hline All & 790127 & 19.5 & -20.8 & 0.224 & 0.298 & \textbf{0.7513} & 0.8301 \\ \hline First round & 77818 & 65.3 & -186.2 & 0.391 & -0.373 & \textbf{0.8019} & 1.0766 \\ \hline 2-7 rounds & 202858 & 32.5 & -25.1 & 0.285 & 0.244 & \textbf{0.6649} & 0.7105 \\ \hline 8-24 rounds & 217893 & 12.2 & 10.9 & 0.239 & 0.511 & \textbf{0.7454} & 0.8204 \\ \hline 25-74 rounds & 195019 & 5.1 & 4.5 & 0.171 & 0.395 & \textbf{0.7671} & 0.8263 \\ \hline 75-199 rounds & 84825 & 0.8 & -0.7 & 0.050 & 0.284 & \textbf{0.8701} & 0.9093 \\ \hline 200+ rounds & 11714 & -0.5 & -2.0 & -0.043 & 0.241 & \textbf{0.8965} & 0.9347 \\ \hline Existing & 712309 & 14.5 & -2.7 & 0.206 & 0.372 & \textbf{0.7458} & 0.8032 \\ \hline Division 1 & 388024 & 11.7 & -1.8 & 0.169 & 0.241 & \textbf{0.6930} & 0.7230 \\ \hline Division 2 & 324285 & 17.8 & -3.7 & 0.251 & 0.528 & \textbf{0.8090} & 0.8992 \\ \hline D1 H1 & 194004 & 34.1 & 52.1 & 0.636 & 0.796 & \textbf{0.9705} & 1.0406 \\ \hline D1 H2 & 194020 & -10.7 & -55.8 & -0.298 & -0.314 & 0.4155 & \textbf{0.4054} \\ \hline D2 H1 & 162141 & 58.7 & 55.3 & 0.881 & 1.348 & \textbf{1.1311} & 1.3831 \\ \hline D2 H2 & 162144 & -23.2 & -62.7 & -0.379 & -0.291 & 0.4869 & \textbf{0.4153} \\ \end{tabular} \end{center} \caption{Player statistics, Elo : SRM} \label{fig:results1} \end{table} Because SRM ratings use a different definition of performance, we include results using independent metrics. Each round, we compute rank correlation statistics \cite{RankCorrelation}: \begin{itemize} \item Kendall's $\tau$ \item Spearman's $\rho$ \end{itemize} For each metric, we compute the fraction of rounds where 'Elo' better predicted the result than SRM ratings, splitting ties equally. Table~\ref{fig:results2} shows the percentages. \setlength{\tabcolsep}{15pt} \begin{table}[htbp] \begin{center} \begin{tabular}{c|c|c|c|c} Rounds & \# & $\tau$ & $\rho$ & err \\ \hline All & 1950 & 87.4 & 87.1 & 89.2 \\ \hline Division 1 & 1196 & 79.8 & 79.6 & 83.1 \\ \hline Division 2 & 754 & 99.4 & 98.9 & 98.9 \\ \hline 2-16 players & 151 & 48.3 & 48.7 & 54.0 \\ \hline 17-99 players & 204 & 65.9 & 65.4 & 69.1 \\ \hline 100-199 players & 337 & 86.5 & 85.9 & 91.5 \\ \hline 200-399 players & 437 & 92.7 & 92.7 & 94.7 \\ \hline 400-599 players & 310 & 95.5 & 95.5 & 96.1 \\ \hline 600-799 players & 268 & 98.1 & 98.1 & 96.3 \\ \hline 800+ players & 242 & 99.2 & 97.9 & 98.3 \\ \end{tabular} \end{center} \caption{Round statistics, \% Elo : SRM} \label{fig:results2} \end{table} Tables ~\ref{fig:results3}, ~\ref{fig:results4} and ~\ref{fig:results5} compare our 'Elo2' and 'Elo' implementations. \setlength{\tabcolsep}{5pt} \begin{table}[htbp] \begin{center} \begin{tabular}{c|c|c|c|c|c|c|c} \multicolumn{2}{c|}{} & \multicolumn{2}{c|}{ΔR} & \multicolumn{2}{c|}{perf} & \multicolumn{2}{c}{err} \\ \cline{3-8} Players & ratings & Elo2 & Elo & Elo2 & Elo & Elo2 & Elo \\ \hline All & 790127 & 25.7 & 19.5 & 0.217 & 0.224 & \textbf{0.7497} & 0.7513 \\ \hline First round & 77818 & 90.6 & 65.3 & 0.426 & 0.391 & \textbf{0.7997} & 0.8019 \\ \hline 2-7 rounds & 202858 & 40.4 & 32.5 & 0.280 & 0.285 & 0.6649 & 0.6649 \\ \hline 8-24 rounds & 217893 & 15.6 & 12.2 & 0.217 & 0.239 & \textbf{0.7442} & 0.7454 \\ \hline 25-74 rounds & 195019 & 7.3 & 5.1 & 0.156 & 0.171 & \textbf{0.7650} & 0.7671 \\ \hline 75-199 rounds & 84825 & 2.4 & 0.8 & 0.050 & 0.050 & \textbf{0.8662} & 0.8701 \\ \hline 200+ rounds & 11714 & 0.6 & -0.5 & -0.033 & -0.043 & \textbf{0.8924} & 0.8965 \\ \hline Existing & 712309 & 18.6 & 14.5 & 0.194 & 0.206 & \textbf{0.7443} & 0.7458 \\ \hline Division 1 & 388024 & 14.9 & 11.7 & 0.163 & 0.169 & \textbf{0.6915} & 0.6930 \\ \hline Division 2 & 324285 & 22.9 & 17.8 & 0.232 & 0.251 & \textbf{0.8074} & 0.8090 \\ \hline D1 H1 & 194004 & 37.0 & 34.1 & 0.622 & 0.636 & \textbf{0.9668} & 0.9705 \\ \hline D1 H2 & 194020 & -7.2 & -10.7 & -0.297 & -0.298 & 0.4163 & \textbf{0.4155} \\ \hline D2 H1 & 162141 & 63.2 & 58.7 & 0.852 & 0.881 & \textbf{1.1199} & 1.1311 \\ \hline D2 H2 & 162144 & -17.3 & -23.2 & -0.388 & -0.379 & 0.4949 & \textbf{0.4869} \\ \end{tabular} \end{center} \caption{Player statistics, Elo2 : Elo} \label{fig:results3} \end{table} \setlength{\tabcolsep}{15pt} \begin{table}[htbp] \begin{center} \begin{tabular}{c|c|c|c|c} Rounds & \# & $\tau$ & $\rho$ & err \\ \hline All & 1950 & 58.4 & 57.3 & 64.6 \\ \hline Division 1 & 1196 & 59.5 & 58.3 & 63.8 \\ \hline Division 2 & 754 & 56.6 & 55.7 & 65.8 \\ \hline 2-16 players & 151 & 52.6 & 54.0 & 59.3 \\ \hline 17-99 players & 204 & 56.6 & 54.4 & 60.3 \\ \hline 100-199 players & 337 & 57.7 & 56.8 & 58.9 \\ \hline 200-399 players & 437 & 62.2 & 61.1 & 65.7 \\ \hline 400-599 players & 310 & 61.1 & 61.0 & 66.1 \\ \hline 600-799 players & 268 & 59.0 & 57.8 & 69.8 \\ \hline 800+ players & 242 & 53.3 & 50.4 & 69.8 \\ \end{tabular} \end{center} \caption{Round statistics, \% Elo2 : Elo} \label{fig:results4} \end{table} \pagebreak \setlength{\tabcolsep}{6pt} \begin{table}[htbp] \begin{center} \begin{tabular}{c|c|c|c|c|c|c|c} \multicolumn{2}{c|}{} & \multicolumn{3}{c|}{ΔR} & \multicolumn{3}{c}{R} \\ \cline{3-8} Rating & err & $\mu$ & $\sigma$ & max & init & median & max \\ \hline SRM & 0.8301 & -20.8 & 130 & 900 & 1200 & 1043 & 3923 \\ \hline initial & 0.7908 & 22.2 & 142 & 1202 & 1200 & 1301 & 4663 \\ \hline K & 0.7784 & 14.7 & 75 & 642 & 1200 & 1258 & 3955 \\ \hline W & 0.7562 & 18.5 & 102 & 1762 & 1200 & 1327 & 3734 \\ \hline C & 0.7536 & 19.6 & 103 & 2417 & 1200 & 1341 & 3684 \\ \hline Elo & 0.7513 & 19.5 & 102 & 1548 & 1200 & 1369 & 3709 \\ \hline B & 0.7511 & 21.5 & 104 & 1581 & 1200 & 1384 & 3801 \\ \hline Elo2 & 0.7497 & 25.7 & 105 & 1591 & 1970 & 2095 & 4468 \\ \end{tabular} \end{center} \caption{Statistics, each parameter} \label{fig:results5} \end{table} \section{Conclusion} Our 'Elo' implementation generally better predicts the players' relative performances than SRM ratings. The ranks are also better predicted, with predictions improving with the number of players. Our 'Elo2' adjustments improve stability and slightly improve accuracy. Our primary metric considers all the players' performances in all SRM. The predictions are empirically accurate, on average, but not necessarily precise for any player or at any time. We include source code and charts in appendix. Other results are posted on our web site \cite{EloTC}. \section{Acknowledgements} We would like to thank Ivan Kazmenko for reviewing this paper and helpful comments. \bibliographystyle{plain}
{ "attr-fineweb-edu": 1.625977, "attr-cc_en_topic": 0, "domain": "arxiv" }
BkiUbK7xK6Ot9Pm3tcEW
\section{Conclusion}\label{sec:conclusion} In this paper, we propose \fullmodel (\model) which incorporates the historical customer and product reviews with the rating information to generate a personalized review summary and predict the sentiment of the review. In order to extract useful information from the historical reviews, we propose a graph-based reasoning module to capture the customer review preference and the commonly focused aspect of the product. To encourage the model to learn different information from two types of reviews, we introduce a contrastive learning objective for the graph reasoning module. Finally, we also propose a graph attention layer to dynamically incorporate the graph for generating a fluent summary. Extensive experiments on four benchmark datasets demonstrate that \model outperforms state-of-the-art baselines in both review summarization and sentiment classification tasks. \section{Experimental Setup}\label{sec:exp-setup} \subsection{Dataset}\label{sec:dataset} \begin{table}[t] \caption{Dataset statistics.} \label{tbl:data-statistics} \centering \resizebox{1.0\columnwidth}{!}{ \begin{tabular}{lllll} \toprule & Sports & Movies & Toys & Home \\ \midrule \# of training samples & 183,714 & 1,200,601 & 104,296 & 367,395 \\ \# of validation samples & 9,000 & 20,000 & 8,000 & 10,000 \\ \# of test samples & 9,000 & 20,000 & 8,000 & 10,000 \\ Avg. words of review & 108.3 & 167.1 & 125.9 & 120.9 \\ Avg. words of summary & 6.7 & 6.6 & 6.8 & 6.8 \\ \bottomrule \end{tabular} } \end{table} To validate the effectiveness of the proposed method, we conduct the experiments on Amazon review~\cite{McAuley2015ImageBasedRO}. We adopt product reviews from the following four domains as our datasets: Sports, Movies, Toys, and Home. In our experiments, each data sample consists of a review text, a summary, and a rating. We randomly split each dataset into training, validation, and testing sets. We list some basic statistics for this dataset in Table~\ref{tbl:data-statistics}. We regard the rating of review as a sentiment label, which is an integer in the range of $[1, 5]$. \subsection{Evaluation Metrics} Following the previous review summarization works~\cite{Xu2021Transformer,Chan2020A}, we also use the word overlap-based \textbf{Rouge score}~\cite{lin2004rouge} as the evaluation metric for the summarization task. % Due to the limited space, we only report the F-value of Rouge in other experiments. Since only using automatic evaluation metrics can be misleading~\cite{Stent2005EvaluatingEM}, we also conduct the human evaluation by three well-educated Master students to judge 50 randomly sampled summaries. % The statistical significance of differences observed between the performance of two runs is tested using a two-tailed paired t-test and is denoted using \dubbelop\ (or \dubbelneer) for strong significance at $\alpha=0.01$. For the sentiment classification, we use the \textbf{macro F1} (M.F1) and \textbf{balanced accuracy} (B.Acc)~\cite{Brodersen2010TheBA} as the evaluation metric which is widely used in text classification methods~\cite{Zhou2018Differentiated,Chan2020A}. Since the rating of review is very imbalanced (\eg 58.0\% of reviews give rating 5 in Toys dataset), we employ the B.Acc which is a variant of the accuracy for imbalanced datasets~\cite{Brodersen2010TheBA,Kelleher2015FundamentalsOM}. \subsection{Implementation Details} We implement our experiments using PyTorch~\cite{Paszke2019PyTorchAI} based on the Transformers~\cite{wolf-etal-2020-transformers}. We train our model on two NVIDIA V100 GPUs for one day. We employ the pre-trained BART-base model (with 6 layers for encoder and decoder, the number of attention head is 12, and the hidden size is 768) to initialize part of the parameters. The hyper-parameter $\alpha$ is set to 0.1. \subsection{Comparisons} To prove the effectiveness of each module, we conduct ablation studies on Toys dataset, which removes each key module in \model, and then form $8$ baseline methods shown in Table~\ref{tab:ablations}. Apart from the ablation study, we also compare with the following summarization baselines: \noindent (1) \texttt{PGNet}~\cite{See2017Get} is an RNN-based abstractive summarization method with a copy mechanism. \noindent (2) \texttt{Transformer}~\cite{Vaswani2017Attention} is an encoder-decoder structure based solely on the attention mechanism~\cite{Bahdanau2015NeuralMT}. \noindent (3) \texttt{C.Transformer}~\cite{Gehrmann2018BottomUp} is a variant model of \texttt{Transformer} which equips with the copy mechanism. \noindent (4) \texttt{BART}~\cite{Lewis2020BART} is a pre-trained Transformer by using denoising mask language model as the training objective, and it has achieved SOTA performance on many text generation tasks. \noindent (5) \texttt{BART+Concat} is a baseline method that we concatenate product and customer reviews into the input of \texttt{BART} to generate the summary. \noindent (6) \texttt{BART+Senti} is an intuitive baseline method that we use the \texttt{BART} to generate the summary and use the encoder hidden state to predict the review sentiment as an auxiliary task. \noindent (7) \texttt{BART+Con.+Sen.} adds the sentiment classification task to the \texttt{BART+Concat}. \noindent (8) \texttt{HSSC+Copy}~\cite{Ma2018A} is a review summarization model for jointly improving review summarization and sentiment classification with copy mechanism~\cite{See2017Get}. \noindent (9) \texttt{Max+Copy}~\cite{Ma2018A} A bi-directional gated recurrent unit~\cite{Chung2014EmpiricalEO} based sequence-to-sequence architecture with copy mechanism and it uses the hidden states of the encoder to predict the review sentiment. \noindent (10) \texttt{DualView}~\cite{Chan2020A} is a dual-view model that jointly improves the performance of the review summarization task and sentiment classification task. \noindent (11) \texttt{TRNS}~\cite{Xu2021Transformer} propose the state-of-the-art transformer-based reasoning framework for personalized review summarization. We also employ a strong sentiment classification method \texttt{DARLM}~\cite{Zhou2018Differentiated} and fine-tune the \texttt{BERT}~\cite{Devlin2019BERTPO} on the sentiment classification task. \begin{table*} \centering \caption{Sentiment classification results. $\dagger$ means the results are referred from the original paper.} \label{tab:rating} \resizebox{1.1\columnwidth}{!}{ \begin{tabular}{c |c c| cc|cc| cc} \bottomrule \multirow{2}{4em}{System} & \multicolumn{2}{c}{Movies} &\multicolumn{2}{c}{Toys}&\multicolumn{2}{c}{Sports}&\multicolumn{2}{c}{Home} \\ \cline{2-9} & M.F1 & B.Acc & M.F1 & B.Acc & M.F1 & B.Acc & M.F1 & B.Acc \\ \hline \hline \multicolumn{9}{@{}l}{\emph{Joinly Training for Review Summarization \& Sentiment classification}} \\ \texttt{Max+copy} $\dagger$ &60.67&59.23&54.24&53.66&53.27&51.99&58.51&57.42\\ \texttt{HSSC+copy} $\dagger$ &60.69&59.32&54.38&53.32&53.14&52.63&58.78&58.02\\ \texttt{DualView} $\dagger$ &62.00&60.52&55.70&54.06&56.31&54.28&60.73&59.63\\ \texttt{BART+Senti} &61.99&61.13&58.88&57.12&58.63&57.17&61.88&61.23\\ \hline \multicolumn{9}{@{}l}{\emph{Sentiment classification Only}} \\ \texttt{DARLM} $\dagger$ &57.75&53.96&50.58&48.67&49.60&47.95&54.49&53.43\\ \texttt{BERT} & 59.82& 58.98& 57.42& 56.32&56.49 &55.38& 58.97& 58.6 \\ \texttt{Roberta} & 60.87& 60.13& 58.04 &57.27&\textbf{61.41} &\textbf{60.93}& 61.41& 60.93 \\ \hline \model (Our) & \textbf{63.42}& \textbf{61.99}& \textbf{61.56}& \textbf{61.08}&60.58 &58.87& \textbf{62.46}& \textbf{62.26} \\ \bottomrule \end{tabular} } \end{table*} \begin{table}[t] \centering \caption{Ablation models for comparison.} \label{tab:ablations} \resizebox{0.9\columnwidth}{!}{ \begin{tabular}{@{}l|l} \toprule Acronym & Gloss \\ \hline \hline \model-CR & \multicolumn{1}{p{6cm}}{w/o \textbf{C}ustomer \textbf{R}eviews}\\ \model-PR & \multicolumn{1}{p{6cm}}{w/o \textbf{P}roduct \textbf{R}eviews}\\ \model-MIX & \multicolumn{1}{p{6cm}}{w/ \textbf{MIX}ed customer and product reviews}\\ \model-CL & \multicolumn{1}{p{6cm}}{w/o \textbf{C}ontrastive \textbf{L}oss}\\ \model-SC & \multicolumn{1}{p{6cm}}{w/o \textbf{S}entiment \textbf{C}lassification Loss (Eq.~\ref{equ:senti-cls-loss})}\\ \model-SEG & \multicolumn{1}{p{6cm}}{w/o \textbf{S}entiment-\textbf{E}nhanced \textbf{G}eneration (Eq.~\ref{equ:senti-aware-gen-gated})}\\ \model-HR & \multicolumn{1}{p{6cm}}{w/o \textbf{H}istorical \textbf{R}ating}\\ \model-Graph & \multicolumn{1}{p{6cm}}{Remove the \textbf{Graph} module and generator attends to original historical reviews}\\ \bottomrule \end{tabular} } \end{table} \section{Experimental Results} \label{sec:exp-result} \subsection{Overall Performance}\label{sec:overall-exp} We compare our model with the baselines listed in Table~\ref{tab:main}. Our model performs consistently better on four datasets than other state-of-the-art review summarization models with improvements of 10.53\%, 17.69\%, and 10.34\% on the Home dataset, and achieves 11.16\%, 17.09\%, and 10.90\% improvements on the Toys dataset compared with \texttt{BART+Senti} in terms of F-value of Rouge-1, Rouge-2, and Rouge-L respectively. This demonstrates that our method achieves better performance than previous strong baselines not only because we use a pre-trained language model, but also because we use model historical customer and product reviews separately and incorporate the historical rating information. \begin{table} \begin{center} \caption{Human evaluation results on Toys dataset.} \label{tab:human_eval} \resizebox{0.7\columnwidth}{!}{ \begin{tabular}{c|ccc} \toprule Method & Fluency & Informativeness & Factuality \\ \hline \hline \texttt{BART+Senti} & 2.18 & 1.92 & 2.14 \\ \texttt{BART} & 2.02 & 2.02 & 1.91 \\ \cellcolor{blue!15} \texttt{DualView} & \cellcolor{blue!15} 2.10 & \cellcolor{blue!15} 1.98 & \cellcolor{blue!15} 2.16 \\ \midrule \model & \textbf{2.32}\dubbelop & \textbf{2.34}\dubbelop & \textbf{2.28}\dubbelop \\ \bottomrule \end{tabular} } \end{center} \end{table} For the human evaluation, we asked the annotators to rate the generated summary according to its fluency, informativeness, and factuality on the Toys dataset. The rating score ranges from 1 to 3, with 3 being the best. Table~\ref{tab:human_eval} lists the average scores, showing that \model outperforms the other baseline models in terms of fluency, informativeness, and factuality. The kappa statistics are 0.44, 0.52, and 0.43 for fluency, informativeness, and factuality, and that indicates moderate agreement between annotators. We also conduct the paired student t-test between \model and \texttt{DualView} and obtain $p < 0.05$ for all metrics. From this experiment, we find that the \model outperforms the baselines in all metrics, which demonstrates the \model can generate fluent summaries with correct facts. For the sentiment classification task, from Table~\ref{tab:rating}, we can find that our model also achieves superior performance among most of the review sentiment classification methods and state-of-the-art methods. \subsection{Ablation Studies}\label{sec:ablation-exp} \begin{table} \centering \caption{Ablation study on the Toys dataset.} \label{tab:ablation-exp} \resizebox{0.9\columnwidth}{!}{ \begin{tabular}{c |c| c| c|c|c} \bottomrule Model & Rouge-1 & Rouge-2 & Rouge-L &B.Acc& M.F1\\ \hline \hline \model &\textbf{20.62}&8.84&\textbf{19.94}&61.08&\textbf{61.56}\\ \model-CR &19.55&8.29&19.01&60.24&60.05\\ \model-PR &19.62&8.12&19.22&\textbf{61.13}&60.15 \\ \model-MIX &20.00&7.92&18.43&60.02&59.86\\ \model-CL &19.40&\textbf{8.92}&19.55&59.71&60.59\\ \model-SC &19.53&7.51&19.06&-&- \\ \model-SEG & 19.89 & 7.99 & 18.45 & 59.27 & 59.55 \\ \model-HR &19.63&8.45&19.39&59.12&60.01\\ \model-Graph &18.86&7.94&19.01&59.82&59.26\\ \bottomrule \end{tabular} } \end{table} \begin{table} \centering \caption{Influence of using different graph edges. Experiments are conducted on Toys dataset.} \label{tab:graph-edge} \resizebox{1.0\columnwidth}{!}{ \begin{tabular}{c |c| c| c|c|c} \bottomrule Graph Construction & Rouge-1 & Rouge-2 & Rouge-L &B.Acc & M.F1\\ \hline \hline \model &\textbf{20.62}&\textbf{8.84}&\textbf{19.94}&\textbf{61.08}&\textbf{61.56}\\ w/o time-aware &19.58&8.21&19.31&60.71&61.40 \\ w/o rating-aware &20.01&8.51&18.99&59.87&60.12 \\ \bottomrule \end{tabular} } \end{table} \begin{figure*} \centering \includegraphics[width=2.0\columnwidth]{figs/num-review-3.pdf} \caption{ Influence of historical reviews number. } \label{fig:review-nums2} \end{figure*} We report the Rouge F-value of ablation models in Table~\ref{tab:ablation-exp}. Most ablation models perform worse than \model, which demonstrates the preeminence of each module. As proven by previous research work~\cite{Chan2020A}, jointly training the review sentiment classification and review summarization can boost the performance of both tasks, and our ablation models \model-SC and \model-SEG also verify this conclusion. \textbf{Using two types of historical reviews}. Ablation model \model-CR and \model-PR verify that only using the historical customer or product review cannot obtain good performance on both review summarization and sentiment classification tasks. \model-CL performs worse than \model, which proves our contrastive learning objective can help the model extract different useful information from two review sources. \textbf{Separately modeling the two types of reviews by graph reasoning module}. In this paper, we propose a novel graph-based review reasoning module to capture the relationship between historical reviews of customer and product separately in \S~\ref{sec:review-relation-model}, and the ablation model \model-MIX and \model-Graph verify the effectiveness of our review reasoning module. When we replace the multi-layer graph module by directly attending to the BART-based review representations (\model-Graph), the Rouge-1 F-score decreases by 9.23\% compared to \model. \textbf{Using rating information of historical reviews}. One of our contributions is to incorporate the rating of historical reviews. We concatenate the embedding of the review rating into the review representation in Equation~\ref{equ:review-repre}. To verify the effectiveness of incorporating historical rating, we test the ablation model \model-HR which removes the rating embedding in Equation~\ref{equ:review-repre}. Experimental results show that the performance decreases by 5.04\%, 2.84\%, and 2.60\% compared to \model in terms of Rouge-1, Rouge-L, and M.F1. \subsection{Effectiveness of Review Relationship}\label{sec:graph-edge-exp} In this paper, we propose to use two different review relationships in \S~\ref{sec:review-relation-model}: chronological and same rating relationships. To verify the effectiveness of these two edges, we conduct two ablation models which only use one type of relationship (edge). From the results shown in Table~\ref{tab:graph-edge}, we can find that the time-aware edge contributes most to the summarization and the rating-aware edge contributes most to the sentiment classification. This phenomenon demonstrates that the historical rating is useful for sentiment classification. \subsection{Discussion of Using Contrastive Learning}\label{sec:contrastive-exp} \begin{table} \centering \caption{Contrastive dropout rate. Experiments are conducted on Toys dataset.} \label{tab:dropout} \resizebox{0.9\columnwidth}{!}{ \begin{tabular}{c |c| c| c|c|c} \bottomrule Dropout Rate & Rouge-1 & Rouge-2 & Rouge-L &B.Acc & M.F1\\ \hline \hline 0.01 &20.54&8.58&19.85&29.92&29.70\\ 0.05 &20.31&8.53&19.58&32.18&30.65\\ 0.1 & 20.33 & 8.64& 19.61 & 59.4 & 59.8\\ 0.6 &\textbf{20.62}&\textbf{8.84}&\textbf{19.94}&\textbf{61.08}&\textbf{61.56} \\ 0.9 & 19.54 & 8.12 & 18.93 & 27.3 & 27.43 \\ \bottomrule \end{tabular} } \end{table} In \model, we employ a contrastive learning constraint (Equation~\ref{equ:contrastive-dropout}) to encourage the two graph modules to learn heterogeneous information from customer reviews and product reviews separately. We use a simple but efficient dropout mask to obtain the augmented data representations. In this section, we investigate the performance influence of using different dropout rates on the mask. Table~\ref{tab:dropout} shows the results for both tasks. We can find that (1) using 60\% dropout rate on the data representation mask is the best choice; (2) \model can stably achieve strong performance on both tasks when the dropout rate is in a reasonable range $(0.01 \sim 0.9)$ without a big performance difference. \subsection{Influence of Historical Review Numbers}\label{sec:review-num-exp} Since our model requires multiple historical customer and product reviews to capture the customer's writing style and personal preference and the common focused aspect of the product, it is an intuitive research question how many historical reviews should be used in our model? We conduct an experiment using a different number of historical reviews for customer and product respectively and show the result for both tasks in Figure~\ref{fig:review-nums2}, which is conducted on the Toys dataset. From these results, we can find that using 3 historical reviews for the customer and product respectively can achieve the best performance for both tasks. \subsection{Case Study} \input{case.tex} From the case shown in Table~\ref{tab:case}, although all the methods generate fluent answers, the facts described in \texttt{BART+Senti} and \texttt{DualView} are not comprehensive. And \model describes both the positive and negative aspects, and this review writing style is the same as the customer's previous writing style. \section{Introduction} Most of the E-commerce portals provide a review panel for customers who have already bought the product to write a review of their experience. Many customers not only write a review but also give a short summary of the review, which can help other consumers to know the product better. Different from other text summarization tasks, the product review summarization is highly personalized and product-centric~\cite{Li2019Towards,Amplayo2021AspectControllable}. To be more specific, a good summary should (1) reflect the \textbf{persona writing preference} of the customer and (2) describe the \textbf{commonly focused aspects} of this product which are useful for future customers. These two requirements can potentially be met by utilizing historical reviews, where the customer's historical reviews reflect the writing style, and the historical product reviews describe commonly focused aspects. Following this direction, researchers proposed to incorporate the historical reviews~\cite{Xu2021Transformer,Liu2019Neural} for review summarization. However, these existing methods usually mix the historical reviews of customers and products together by concatenating them into a long sequence. Since we aim to learn the writing style from historical reviews of the customer and learn the main focused aspects of the product from the historical product reviews, these two reviews should play different roles in guiding the summarization process. Therefore, our first challenge is \textit{how to fully explore the two kinds of information from the two types of reviews and take advantage of their respective roles in generating summaries}. In the meantime, the review rating can be seen as a \textbf{high-level abstraction of the review} which reflects the satisfaction of the customer with the product. The customer rating reviews contain personal rating preferences, and the rating for the same product reflects the average user satisfaction with the product. Figure~\ref{fig:intro} shows an example of using two types of historical reviews and ratings can capture the actual user preference. Thus, modeling the historical review ratings can help the model understand the user's satisfaction with the product. To the best of our knowledge, the rating information has not been explored in related works~\cite{Ma2018A,Chan2020A}. Therefore, our second challenge is \textit{how to incorporate the rating of historical reviews to predict sentiment better and generate a personalized summary.} To tackle these two challenges, in this paper, we propose a personalized review summarization model named \fullmodel (\model). Different from previous methods, \model first (1) separately models the relationship between reviews of the customer and product by a graph reasoning model; (2) incorporates the rating information for the historical reviews. By these two methodologies, our model can understand the customer persona writing style and the main focused aspects of the product better, and improve the performance of two tasks. For the \textit{first challenge}, we construct two graphs for historical reviews of customer and product separately to capture the relationship and model the interaction between reviews. Since the two types of review are similar in literal, to force the model to learn customer writing style from customer reviews and extract the salient product aspects from product reviews, we propose a contrastive learning module that prevents the graph module from learning the homogeneous information from customer reviews and product reviews. And for the \textit{second challenge}, since the rating of review provides high-level information about the review, we employ the rating for the historical customer and product reviews to capture the rating preference of the customer and product respectively. Previous studies~\cite{Ma2018A,Chan2020A} show that jointly training the review sentiment classification (\aka rating prediction) model with the summarization model can boost the performance of both tasks. Motivated by these works, we first introduce \textit{historical} review ratings into the review sentiment classification task and propose a multi-task paradigm. Finally, we generate a personalized summary by incorporating the historical reviews and input reviews with a graph-attention layer. Experiments conducted on the benchmark datasets verify the effectiveness of our proposed model compared to the state-of-the-art baselines in sentiment classification and summarization tasks. \noindent To sum up, our contributions can be summarized as follows: $\bullet$ We propose to separately model the historical customer and product reviews to capture the personal style and commonly focused aspects of the product by a graph-based reasoning model. $\bullet$ We incorporate the rating of historical reviews in the summarization process, which provide high-level information for the review summarization. % $\bullet$ Experiments show the superiority of \model compared with state-of-the-art baselines on summarization and sentiment classification tasks. \section{Problem Formulation}\label{sec:problem-formulation} Given an input review $r = \{r_1, \cdots, r_{L_r}\}$ with $L_r$ tokens which is written by customer $u$ for product $p$, our goal is to generate a summary $\hat{y} = \{\hat{y}_1, \cdots, \hat{y}_{L_y}\}$ with $L_y$ tokens. To help the summarization model capture the customer style and preference and the common product aspects, we incorporate the historical reviews of customer $r^{u}$ and product $r^{p}$. We use $r^{u,k} = \{r^{u,k}_1, \cdots, r^{u,k}_{L_r}\}$ to denote the $k$-th review of the same customer of review $r$, and $r^{p,k} = \{r^{p,k}_1, \cdots, r^{p,k}_{L_r}\}$ to denote the $k$-th review of the same product of review $r$. Since we also use the sentiment classification as a multi-task, we use the rating $s^r$ for the input review $r$ and rating $s^{u}, s^{p}$ for historical reviews of customer and product respectively. Finally, we use (1) the difference between generated summary $\hat{y}$ and the ground truth summary $y$ and (2) the difference between the predicted rating and the ground truth rating as the training objective. \section{Preliminary}\label{sec:preliminary} \subsection{Text Generation with Transformer} Transformer~\cite{Vaswani2017Attention} is an encoder-decoder framework that captures the deep interaction between words in a sentence by using multi-head attention. We start by introducing the encoder in Transformer. It first projects the input text words into vector representation by an embedding matrix $e$ and then employs a multi-head self-attention mechanism. We project the input embedding $e(r)$ into query, key, and value which are three dependent vector spaces: \begin{equation} \begin{aligned}\label{equ:self-attn} & \operatorname{Attention}(r)= \\ & \operatorname{Softmax}\left(\frac{(e(r) W^Q) (e(r) W^K)}{\sqrt{d}}\right) (e(r) W^V), \end{aligned} \end{equation} where $W^Q, W^K, W^V$ are all trainable parameters, $e(r)$ are the embeddings of each token in the review $r$, and $d$ is the dimension of embedding vector. After interacting with other tokens in the input text, we apply the Feed-Forward Networks (FFN) on the output of Equation~\ref{equ:self-attn}: \begin{align}\label{equ:ffn} \mathrm{FFN}(x) = \max \left(0, x W_{1}+b_{1}\right) W_{2}+b_{2}, \end{align} where $x$ denotes the input of FFN which can be the output hidden state of Equation~\ref{equ:self-attn} for each word. To sum up, the encoder in the Transformer consists of multiple identical layers with multi-head self-attention (Equation~\ref{equ:self-attn}) and FFN layer (Equation~\ref{equ:ffn}), and we use the operator $\text{Enc}$ to denote this procedure: \begin{align}\label{equ:trans-enc} \{{\bf h}_{0}, {\bf h}_{1}, \cdots, {\bf h}_{L_r}\} = \text{Enc}({\text{[CLS]}, r_1, \cdots, r_{L_r}}), \end{align} where the output is the hidden states of each token in the input text $r$, and $\text{[CLS]}$ is a special token inserted at the start of the input text. The hidden state ${\bf h}_{0}$ of the special token $\text{[CLS]}$ can aggregate information from all the tokens~\cite{Liu2019RoBERTaAR,Devlin2019BERTPO,Gao2022HeteroQA}. In the decoder module, we first apply a multi-head self-attention layer on the mask output text embeddings which prevents attending to subsequent positions~\cite{Vaswani2017Attention}. Next, we modify the self-attention (shown in Equation~\ref{equ:self-attn}) as the cross-attention layer which uses the current decoding state as the query $(e(r) W^Q)$ and uses the hidden states of input review as key and value. Then, we use the FFN layer and linear projection layer with the softmax function to predict the word distribution of the generated text. To increase the representation ability of the transformer framework, researchers usually employ multiple encoder and decoder layers. \subsection{Pre-train Language Model} Recently, large-scale language models based on transformer have been explored further advanced the state-of-the-art on many language understanding~\cite{Zhang2021UNBERT,Gu2021An,Song2021BoB,Gu2021Partner,Gu2021DialogBERT} and generation tasks~\cite{Zhang2019DialoGPT,Feng2021Language}. These methods usually pre-train the transformer framework with mask language modeling~\cite{Devlin2019BERTPO,Yang2019XLNet} or text infilling~\cite{Lewis2020BART} task on large-scale text datasets. In this paper, we employ the pre-train language model BART~\cite{Lewis2020BART} as the backbone of our review summary generation model, which can increase the fluency of the generated summary. Although these pre-trained language models provide superior text generation ability, these methods usually use plain text as input and they cannot fully utilize the structure and manifold contextual information. Next, we will introduce how to fine-tune a language model to generate a better review summary by incorporating historical customer and product reviews. \section{\model Model}\label{sec:model} \begin{figure*} \centering \includegraphics[width=1.6\columnwidth]{figs/model_new.pdf} \caption{ Overview of \model. Our model can be divided into four parts: (1) \textit{Review Encoder} encodes the review text into a vector and concatenate the customer or product representation with rating embedding; (2) \textit{Historical Review Relationship Encoder} constructs graphs for two types of reviews and conduct reasoning on these graphs; (3) \textit{Sentiment Classification Module} predicts the sentiment of input review by incorporating the reasoning result, and introduces a contrastive learning objective; (4) \textit{Summarization Module} generates the review summary. } \label{fig:model} \end{figure*} \subsection{Overview} In this section, we introduce the \fullmodel (\model). An overview of \model is shown in Figure~\ref{fig:model}, which has four main parts: \noindent $\bullet$ \textbf{Review Encoder} encodes the review text into vector representation. \noindent $\bullet$ \textbf{Historical Review Reasoning Module} constructs the relationship of product and customer reviews separately and employs a graph model to conduct reasoning. \noindent $\bullet$ \textbf{Sentiment Classification Module} incorporates graph representations for historical reviews to predict the rating for the input review. In order to force the model to learn heterogeneous information from two graphs, we employ a contrastive learning objective. \noindent $\bullet$ \textbf{Summarization Module} first fuses the graph representations with the input review and then generates the summary, \subsection{Review Encoder}\label{sec:review-enc} To encode the reviews into vectors, we employ a pre-trained language model as the encoder: \begin{equation}\label{equ:utterance-encoder} \{{\bf h}^{*}_{0}, {\bf h}^{*}_{1}, \cdots, {\bf h}^{*}_{L_r}\} = \text{Enc}({\text{[CLS]}, r^{*}_1, \cdots, r^{*}_{L_r}}), \end{equation} where $\text{Enc}$ is the encoder (details in ~\S~\ref{sec:preliminary}) in BART which outputs the vector ${\bf h}^{*}_{i} \in \mathbb{R}^d$ of $i$-th word $r^{*}_i$ in review $r^{*}$, and $r^{*}$ can be input review $r$, product review $r^{p,k}$ and customer review $r^{u,k}$. % To obtain an overall representation of the review $r^{*}$, we extract the hidden state ${\bf h}^{*}_{0}$ of the input special token $\text{[CLS]}$ as the representation $\mathbf{\hat{r^{*}}} = {\bf h}^{*}_{0}$. Since the rating for historical customer reviews reflects the rating preference of the customer and the rating for the product reviews indicate the common sentiment for the customers who have already bought it, we propose to incorporate the rating into the review representation. Thus, we first introduce an embedding matrix $E_s \in \mathbb{R}^{5 \times d}$ for each rating score (1-5) and combine the ratings for reviews into the review representation (shown in Equation~\ref{equ:review-repre}). The product review written is highly associated with the customer's preference and the product attributes. To better understand the review, we propose to use customer embedding to store personal preference information. We use the embedding $\mathbf{u} \in \mathbb{R}^d$ as the representation for a customer of review $r^{u,k}$. Similarly, we also employ a product embedding $\mathbf{p} \in \mathbb{R}^d$ for the product $p$ of review $r^{p,k}$. The user embedding and product embedding are all trainable parameters that are jointly optimized when training the model. Finally, we combine the previous information as the final review representation: \begin{align}\label{equ:review-repre} \mathbf{r^{u,k}} &= {\bf h}^{u,k}_{0} + \mathbf{u} + E_s(s^{u,k}),\\ \mathbf{r^{p,k}} &= {\bf h}^{p,k}_{0} + \mathbf{p} + E_s(s^{p,k}), \\ \mathbf{r} &= {\bf h}^{r}_{0}, \end{align} where $\mathbf{r^{u,k}} \in \mathbb{R}^{d}$ is the representation for the $k$-th review written by the customer $u$, and $\mathbf{r^{p,k}} \in \mathbb{R}^{d}$ is the representation for the $k$-th review of product $p$. % \subsection{Historical Review Reasoning Module}\label{sec:review-relation-model} To model the relationship of product reviews and customer reviews separately, in this section, we propose to use a graph reasoning module. First, we construct the review graph by using $2$ types of edge to for product $\mathcal{G}_p$ and customer $\mathcal{G}_u$ reviews: \noindent $(1)$ \textbf{Time-aware Edge}: We first use the chronological relationship between reviews, which connects the review nodes according to the publish date. These relations can capture the dynamic rating tendency of users. \noindent $(2)$ \textbf{Rating-aware Edge}: Since the review with the same rating may share similar or related information, we also connect the review nodes with the same rating in each graph. Next, we use the review vector representation (in Equation~\ref{equ:review-repre}) as the initial node representation. After constructing the two graphs for product and customer reviews, we employ a \textbf{G}raph \textbf{C}onvolutional \textbf{N}etwork-based~\cite{Kipf2017SemiSupervisedCW,Tang2020Multihop,Cao2019Question} review reasoning module to conduct the message passing and reasoning between review nodes. In this module, we apply the multi-layer graph convolution to aggregate information from neighbor nodes connected by the two type edges. Since the different type edge represents different semantics, we should consider the edge type when passing information. Inspired by \textbf{R}elational \textbf{G}raph \textbf{C}onvolutional \textbf{N}etwork (RGCN)~\cite{Schlichtkrull2018ModelingRD}, we employ a local information aggregation scheme, which iteratively updates the node representation based on immediate neighbors. Different from GCN, RGCN propagates different information between nodes through the different type of relationships: \begin{align*} h_{i}^{(l+1)} = \sigma\left(\sum_{q \in \mathcal{Q}} \sum_{j \in \mathcal{N}_{i}^{q}} \textstyle\frac{1}{\left| \mathcal{N}_{i}^{q} \right|} W_{r}^{(l)} h_{j}^{(l)} + W_{0}^{(l)} h_{i}^{(l)}\right), \end{align*} where $l$ denotes the layer index, $h_{j}^{(l)}, h_{i}^{(l)}$ are node representations, $\mathcal{N}_{i}^{q}$ denotes the node $i$'s neighbor nodes which are connected with relation $q$, $\mathcal{Q}$ is the relation type set contains two type of relations, $W_{r}^{(l)}, W_{0}^{(l)}$ are all trainable parameters, and $\sigma$ is the activation function. After applying $L$ layers iterative updating by RGCN, we can obtain the updated node representation for each node $\{h_{1}^{(L)}, \dots, h_{L_r}^{(L)}\}$. Then, we employ a graph average pooling layer to combine the information from the graph nodes of customer and product reviews: \begin{align}\label{equ:graph-pooling} \mathbf{h_{u}} &= \text{avg}\left(\{h_{u,1}^{(L)}, \dots, h_{u,L_r}^{(L)}\}\right), \\ \mathbf{h_{p}} &= \text{avg}\left(\{h_{p,1}^{(L)}, \dots, h_{p,L_r}^{(L)}\}\right), \end{align} where $\text{avg}$ denotes the average graph pooling layer, and $\mathbf{h_{u}}$ and $\mathbf{h_{p}}$ are the graph representations for customer and product review graphs respectively. Since we aim to extract the customer's personal information from the historical customer reviews and capture the main aspects of the product from the historical product reviews, we employ a contrastive training objective~\cite{Gao2021SimCSE,Chen2020A,Caron2020Unsupervised,Li2021Contrastive,Tian2020Contrastive} to prevent the model from learning homogeneous information from two graph modules. Contrastive learning is an instance-wise discriminative approach that aims at making similar instances closer and dissimilar instances far from each other in representation space~\cite{He2020Momentum,Chen2021Wasserstein,Zhang2021Supporting,Yan2021ConSERT,Tong2021Directed,Jain2021Contrastive}. Thus, this contrastive training objective encourages the graph reasoning module to learn different information from the historical customer reviews and product reviews for input review summarization. To achieve better performance, it is important to design proper negative samples in contrastive learning. Since our model is to extract different information from the historical customer and product reviews, for the product review reasoning module, we use the product review representation as the positive sample and use other customer review representations in the mini-batch as negative samples. Our graph reasoning module is encouraged to learn a representation space where review representations from the same review type (\eg customer review or product review) are pulled closer and reviews from different review type are pushed apart. Inspired by the recent progress~\cite{liang2021RDrop,Gao2021SimCSE,Lee2021Contrastive,Giorgi2021DeCLUTR} of applying contrastive learning on the text data which uses simple but efficient independently sampled dropout masks on the representation to produce the data augmentation, we also use the same dropout on the vector representation of two graph reasoning results $\mathbf{h_{u}}$ and $\mathbf{h_{p}}$: % \begin{align}\label{equ:contrastive-dropout} \mathbf{\hat{h_{u}}} &= \text{Dropout}(\mathbf{h_{u}}), \\ \mathbf{\hat{h_{p}}} &= \text{Dropout}(\mathbf{h_{p}}). \end{align} For training the customer review reasoning module, we use a similar training method that uses the customer review representation as the positive sample and use other product review representations in the mini-batch as negative samples. Thus, we employ the contrastive loss function as an additional training objective: \begin{equation} \begin{aligned} \mathcal{L}_c = \log \frac{e^{\operatorname{sim}\left(\mathbf{h_{u}}, \mathbf{\hat{h_{u}}}\right) / \tau}}{\sum_{j=1}^{B} e^{\operatorname{sim}\left(\mathbf{h_{u}}, \mathbf{\hat{h_{p}}}_j \right) / \tau}} + \log \frac{e^{\operatorname{sim}\left(\mathbf{h_{p}}, \mathbf{\hat{h_{p}}}\right) / \tau}}{\sum_{j=1}^{B} e^{\operatorname{sim}\left(\mathbf{h_{p}}, \mathbf{\hat{h_{u}}}_j \right) / \tau}}, \end{aligned} \end{equation} where $B$ denotes the mini-batch, and the $\operatorname{sim}$ denotes the similarity function $\operatorname{sim}\left(a, b\right)=b^{\top} a / \tau$ and $\tau$ is the temperature. \subsection{Sentiment Classification Module} As shown in many previous studies~\cite{Ma2018A,Chan2020A}, jointly training the product review sentiment classification task with the review summarization task can boost the performance for both tasks. In this paper, we also follow this paradigm to employ this multi-task setting. However, previous studies~\cite{Ma2018A,Chan2020A} only use the input review itself when predicting the sentiment, the historical reviews of the customer contain the personal rating bias and the historical reviews provide the common focused aspect of the product. Thus, we fuse the historical customer and product reviews with the input review together by an attentive pooling layer: \begin{align} \hat{a} = \text{Softmax}\left( W_a [\mathbf{h_{u}} \oplus \mathbf{h_{p}} \oplus \mathbf{r}] + b_a \right) , \end{align} where $W_a, b_a$ are all trainable parameters and $\hat{a} \in \mathbb{R}^3$. Then, we apply a weighted sum operation by the attention score $\hat{a}$ and use a multilayer perceptron to predict the rating of the input review: \begin{align} z &= \hat{a}_1\mathbf{h_{u}} + \hat{a}_2\mathbf{h_{p}} + \hat{a}_3\mathbf{r} \in \mathbb{R}^d , \label{equ:senti-cls-repre} \\ \hat{s^r} &= \text{Softmax} (\text{MLP}(z)), \\ \mathcal{L}_{s} &= -\frac{1}{B} \sum_{i}^B s^r \log (\hat{s^r}), \label{equ:senti-cls-loss} \end{align} where $\hat{s^r} \in \mathbb{R}^5$ is the predicted rating distribution for input review $r$ over $5$ rating class. We employ the cross-entropy as the loss function $\mathcal{L}_{s}$ for this sentiment classification task. \input{results-all.tex} \subsection{Summarization Module} Finally, to incorporate the two graph representations which capture the customer's personal information and the product-specific information in the generation process of the summary, we propose to modify the original transformer framework. We first conduct the original self-attention and cross-attention layer in the transformer to incorporate the current decoded text and the input review $r$ respectively. After these two layers, we obtain the hidden state $\mathbf{h}^d_t$ for decoding step $t$. Next, we propose a graph-attention layer that extracts the useful knowledge from the nodes representation in customer review and product review graph: \begin{equation} \begin{aligned} & \mathcal{H}^p_t = \operatorname{GraphAttn}(\mathbf{h}^d_t, \mathcal{G}_p) = \\ & \operatorname{Softmax}\left(\frac{(\mathbf{h}^d_t W^Q) (\mathcal{G}_u W^K)}{\sqrt{d}}\right) (\mathcal{G}_p W^V), \end{aligned} \end{equation} where $\mathcal{G}_p = \{h_{p,1}^{(L)}, \dots, h_{p,L_r}^{(L)}\}$ is the set of graph node representations of thr product review graph, and $W^Q, W^K, W^V$ are all trainable parameters. We conduct the same $\operatorname{GraphAttn}$ operator using different parameters on customer review graph nodes $\mathcal{G}_u$, and obtain the output hidden states $\mathcal{H}^u_t$. Then, we combine the information from customer reviews $\mathcal{H}^u_t$ and product reviews $\mathcal{H}^p_t$ to obtain the hidden state for current decoding step: \begin{align} \mathcal{H}^{\prime}_t &= \operatorname{MLP}(\mathcal{H}^u_t + \mathcal{H}^p_t), \end{align} where $\mathcal{H}^{\prime}$ is the combined information from both graphs. Since the rating (sentiment) of the review can be seen as a high-level abstract of the review, the sentiment information can help the summarization module to capture the main idea of the review. Thus, we propose the \textbf{sentiment enhanced generation} module which incorporates sentiment classification representation $z$ (calculated in Equation~\ref{equ:senti-cls-repre}) into final summary generation: \begin{align} \delta = \text{Sigmoid}(W_{g1}\mathcal{H}^{\prime}_t + W_{g2}z + b_g), \label{equ:senti-aware-gen-gated}\\ \mathcal{H}_t = \mathrm{FFN}(\mathcal{H}^{\prime}_t) + \delta z, \quad P^w_t = \operatorname{MLP}(\mathcal{H}_t),\label{equ:senti-aware-gen-hidden} \end{align} where $W_{g1}, W_{g2}, b_g$ are all trainable parameters, $P^w_t$ is the predicted token distribution for decoding step $t$. The training objective is: \begin{align} \mathcal{L}_g = \textstyle\sum_{t=0}^{L_r}-\log P^w_t\left(y_{t}\right). \end{align} Finally, we combine the training objectives for each module as the final training objective: \begin{align} \mathcal{L} = \mathcal{L}_g + \mathcal{L}_s + \alpha\mathcal{L}_c, \end{align} where $\alpha$ is a hyper-parameter. The gradient descent method is employed to update all the parameters in our model to minimize this loss function. \section{Related Work}\label{sec:related} \subsection{Document Summarization} Document summarization aims to produce a short summary that covers the main idea of the input document. These methods can be classified into two categories: generative and extractive. Extractive summarization methods select several salient sentences from the input document as the summary, while abstractive summarization methods write the summary from scratch. In recent years, the pre-train language model (PLM)~\cite{Lewis2020BART,Devlin2019BERTPO,Liu2019RoBERTaAR} shows its great potential in language understanding and language generation tasks. Many researchers employ the PLM to obtain the contextualized sentence representation which can help the extractive summarization model achieve better performance. However, the extractive summarization methods usually produce a summary with redundant information and the summary is not coherent, since these methods simply concatenate several discontiguous sentences as a summary. The abstractive summarization methods, especially based on the large-scale PLM, can generate a more fluent and condensed summary than the extractive-based methods. From the experimental results on several benchmark document summarization datasets, we can find that the abstractive summarization methods outperform the extractive methods. In this paper, we focus on the review summarization task which usually needs to incorporate contextual information to produce a better summary (\eg product information, customer persona), and it should describe the popular product aspects. Thus, directly employing the document summarization methods on review cannot achieve good performance. \subsection{Review Summarization} Review summarization aims to produce a brief summary of the e-commerce product review. Early review summarization methods are mostly based on extractive methods, which directly extract phrases and sentences from the original review as the summary. \cite{Hu2004MiningAS} mine the features of the product from the customer review and identify whether the opinions are positive or negative. \cite{Xiong2014EmpiricalAO} propose an unsupervised extractive review summarization method that exploits review helpfulness ratings. For the abstractive methods, \cite{Chan2020A,Ma2018A} propose a multi-task framework to leverage the shared sentiment information in both review summarization and sentiment classification tasks. \cite{Liu2019Neural,Xu2021Transformer} propose the transformer-based reasoning framework for the personalized review summarization model, which first concatenates the historical reviews of customer and product and feeds into the reasoning layer. Although existing methods incorporate historical reviews, these methods simply concatenate all the reviews of customer and product and they cannot identify the different information from customer persona style and product aspects. And most of the existing methods ignore the rating information of the historical reviews. We compare the characteristics of several cutting-edge review summarization methods and our \model in Table~\ref{tab:xiaomi_comp}. \begin{table}[t] \begin{center} \caption{Characteristics of different methods. We not only model the heterogeneity of historical reviews, but also combines the advantages of existing methods.} \label{tab:xiaomi_comp} \resizebox{1\columnwidth}{!}{ \begin{tabular}{c|cccccc} \toprule & PGNet~\cite{See2017Get} & HSSC~\cite{Ma2018A} & DualView~\cite{Chan2020A} & TRNS~\cite{Xu2021Transformer} & \model \\ % \midrule Customer Reviews & \ding{55} & \ding{55} & \ding{55} & \ding{51} & \ding{51} \\ Product Reviews & \ding{55} & \ding{55} & \ding{55} & \ding{51} & \ding{51} \\ Heterogeneity Modeling & \ding{55} & \ding{55} & \ding{55} & \ding{55} & \ding{51} \\ Review Relation Modeling & \ding{55} & \ding{55} & \ding{55} & \ding{55} & \ding{51} \\ \midrule Sentiment Classification & \ding{55} & \ding{51} & \ding{51} & \ding{55} & \ding{51} \\ Historical Sentiment & \ding{55} & \ding{55} & \ding{55} & \ding{55} & \ding{51} \\ \bottomrule \end{tabular} } \end{center} \end{table}
{ "attr-fineweb-edu": 1.482422, "attr-cc_en_topic": 0, "domain": "arxiv" }
BkiUeNM5qWTA6f4qD5yW
\section{Discussion on Time Prediction Evaluation} \label{ap:dis_interval_evaluation} \section{Data Statistics} \label{ap:data_desp} \paragraph{Generation of WIKIDATA114k} We extracted a sport-centric subgraph from WIKIDADA432k. We first picked out statements where the relation \textit{memberOfSportsTeam} appears and obtained an entity set from those statements. Then we find all the statements that entities obtained from the previous step participate in as our initial subgraph. Finally, we ensure that each entity/relation is associated with at least 5 statements and the time period is restricted to [1883, 2023] for temporal statements, which encloses most of the temporal statements in the initial subgraph. This results in 1.7 million statements with 114k entities and 126 relations, and thus named as WIKIDATA114k. See Table ~\ref{tb:data_stat} for data statistics. \begin{table}[h] \begin{tabular}{llll} \hline \multicolumn{2}{l}{} & WIKIDATA12k & WIKIDATA114k \\ \hline \multicolumn{2}{l|}{\#entities} & 12,544 & 114,351 \\ \multicolumn{2}{l|}{\#relations} & 24 & 126 \\ \multicolumn{2}{l|}{time period} & {[}19, 2020{]} & {[}1883, 2023{]} \\ \hline \multicolumn{1}{l|}{\multirow{6}{*}{train}} & \multicolumn{1}{l|}{\#all} & 32,497 & 1,670,969 \\ \multicolumn{1}{l|}{} & \multicolumn{1}{l|}{\#time instant} & 14,099 & 175,637 \\ \multicolumn{1}{l|}{} & \multicolumn{1}{l|}{\#start time only} & 4,089 & 44,809 \\ \multicolumn{1}{l|}{} & \multicolumn{1}{l|}{\#end time only} & 1,273 & 2,164 \\ \multicolumn{1}{l|}{} & \multicolumn{1}{l|}{\#full time interval} & 13,035 & 402,135 \\ \multicolumn{1}{l|}{} & \multicolumn{1}{l|}{\#no time} & 0 & 1,046,224 \\ \hline \multicolumn{1}{l|}{\multirow{6}{*}{valid}} & \multicolumn{1}{l|}{\#all} & 4,051 & 11,720 \\ \multicolumn{1}{l|}{} & \multicolumn{1}{l|}{\#time instant} & 1,857 & 1,177 \\ \multicolumn{1}{l|}{} & \multicolumn{1}{l|}{\#start time only} & 322 & 342 \\ \multicolumn{1}{l|}{} & \multicolumn{1}{l|}{\#end time only} & 76 & 11 \\ \multicolumn{1}{l|}{} & \multicolumn{1}{l|}{\#full time interval} & 1,796 & 2,655 \\ \multicolumn{1}{l|}{} & \multicolumn{1}{l|}{\#no time} & 0 & 7,535 \\ \hline \multicolumn{1}{l|}{\multirow{6}{*}{\#test}} & \multicolumn{1}{l|}{\#all} & 4,043 & 11,854 \\ \multicolumn{1}{l|}{} & \multicolumn{1}{l|}{\#time instant} & 1,844 & 1,219 \\ \multicolumn{1}{l|}{} & \multicolumn{1}{l|}{\#start time only} & 324 & 306 \\ \multicolumn{1}{l|}{} & \multicolumn{1}{l|}{\#end time only} & 56 & 15 \\ \multicolumn{1}{l|}{} & \multicolumn{1}{l|}{\#full time interval} & 1,819 & 2,790 \\ \multicolumn{1}{l|}{} & \multicolumn{1}{l|}{\#no time} & 0 & 7,524 \\ \hline \end{tabular} \caption{Statistics of these datasets used.} \label{tb:data_stat} \end{table} \section{Hyperparameter Settings} We tune models by the MRR on the validation set. Grid search is performed over negative samples $k$ = [16, 32, 64, 128], learning rate $lr =$ [0.003, 0.002, 0.001], batch size $b =$ [1500, 2000, 2500, 3000, 3500]; dimension $d =$ [200, 300, 400], and weight for time smoothness regularizer $\beta =$ [0.0, 0.1, 0.001, 0.0001], as shown in Table~\ref{tb:hyperparamters}.\footnote{Experiments are terminated after 10000 steps.} We find that effects of different hyperparameters are minimal except for learning rate as the trained model usually converge to similar MRRs as long as they are trained thoroughly. We also observe that time smoothness regularizer is useful in learning time embeddings on WIKIDATA12k while failing to improve the model on WIKIDATA114k. This may be due to data sparsity with regard to time. As the time span of WIKIDATA114k is much smaller, time information is intensive and thus models are capable of learning temporal order between timestamps implicitly. \begin{table}[H] \centering \begin{tabular}{@{}l|llll|lll@{}} \toprule & \multicolumn{4}{l|}{\#negative samples} & \multicolumn{3}{l}{\# learning rate} \\ \midrule & 16 & 32 & 64 & 128 & 0.003 & 0.002 & 0.001 \\ \midrule MRR & 36.02 & 36.68 & 37.06 & 37.30 & 36.64 & 36.82 & 37.30 \\ MR & 97 & 100 & 98 & 101 & 126 & 103 & 101 \\ HITS@1 & 25.96 & 26.81 & 27.25 & 27.38 & 26.83 & 26.73 & 27.38 \\ \midrule & \multicolumn{4}{l|}{\#batch size} & \multicolumn{3}{l}{\# dimension} \\ \midrule & 2000 & 2500 & 3000 & 3500 & 200 & 300 & 400 \\ \midrule MRR & 36.71 & 36.87 & 37.30 & 36.78 & 36.12 & 36.88 & 37.30 \\ MR & 100 & 114 & 101 & 100 & 106 & 101 & 101 \\ HITS@1 & 26.59 & 26.88 & 27.38 & 26.77 & 25.98 & 26.88 & 27.38 \\ \bottomrule \end{tabular} \caption{Effects of hyper-parameters on WIKIDATA12k} \label{tb:hyperparamters} \end{table} \section{Experimental Setup} \label{sec:exp_setup} Upon inspection on implementations of TKBC models, we find there are two common issues. First, SOTAs only learn time embeddings for timestamps that appear in training set, which would be problematic at testing. For instance, suppose a sorted (ascending) list of timestamps occurring in training set is [1540, 1569, 1788, 1789, 1790], SOTAs only learn embeddings for these timestamps, while ignoring intermediate timestamps. As a result, they cannot answer queries when the associated time is not in the list, such as (s, r, ?o, \textit{1955}). This problem would be even worse regarding time interval generation. As when we need to grow a time point to a time interval by extending it to the left or the right, we may jump from one year to a year far away from it. For instance, from 1569 to 1540 (left) or 1788 (right). This is not reasonable and thus may severely affect the evaluation on time prediction. In order to address this issue, we enumerate all the time points in the time span of the training set with a fixed granularity (i.e., year) and use them for all models at training periods. The other issue is about the evaluation of link prediction task on time interval-based statements (including closed interval-based and left/right-open interval-based statements). In existing works, the evaluation boils down to assessing the correctness of answering a timestamp-based query by randomly picking one timestamp from a set of timestamps within the time interval and then measuring the performance on the newly generated query (i.e., the timestamp-based query). However, this is problematic. For closed interval-based samples, the evaluation results may vary from randomly sampled timestamps and thus may not be stable. For left/right-open interval-based statements, it is more severe. For instance, for a left-open interval-based test sample \textit{(Albert Einstein, educatedAt, ?o, [-, 1905])}, \citet{lacroix2019tensor} randomly pick a year before 1905, say 1000, and evaluate whether a model can output the correct answer (\textit{University of Zurich}) to the new query \textit{(Albert Einstein, educatedAt, ?o, 1000)}. Clearly, there is no correct answer at all since he was born in 1879. Therefore, the evaluation on such test samples may not be plausible. In order to address these issues, for a closed interval-based sample, we enumerate all the time points in the interval and do evaluation on each time point separately. Then we use the average performance over them as the overall evaluation. For the latter, we only consider the known endpoint in an interval, namely $(s, r, ?o, st)$ for right-open cases and $(s, r, ?o, et)$ for left-open cases. \section{Link Prediction Performance by types of validity information} \label{ap:link_prediction_by_types} Table~\ref{tb:eval_time_prediction_bytype} shows the comparison between different methods in terms of different types of validity information. \begin{table}[!htbp] \resizebox{0.5\textwidth}{!}{ \setlength\tabcolsep{1.5pt} \begin{tabular}{lllllllll} \hline Datasets & \multicolumn{8}{c}{WIKIDATA12k} \\ \hline Types & \multicolumn{2}{c|}{Time Interval (O)} & \multicolumn{2}{c|}{Time Interval (C)} & \multicolumn{2}{c|}{Time Instant} & \multicolumn{2}{c}{No Time} \\ \hline Methods & TIMEPLEX base & \multicolumn{1}{l|}{TIME2BOX} & TIMEPLEX base & \multicolumn{1}{l|}{TIME2BOX} & TIMEPLEX base & \multicolumn{1}{l|}{TIME2BOX} & TIMEPLEX base & TIME2BOX \\ \hline MRR & 46.74 & \multicolumn{1}{l|}{\textbf{51.48}} & 25.30 & \multicolumn{1}{l|}{\textbf{28.44}} & 41.11 & \multicolumn{1}{l|}{\textbf{43.13}} & \multicolumn{1}{c}{-} & \multicolumn{1}{c}{-} \\ MR & 203 & \multicolumn{1}{l|}{\textbf{68}} & 273 & \multicolumn{1}{l|}{\textbf{84}} & 350 & \multicolumn{1}{l|}{\textbf{125}} & \multicolumn{1}{c}{-} & \multicolumn{1}{c}{-} \\ HITS@1 & 19.21 & \multicolumn{1}{l|}{\textbf{41.05}} & 11.54 & \multicolumn{1}{l|}{\textbf{18.5}} & 32.6 & \multicolumn{1}{l|}{\textbf{33.30}} & \multicolumn{1}{c}{-} & \multicolumn{1}{c}{-} \\ \hline \multicolumn{1}{c}{Datasets} & \multicolumn{8}{c}{WIKIDATA114k} \\ \hline \multicolumn{1}{c}{Types} & \multicolumn{2}{c|}{Time Interval (O)} & \multicolumn{2}{c|}{Time Interval (C)} & \multicolumn{2}{c|}{Time Instant} & \multicolumn{2}{c}{No Time} \\ \hline Methods & TIMEPLEX & \multicolumn{1}{l|}{TIME2BOX} & TIMEPLEX & \multicolumn{1}{l|}{TIME2BOX} & TIMEPLEX & \multicolumn{1}{l|}{TIME2BOX} & TIMEPLEX & TIME2BOX \\ \hline MRR & \textbf{22.63} & \multicolumn{1}{l|}{22.43} & 17.72 & \multicolumn{1}{l|}{\textbf{18.85}} & 20.81 & \multicolumn{1}{l|}{\textbf{21.32}} & 67.85 & \textbf{68.40} \\ MR & 346 & \multicolumn{1}{l|}{\textbf{168}} & 155 & \multicolumn{1}{l|}{\textbf{147}} & \textbf{176} & \multicolumn{1}{l|}{\textbf{193}} & 430 & \textbf{172} \\ HITS@1 & 4.98 & \multicolumn{1}{l|}{\textbf{11.21}} & 3.94 & \multicolumn{1}{l|}{\textbf{8.35}} & 11.07 & \multicolumn{1}{l|}{\textbf{11.16}} & \textbf{61.52} & 60.30 \\ \hline \end{tabular} } \caption{Link prediction evaluation by types of validity information. Time Interval (O) denotes left/right-open interval-based statements, and Time Interval (C) refers to closed interval-based statements.} \label{tb:eval_time_prediction_bytype} \end{table} \section{Time Prediction Performance by duration length} Table~\ref{tb:time_prediction_duration_12k} and~\ref{tb:time_prediction_duration_114k} compare the performance of TIMEPLEX and TIME2BOX on the time prediction task across different duration lengths on two datasets. Test samples are first classified into three groups by duration (du) and then evaluate the performance of each group. For an interval $I$, $du = I_{max}-I_{min}+1$. It shows that our improvements are more pronounced in terms of shorter durations in general. \begin{table}[!htbp] \resizebox{0.5\textwidth}{!}{\setlength\tabcolsep{1.5pt}\begin{tabular}{|l|l|l|l|l|l|l|} \hline & \multicolumn{6}{c|}{WIKIDATA12k} \\ \hline Duration (du) & \multicolumn{2}{c|}{du=1} & \multicolumn{2}{c|}{1\textless{}du\textless{}=5} & \multicolumn{2}{c|}{du\textgreater{}5} \\ \hline Method & TIMEPLEX base & TIME2BOX & TIMEPLEX base & TIME2BOX & TIMEPLEX base & TIME2BOX \\ \hline gIOU@1 & 30.29 & \textbf{38.09} & 39.51 & \textbf{43.68} & \textbf{47.4} & 46.99 \\ \hline aeIOU@1 & 20.84 & \textbf{28.34} & 15.86 & \textbf{22.95} & \textbf{18.23} & 13.20 \\ \hline gaeIOU@1 & 12.47 & \textbf{18.62} & 11.73 & \textbf{16.34} & \textbf{16.85} & 11.20 \\ \hline \end{tabular}} \caption{Time prediction by duration on WIKIDATA12k} \label{tb:time_prediction_duration_12k} \end{table} \begin{table}[!htbp] \resizebox{0.5\textwidth}{!}{\setlength\tabcolsep{1.5pt}\begin{tabular}{|l|l|l|l|l|l|l|} \hline & \multicolumn{6}{c|}{WIKIDATA114k} \\ \hline Duration (du) & \multicolumn{2}{c|}{du=1} & \multicolumn{2}{c|}{1\textless{}du\textless{}=5} & \multicolumn{2}{c|}{du\textgreater{}5} \\ \hline Method & TIMEPLEX base & TIME2BOX & TIMEPLEX base & TIME2BOX & TIMEPLEX base & TIME2BOX \\ \hline gIOU@1 & 28.75 & \textbf{37.03} & 29.77 & \textbf{38.36} & 27.99 & \textbf{39.07} \\ \hline aeIOU@1 & 25.80 & \textbf{34.16} & 16.52 & \textbf{21.54} & 7.09 & \textbf{9.94} \\ \hline gaeIOU@1 & 14.69 & \textbf{21.08} & 10.50 & \textbf{14.50} & 3.85 & \textbf{7.02} \\ \hline \end{tabular}} \caption{Time prediction by duration on WIKIDATA114k} \label{tb:time_prediction_duration_114k} \end{table} \begin{comment} \begin{table*}[] \resizebox{0.6\textwidth}{!}{\begin{tabular}{|c|c|c|} \hline relation & \# tail entities & box size \\ \hline capital of & 2.18 & 64.31 \\ \hline located in the administrative territorial entity & 2.24 & 79.46 \\ \hline title of chess person & 2.00 & 85.62 \\ \hline IMA status and/or rank & 2.06 & 65.86 \\ \hline academic degree & 1.38 & 102.24 \\ \hline twinned administrative body & 3.31 & 122.09 \\ \hline country & 3.84 & 74.07 \\ \hline head of government & 3.32 & 79.56 \\ \hline spouse & 1.08 & 89.23 \\ \hline member of & 2.07 & 123.65 \\ \hline employer & 1.47 & 90.53 \\ \hline member of political party & 1.32 & 74.12 \\ \hline country of citizenship & 1.62 & 83.25 \\ \hline heritage designation & 1.91 & 88.19 \\ \hline position held & 2.84 & 95.14 \\ \hline instance of & 1.86 & 83.69 \\ \hline winner & 14.19 & 199.11 \\ \hline award received & 2.94 & 188.30 \\ \hline contains administrative territorial entity & 7.47 & 92.38 \\ \hline residence & 1.63 & 87.49 \\ \hline nominated for & 4.89 & 167.18 \\ \hline significant event & 1.95 & 96.70 \\ \hline member of sports team & 7.45 & 163.55 \\ \hline educated at & 1.37 & 93.56 \\ \hline \end{tabular}} \caption{Learned Box sizes by relation types. Box sizes are calculated by L1.} \label{tb:relation_box_size} \end{table*} \end{comment} \section{Model Parameter Comparison} Table~\ref{tb:num_model_params} summarizes the number of parameters used in each method. \begin{table}[H] \centering \begin{tabular}{|l|l|} \hline \textbf{Models} & \textbf{Number of parameters} \\ \hline TNTComplex & 2d($|E|$ + $|T|$ + 4$|R|$) \\ \hline TIMEPLEX base & 2d($|E|$ + $|T|$ + 6$|R|$) \\ \hline TIME2BOX & d($|E|$ + 2$|T|$ + 2$|R|$) + $4d^2$ \\ \hline \end{tabular} \caption{Number of parameters for each model} \label{tb:num_model_params} \end{table} \end{appendix} \section{Conclusion} \label{sec:conclusion} In this work, we presented a box-based temporal knowledge graph (TKBC) completion framework (called TIME2BOX) to represent and model statements with different types of validity information (i.e., no time, known start time, known end time, instant, both start and end time) in a vector space. We argued that a TKBC problem can be solved in two steps. First by solving an atemporal KBC problem and then narrowing down the correct answer sets that are only true at the time of interest. Therefore, we introduced time-agnostic boxes to model sets of answers obtained from KBC models. Time-aware boxes are used as a filter to pick out time-dependent answers. TIME2BOX outperforms existing TKBC methods in both link prediction and time prediction on two datasets - WIKIDATA12k and WIKIDATA114K. By investigating the model performance on statements with different types of validity information, we found that the improvement of TIME2BOX largely attributes to its better ability to handle statements with interval-based validity information. In the future, we will explore how to incorporate spatial scopes of statements into KGE models, such that KBC can benefit from both spatial and temporal scopes of statements. \section{Evaluation metrics IN Time prediction} \label{sec:eval_metrics} \paragraph{Time Interval Evaluation} $gIOU$~\cite{rezatofighi2019generalized} and $aeIOU$~\citep{jain-etal-2020-temporal} are two evaluation metrics recently adopted in time interval prediction. Both are built on \textit{Intersection Over Union} that is commonly used for bounding box evaluation in Computer Vision. The idea of $gIOU$ is to compare the intersection between a predicted interval and a gold interval against the maximal extent that the two intervals may expand. It can be formulated as below: \begin{equation} \begin{split} gIOU(I^{gold}, I^{pred}) = \frac{D(I^{gold} \bigcap I^{pred})}{D(I^{gold} \bigcup I^{pred})} - \\ \frac{D(I^{gold} \biguplus I^{pred}\setminus (I^{gold} \bigcup I^{pred}))}{D(I^{gold} \biguplus I^{pred})} \in (-1, 1] \end{split} \end{equation} \sloppy where $I^{gold} \bigcap I^{pred}$ is the overlapping part of two intervals, $I^{gold} \biguplus I^{pred}$ denotes the shortest contiguous interval (hull) that contains both $I^{gold}$ and $I^{pred}$. As shown in Fig.~\ref{fig:interval_eval}, if $I^{gold}=[2011, 2016]$ and $I^{pred}=[2009, 2013]$, then $I^{gold} \bigcap I^{pred}=[2011, 2013]$ and $I^{gold} \biguplus I^{pred}=[2009, 2016]$. $D(I) = I_{max}-I_{min}+1$ is the number of time points at a certain granularity (e.g., year in this paper) during the time interval $I$. Compared to $gIOU$, affinity enhanced IOU, denoted as $aeIOU$, provides a better evaluation in case of non-overlapping intervals and outputs scores in $[0, 1]$. It can be written as follow: \begin{equation} aeIOU(I^{gold}, I^{pred}) = \begin{cases} \frac{D(I^{gold} \bigcap I^{pred})}{D(I^{gold} \biguplus I^{pred})} & D(I^{gold} \bigcap I^{pred}) > 0 \\ \frac{1}{D(I^{gold} \biguplus I^{pred})} & otherwise \end{cases} \label{eq:aeIOU} \end{equation} However, we notice that $aeIOU$ cannot tell some cases apart. As illustrated in Fig.~\ref{fig:interval_eval}, $aeIOU$ results in the same scores for \textcircled{5}, \textcircled{6}, and \textcircled{7} when compared to the gold interval$[2011,2016]$. Intuitively one would assume that \textcircled{7} is better than the others and \textcircled{6} is the least desirable. The former has a one-year intersection between \textcircled{7} and the gold. For the latter, the gap between \textcircled{5} and the gold is smaller than that between \textcircled{6} and the gold, despite the fact that neither \textcircled{5} and \textcircled{6} overlaps with the gold. Its failure lies in that it does not consider the gap between the gold and the predicted interval in case of no overlap. \begin{figure}[] \centering \includegraphics[width=0.5\textwidth]{./figs/figures-interval_metrics.pdf} \caption{Evaluation Comparison between aeIOU and gaeIOU on different predicted intervals. Suppose a gold interval is [2011, 2016], seven possible predicted intervals are represented as rectangles in black. Intersections between the predicted and the gold are in pink and gaps are in orange if no overlap exists. Notably, gaeIOU is able to distinguish these predictions while aeIOU fails to do so. } \label{fig:interval_eval} \vspace{-0.5cm} \end{figure} In the following, we take both the hull and the intersection/gap between a gold interval and a predicted interval into the design of the metric. The intuition is that when the size of the hull remains the same, the metric score of a predicted interval \textit{decreases} with a larger gap to the gold in case of no overlap and \textit{increases} with a larger intersection. $aeIOU$ is therefore generalized to $gaeIOU$ as below: \begin{equation} gaeIOU(I^{gold}, I^{pred}) = \begin{cases} \frac{D(I^{gold} \bigcap I^{pred})}{D(I^{gold} \biguplus I^{pred})} & D(I^{gold} \bigcap I^{pred}) > 0 \\ \\ \frac{D^{\prime}(I^{gold}, ~I^{pred})^{-1}}{D(I^{gold} \biguplus I^{pred})} & otherwise \end{cases} \label{eq:aeIOU} \end{equation} where $D^{\prime}(I^{gold}, ~I^{pred}) = max(I^{gold}_{min},I^{pred}_{min})-min(I^{gold}_{max},I^{pred}_{max})+1$ is the length of the gap. Accordingly, the Property ($P$) that a good evaluation metric must satisfy can be rewritten as: if predicted intervals (partially) overlap with the gold interval with the same size, then the prediction having a smaller hull with the gold interval should be awarded more by $M$; if there is no overlap, the prediction that has a smaller hull and a narrower gap with the gold should be scored higher by $M$. It can be formalized as below: \sloppy \textbf{Property P}: In case of $D(I^{gold} \bigcap I^{pred1}) = D(I^{gold} \bigcap I^{pred2}) \neq 0$, $M(I^{gold}, I^{pred1})>M(I^{gold}, I^{pred2})$ if and only if $D(I^{gold}\bigcup I^{pred1})<D(I^{gold} \bigcup I^{pred2})$. \\ \sloppy In case of non-overlapping, $M(I^{gold}, I^{pred1}) > M(I^{gold}, I^{pred2})$ if and only if $D(I^{gold} \bigcup I^{pred1}) \cdot D^{\prime}(I^{gold} \bigcap I^{pred1}) > D(I^{gold} \bigcup I^{pred2}) \cdot D^{\prime}(I^{gold} \bigcap I^{pred2}))$. It follows that $gaeIOU$ satisfies Property P, whereas $aeIOU$ does not satisfy it; see Fig.~\ref{fig:interval_eval}. \section{Experiment} \label{sec:experiment} Our goal here is to evaluate TIME2BOX in both link prediction and time prediction tasks. For a test sample $(s, r, ?o, t^{*})$, we replace $?o$ with each entity $o^\prime \in E$ and use $log\sigma(\gamma-D(\mathbf{o^\prime}, \mathbf{b}_{inter}))$, a variation of the inverse distance used in Eq.~\ref{eq:loss}, as scores for link prediction. Entities that have higher scores are more likely to form new links. Likewise, in terms of time prediction, for a query $(s, r, o, ?I)$, we first replace $I$ with each timestamp $t \in T$ and calculate its score. Then we use the greedily coalescing method proposed in~\citet{jain-etal-2020-temporal} to generate time intervals as predictions. \subsection{Datasets} We report experimental results using two TKBC datasets, which both are rooted in Wikidata. WIKIDATA12k is a widely used benchmark dataset in TKBC where each statement is associated with a time ``interval''~\citep{dasgupta2018hyte}. Such an ``interval'', in fact, could be a time instant, where start time and end time are the same, a left/right-open interval, or a closed interval. Note that this dataset excludes statements that do not have known temporal scopes in Wikidata, although they may be time-dependent and useful in TKBC, as discussed in Section~\ref{sec:intro}. The other dataset is a subset of WIKIDATA432k proposed by ~\citet{lacroix2019tensor}, which is the only TKB dataset where the start time, end time, or both of a statement can remain unspecified. Although this dataset is more appropriate for a TKBC problem, there are two limitations. First, it poses a computational burden as it contains 432k entities and 407 relations, consisting of 7M tuples in the training set. Second, there are several mistakes in the temporal information. For instance, 2014 was written as 2401. We extract a subgraph, named as WIKIDATA114k, and correct temporal information by checking it against Wikidata. More details about data pre-processing and statistics are in Appendix A (All Appendices are available online\footnote{\href{https://github.com/ling-cai/Temporal-KG/blob/28db101fa09853f761547f99d57f0400189bc413/Paper/Time2Box_Appendix.pdf}{Link to online Appendices.}}). Since our focus is on generic knowledge bases, we do not consider event-based datasets, such as ICEWS14 and ICEWS05-15, in which each statement is associated with a timestamp. \subsection{Baselines and Model Variants} In the following experiments, we regard TIME2BOX-TE as our main model, in which both the relation operator and the time projector are instantiated as an element-wise addition. It is denoted as TIME2BOX in resulting tables. We compare it against two SOTAs in TKBC: TNT-Complex and TIMEPLEX base model by using the implementation in~\citet{jain-etal-2020-temporal}, both of which are based on the time-agnostic KBC model: ComplEx~\citep{trouillon2016complex}. In addition to comparison with existing SOTAs, we also conduct an ablation study, in which several variants of the proposed model are compared: (1) TIME2BOX-SI, short for Sample Interval: for a closed interval-based statement, this variant randomly samples a sub time interval from a given interval at each training step and train it as shown in Fig.~\ref{fig:interval_vs}. (2) TIME2BOX-TR: previous works in TKBC often explicitly fused relations with time information to obtain time-aware relations and empirically demonstrated its effectiveness~\citep{jain-etal-2020-temporal,lacroix2019tensor, garcia-duran-etal-2018-learning}. We also explicitly model the association between relations and time as a new point $p_{rt}=\mathbf{r}+\mathbf{t}$ in the vector space and incorporate it into Eq.~\ref{eq:cent_inter} to help locate the intersection box. (3)TIME2BOX-DM: this variant implements the relation and time projectors as an element-wise product in real space as DistMult does. (4) TIME2BOX-TNS: this variant is used to test the effect of time negative samples, in which we replace a number of entity negative samples with time negative samples, as introduced in Section~\ref{secsec:time_negative}. All these models are trained on statements in training set and evaluated by answering queries where either the object or the time information is missing. Hyper-parameter settings are introduced in Appendix B and comparison of parameters used in different models is summarized in Table 11 in Appendix F. Moreover, we notice there are several limitations in current experimental setups of SOTAs and we detail them in Appendix C. \subsection{Main Results} \begin{table}[h] \resizebox{0.5\textwidth}{!}{\setlength\tabcolsep{1.5pt}\begin{tabular}{lcccc|cccc} \hline Datasets & \multicolumn{4}{c|}{WIKIDATA12k} & \multicolumn{4}{c}{WIKIDATA114k} \\ \hline Metrics & MRR & MR & HITS@1 & HITS@10 & MRR & MR & HITS@1 & HITS@10 \\ \hline TNT-Complex & 31.77 & 415 & 19.24 & 51.74 & 49.25 & 638 & 41.02 & 66.99 \\ TIMEPLEX base & 34.55 & 302 & 21.91 & 53.25 & 49.99 & 337 & 41.25 & 66.10 \\ \hline TIME2BOX-TR & 34.99 & 102 & 24.79 & 56.32 & \multicolumn{1}{l}{50.25} & 85 & 41.73 & 67.13 \\ TIME2BOX-DM & 35.90 & 139 & 25.52 & 56.74 & \multicolumn{1}{l}{48.84} & \multicolumn{1}{l}{284} & 41.09 & 64.33 \\ TIME2BOX-SI & 36.79 & \textbf{100} & 27.16 & 56.43 & \multicolumn{1}{l}{50.42} & \multicolumn{1}{l}{\textbf{139}} & 41.65 & 67.58 \\ TIME2BOX-TNS & 37.25 & \textbf{100} & \textbf{27.41} & 57.31 & \multicolumn{1}{l}{\textbf{50.55}} & \multicolumn{1}{l}{185} & \textbf{41.77} & 67.78 \\ TIME2BOX & \textbf{37.30} & 101 & 27.38 & \textbf{57.36} & 50.49 & 168 & 41.69 & \textbf{67.91} \\ \hline \end{tabular} \vspace{-1.0cm} } \caption{Link prediction evaluation across two datasets.} \label{tb:eval_link_prediction} \vspace{-0.8cm} \end{table} \paragraph{Link Prediction Task} We report main results of link prediction in Table~\ref{tb:eval_link_prediction}. TIME2BOX and all its variants consistently outperform or are on a par with the performance on SOTAs in terms of MRR, MR, HITS@1 and HITS@10. On WIKIDATA12k, TIME2BOX outperforms TIMEPLEX base by around 3 points in terms of MRR and over 5 points in HITS@1. On WIKIDATA114k, TIME2BOX is slightly better than two SOTAs in general for MRR, HITS@1 and HITS@10. In addition, we notice that TIME2BOX beats SOTAs by large margins in time interval-based link prediction, as shown in Table 8 in Appendix D. Our method improves around 20 and 7 HITS@1 points in terms of half-open interval-based link prediction and closed interval-based link prediction on WIKIDATA12k, respectively. On WIKIDATA114k TIME2BOX improves around 6 and 4 HITS@1 points, respectively. Another critical observation in Table~\ref{tb:eval_link_prediction} is the substantial improvements of using TIME2BOX in terms of MR on both datasets. TIME2BOX returns an MR of 100 and 139 on WIKIDATA12k and WIKIDATA114k, respectively and TIMEPLEX base obtains 302 and 337 for MR on both datasets. It indicates that TIME2BOX is capable of giving a fair rank for a gold answer to any test query on average. This is likely because of the idea of using boxes to constraint the potential answer set. As a time-agnostic box is optimized towards embracing entities satisfying atemporal queries of the form ($s, r, ?o$) in the learning process, boxes implicitly manage to learn common characteristics of the satisfied $?o$. Therefore, TIME2BOX is less likely to output extremely bad predictions. Examples in Section~\ref{subsec:quality_study} exemplify this hypothesis. \paragraph{Time Prediction Task} Table~\ref{tb:eval_time_prediction_12k} and Table~\ref{tb:eval_time_prediction_114k} summarizes the results for two datasets. On both datasets, TIME2BOX and its variants consistently outperform SOTAs by significant margins. Specifically, TIME2BOX improves over TIMEPLE by about 5.56, 7.25, and 4.87 points with respect to gIOU@1, aeIOU@1, and gaeIOU@1, respectively, on WIKIDATA12k. As for WIKIDATA114k, despite subtle improvements in link prediction, the advancement of TIME2BOX is more pronounced in time prediction, which shows that it gains 8.7, 5.87, and 4.66 points on gIOU@1, aeIOU@1, and gaeIOU@1, respectively. Furthermore, the improvements on gaeIOU@10 are much more notable with gains of 15.81 and 11.07 points on the two datasets, respectively. \begin{table}[h] \resizebox{0.5\textwidth}{!}{ \setlength\tabcolsep{1.5pt} \begin{tabular}{|lcc|cc|cc|} \hline Datasets & \multicolumn{6}{c|}{WIKIDATA12k} \\ \hline Metrics & gIOU@1 & gIOU@10 & aeIOU@1 & aeIOU@10 & gaeIOU@1 & gaeIOU@10 \\ \hline TNT-Complex & 31.44 & 55.18 & 18.86 & 40.94 & 11..01 & 29.51 \\ TIMEPLEX base & 35.63 & 60.86 & 18.60 & 37.75 & 12.61 & 32.63 \\ \hline TIME2BOX-TR & 39.63 & 67.83 & 23.47 & 44.64 & 15.87 & 41.53 \\ TIME2BOX-DM & 38.78 & 62.44 & 21.91 & 41.55 & 14.94 & 37.14 \\ TIME2BOX-SI & 39.68 & 65.30 & 23.66 & 42.16 & 16.09 & 38.54 \\ TIME2BOX-TNS & \textbf{42.30} & \textbf{70.16} & \textbf{25.78} & \textbf{50.04} & \textbf{17.41} & \textbf{47.54} \\ TIME2BOX & 41.20 & 68.53 & 24.70 & 46.05 & 16.98 & 43.08 \\ \hline \end{tabular} } \caption{Time prediction evaluation on WIKIDATA12k.} \label{tb:eval_time_prediction_12k} \vspace{-1.0cm} \end{table} \begin{table}[h] \resizebox{.5\textwidth}{!}{ \setlength\tabcolsep{1.5pt} \begin{tabular}{|lcc|cc|cc|} \hline Datasets & \multicolumn{6}{c|}{WIKIDATA114k} \\ \hline Metrics & \multicolumn{1}{l}{gIOU@1} & \multicolumn{1}{l|}{gIOU@10} & \multicolumn{1}{l}{aeIOU@1} & \multicolumn{1}{l|}{aeIOU@10} & \multicolumn{1}{l}{gaeIOU@1} & \multicolumn{1}{l|}{gaeIOU@10} \\ \hline TNT-Complex & 27.94 & 48.31 & 16.18 & 35.32 & 7.31 & 23.68 \\ TIMEPLEX base & 29.31 & 57.68 & 18.56 & 36.70 & 12.53 & 32.47 \\ \hline TIME2BOX-TR & 37.49 & 67.95 & 25.05 & 49.02 & 15.41 & 45.72 \\ TIME2BOX-DM & 35.88 & 66.62 & 24.33 & 48.03 & 14.89 & 44.48 \\ TIME2BOX-SI & 34.02 & 62.89 & 23.10 & 44.74 & 14.07 & 40.05 \\ TIME2BOX-TNS & 37.31 & 66.91 & 25.07 & 48.18 & 15.57 & 44.66 \\ TIME2BOX & \textbf{38.01} & \textbf{71.29} & \textbf{24.42} & \textbf{50.07} & \textbf{15.88} & \textbf{47.77} \\ \hline \end{tabular} } \caption{Time prediction evaluation on WIKIDATA114k.} \label{tb:eval_time_prediction_114k} \vspace{-0.8cm} \end{table} \subsection{Qualitative Study} \label{subsec:quality_study} Table~\ref{tb:timestamped_query_example} showcases examples of timestamp-based link prediction on WIKIDATA12k. The comparison between TIMEPLEX base and TIME2BOX reveals that TIME2BOX is able to learn common characteristics of entities by adopting boxes. For instance, the predicted top 10 returned by TIME2BOX are possible affiliations (e.g., institutes, colleges, universities) in the first query and are countries in the second query. By contrast, TIMEPLEX base returns a mixture of entities with distinct classes for both queries. Furthermore, Table~\ref{tb:timeinterval_query_example} shows an example of time interval-based link prediction, in which TIME2BOX is able to consistently output correct predictions across time and precisely discern the changes of objects over time (i.e., the correct answer shifts from Russian Empire to Ukrainian People's Republic in 1916), while TIMEPLEX base fails. This can be attributed to the ability of TIME2BOX to capture the order of timestamps and the idea of temporal boxes as a constraint over potential answer entities. Hence, answer entities that are true in two consecutive years can be enclosed in the intersection of temporal boxes. \begin{table}[] \resizebox{0.55\textwidth}{!}{\setlength\tabcolsep{1.5pt}\begin{tabular}{|c|l|} \hline \multicolumn{2}{|c|}{\textit{\textbf{Query Example 1: (Yury Vasilyevich Malyshev, educatedAt, ?o, 1977)}}} \\ \hline TIMEPLEX base & \multicolumn{1}{c|}{TIME2BOX} \\ \hline \multicolumn{1}{|l|}{\begin{tabular}[c]{@{}l@{}}1. Bauman Moscow State Technical University,\\ 2. Gold Star,\\ 3. Communist Party of the Soviet Union,\\ 4. Order of Lenin,\\ 5. S.P. Korolev Rocket and Space Corporation Energia,\\ 6. Hero of the Soviet Union,\\ 7. \underline{\textbf{Gagarin Air Force Academy,}}\\ 8. Balashov Higher Military Aviation School of Pilots,\\ 9. Ashok Chakra,\\ 10. Heidelberg University\end{tabular}} & \begin{tabular}[c]{@{}l@{}}1. Bauman Moscow State Technical University,\\ \underline{\textbf{2. Gagarin Air Force Academy,}}\\ 3. S.P. Korolev Rocket and Space Corporation Energia,\\ 4. Saint Petersburg State Polytechnical University,\\ 5. University of Oxford,\\ 6. Saint Petersburg State University,\\ 7. Steklov Institute of Mathematics,\\ 8. Leipzig University,\\ 9. Heidelberg University,\\ 10. Moscow Conservatory\end{tabular} \\ \hline \multicolumn{2}{|c|}{\textit{\textbf{Query Example 2: (Pedro Pablo Kuczynski, countryOfCitizenship,?o, 2015)}}}\\ \hline TIMEPLEX base & \multicolumn{1}{c|}{TIME2BOX} \\ \hline \multicolumn{1}{|l|}{\begin{tabular}[c]{@{}l@{}}1. doctor honoris causa,\\ 2. President of Peru,\\ 3. Minister of Economy and Finance of Peru,\\ 4. Grand Cross of the Order of the Sun of Peru,\\ 5. President of the Council of Ministers of Peru,\\ 6. World Bank,\\ 7. Serbia,\\ 8. Royal Spanish Academy,\\ 9. Meurthe-et-Moselle,\\ 10. Norwegian Sportsperson of the Year\end{tabular}} & \begin{tabular}[c]{@{}l@{}}1. France,\\ 2. Germany,\\ \underline{\textbf{3. United States of America,}}\\ 4. Austria,\\ 5. Romania,\\ 6. United Kingdom,\\ 7. Poland,\\ 8. Kingdom of Italy,\\ 9. Russian Soviet Federative Socialist Republic,\\ 10. Russian Empire\end{tabular} \\ \hline \end{tabular}} \caption{Examples of timestamp-based link prediction on WIKIDATA12k. Top 10 entities predicted by TIMEPLEX base and TIME2BOX are numbered, where 1 denotes Top One. Correct answers are in bold.} \label{tb:timestamped_query_example} \vspace{-0.8cm} \end{table} \begin{table}[] \resizebox{0.5\textwidth}{!}{\setlength\tabcolsep{1.5pt}\begin{tabular}{|l|l|l|} \hline & \multicolumn{2}{l|}{Query Example: (Kyiv, country, ?o, {[}1905, 1919{]})} \\ \hline & \multicolumn{2}{l|}{\begin{tabular}[c]{@{}l@{}}Gold Answers: (1){[}1905, 1916{]}-\textgreater Russian Empire; (2){[}1917, 1919{]}-\textgreater{}Ukrainian People's Republic\end{tabular}} \\ \hline year & Timplex & TIME2BOX \\ \hline 1905 & Soviet Union & Russian Empire \\ \hline 1906 & Soviet Union & Russian Empire \\ \hline 1907 & Russian Empire & Russian Empire \\ \hline 1908 & Soviet Union & Russian Empire \\ \hline 1909 & Ukrainian Soviet Socialist Republic & Russian Empire \\ \hline 1910 & Soviet Union & Russian Empire \\ \hline 1911 & Russian Empire & Russian Empire \\ \hline 1912 & Ukrainian Soviet Socialist Republic & Russian Empire \\ \hline 1913 & Soviet Union & Russian Empire \\ \hline 1914 & Ukrainian Soviet Socialist Republic & Russian Empire \\ \hline 1915 & Soviet Union & Russian Empire \\ \hline 1916 & Ukrainian People's Republic & Russian Empire \\ \hline 1917 & Ukrainian People's Republic & Ukrainian People's Republic \\ \hline 1918 & Ukrainian People's Republic & Ukrainian People's Republic \\ \hline 1919 & Ukrainian People's Republic & Ukrainian People's Republic \\ \hline \end{tabular}} \caption{An example of interval-based link prediction on WIKIDATA12k. For time interval-based link prediction, the current strategy is to discretize intervals to timestamps and average ranks for each timestamp-based prediction result as the final evaluation. Only top 1 predictions are shown here.} \label{tb:timeinterval_query_example} \vspace{-1.05 cm} \end{table} \subsection{Model Variation Study} In this section, we report on observations of results about different model variations, which are shown in Table~\ref{tb:eval_link_prediction},~\ref{tb:eval_time_prediction_12k} and ~\ref{tb:eval_time_prediction_114k}. Compared to TIME2BOX-DM, which adopts element-wise product as operators, element-wise addition projectors (TIME2BOX) perform better in link prediction and time prediction on both datasets. Moreover, we observe that explicitly modeling association between time and relation (i.e., TIME2BOX-TR) does not significantly improve the performance of TIME2BOX framework, although it speeds up the convergence at training, indicating that intersection operators are good enough to learn the association between time and relations implicitly. As for different time-aware strategies incorporated in TIME2BOX-SI and TIME2BOX-TNS, we find that on both datasets TIME2BOX-SI does not outperform TIME2BOX, indicating that using one sample strategy (i.e., Fig.~\ref{fig:instant_vs}) is better at modeling time interval-based statements in TIME2BOX. In addition, we find that by incorporating time negative samples, the performance on time prediction can be further improved on WIKIDATA12k, although TIME2BOX-TNS is not superior to TIME2BOX in link prediction. \section{Introduction} \label{sec:intro} A knowledge base (KB) such as Wikidata and DBpedia stores statements about the world around us. A KB is typically represented as a set of triples in the form of $(s, r, o)$ -- short for \textit{(subject, relation, object)}, encoding the association between entities and relations among them. A statement is often temporally scoped, which indicates during which time period it is valid. Two examples are (\textit{Albert Einstein, educatedAt, ETH Zurich, 1896 - 1900}) and (\textit{Albert Einstein, academicDegree, Doctor of Philosophy in Physics, 1906}). The former specifies the time period during which Albert Einstein studied at ETH, and the latter points out the specific date when he obtained his degree. Graphs that contain a substantial amount of such time-aware statements are often called temporal knowledge base (TKB) in the machine learning literature. Each statement in a TKB is associated with a validity time as (\textit{s, r, o, $t^*$})\footnote{$t^{*}$ could be a time instant or time interval}. Due to the ever-changing state of the world and missing data, TKBs usually contain inaccurate and incomplete information similar to KBs. The sparsity of TKBs necessitates temporal knowledge base completion (TKBC), namely inferring missing statements from known statements. Temporal link prediction task is proposed to evaluate a TKBC model by testing its performance on answering incomplete temporal queries of the form (\textit{s, r, ?o, $t^*$}) or (\textit{?s, r, o, $t^*$}). Despite recent success stories on time-agnostic KBC, research on TKBC is still in its early age and is facing new challenges. The validity time period of a statement is often missing in a KB. As a result, it is difficult to distinguish whether statements in a KB are atemporal (e.g., (\textit{Albert Einstein,~instanceOf,~Human})) or time-dependent (e.g., (\textit{United States of America, instanceOf, Historical Unrecognized State\footnote{According to Wikidata that statement holds true during 1776-1784.}})). This leads to the question of which statements should be part of a TKB in the first place. Prior works restrict TKBs to a collection of statements where the validity time period for each statement \textit{must} be available. However, in WIKIDATA114k, a dataset from Wikidata, for instance, 85.1\% of all statements are temporal, 56.2\% of the temporal statements are missing their validity time information and are excluded in previous studies while only 247,393 out of 1,660,824 statements (i.e., 14.9\%) are truly atempora \footnote{For all the statements, we first categorize predicates into two groups -- atemporal predicates and temporal predicates. If a predicate has ever been involved in a statement that has temporal scoping, it belongs to temporal predicates; otherwise, it is an atemporal predicate. Atemporal statements are those associated with atemporal predicates.}. As the number of temporal statements with missing validity information is substantial, excluding them from a TKB will significantly reduce the amount of information that could be useful in TKB studies. Retaining these temporally scoped statements leads to several challenges that need to be addressed. For instance, how to design a TKBC model to handle statements with and without known temporal scoping from the data representation perspective and model design perspective? Clearly, the conventional representation in prior TKBC in the form of (\textit{s, r, o, t})\footnote{t denotes a time point} falls short. An ideal TKBC model should be more \textit{flexible} to address cases when the validity information of different types (i.e., point in time, right-open interval (known start time), left-open interval (known end time), closed interval) is presented in a TKB or even no validity information is available for a statement. The second challenge is how to predict the temporal scope of a statement as it is often missing in TKBs. This task is referred to as time interval prediction, which amounts to answering incomplete queries of the form (\textit{s, r, o, $?I$}). How to generate a predicted time interval and evaluate it require further investigation. This problem has only been addressed very recently by ~\citet{jain-etal-2020-temporal}. However, at times their evaluation protocols fail to distinguish one predicted interval from another since they do not consider the gap between the predicted and the gold interval in case of no overlap. For instance, the same metric scores are assigned to two predictions [1998, 1999] and [1998, 2010] when a gold interval [2011, 2020] is considered. In this paper, we present a novel TKBC embedding framework, called TIME2BOX, which relies on the intuition that the answer set in a temporal query $(s, r, ?o, t*)$ is always a subset of answers of its time-agnostic counterpart $(s, r, ?o)$. As illustrated in Fig.~\ref{fig:time_reasoning_process}, there are four correct answer entities to a query (\textit{Albert Einstein, employer, ?o}). However, when time information is specified (e.g., \textit{Albert Einstein, employer, ?o, 1933}) as shown in Fig. \ref{fig:instant_vs}, the number of positive answers becomes three. With more temporal information being available (e.g., \textit{Albert Einstein, employer, ?o, [1933, 1955]}), the answer set shrinks further (see Fig.~\ref{fig:interval_vs}). Therefore, we propose to model a statement in a TKB by imitating the process of answering its corresponding temporal query$(s, r, ?o, t^{*})$, which can be achieved in two steps -- finding answer entities to its atemporal counterpart (\textit{s, r, ?o}) by using KBC methods and then picking out entities to be true to the temporal query from preceding answers by including time. We implement this idea by using box embeddings~\citep{vilnis2018probabilistic, patel2020representing}, especially inspired by QUERY2BOX~\citep{ren2019query2box}, which is originally used for answering conjunctive queries~\citep{mai2019contextual}. Boxes, as containers, can naturally model a set of answers they enclose. The filtering functionality of time can be naturally modeled as intersections over boxes similarly to Venn diagrams~\citep{venn1880diagrammatic}. Meanwhile, performing an intersection operation over boxes would still result in boxes, thus making it possible to design a unified framework to deal with statements/queries of different types. \begin{figure*} \centering \begin{subfigure}[b]{0.33\textwidth} \centering \includegraphics[width=\textwidth]{./figs/temporal_box_atemporal_kb.pdf} \label{fig:atemporal_kb} \end{subfigure} \unskip\ \vrule\ \begin{subfigure}[b]{0.32\textwidth} \centering \includegraphics[width=\textwidth]{./figs/temporal_box_instant_kb.pdf} \label{fig:instant_kb} \end{subfigure} \unskip\ \vrule\ \begin{subfigure}[b]{0.33\textwidth} \centering \includegraphics[width=\textwidth]{./figs/temporal_box_interval_kb.pdf} \label{fig:interval_kb} \end{subfigure} \begin{subfigure}[b]{0.32\textwidth} \centering \includegraphics[width=1.02\textwidth]{./figs/temporal_box_atemporal_vs.pdf} \caption{Modeling an atemporal statement} \label{fig:atemporal_vs} \end{subfigure} \unskip\ \vrule\ \begin{subfigure}[b]{0.32\textwidth} \centering \includegraphics[width=1.0\textwidth]{./figs/temporal_box_instant_vs.pdf} \caption{Modeling a timestamp-based statement} \label{fig:instant_vs} \end{subfigure} \unskip\ \vrule\ \begin{subfigure}[b]{0.32\textwidth} \centering \includegraphics[width=\textwidth]{./figs/temporal_box_interval_vs.pdf} \caption{Modeling an interval-based statement} \label{fig:interval_vs} \end{subfigure} \caption{\textbf{Illustration of TIME2BOX reasoning process.} In each figure, the upper part shows entities and relations in the KB space and the latter illustrates their correspondences in the embedding space. In all figures, the final boxes are shaded regions in orange and answer entities are in the boxes. Note that we omit the edges between \textit{Albert Einstein} and associated entities in Fig.\ref{fig:instant_vs} and \ref{fig:interval_vs} for simplicity. Fig.\ref{fig:atemporal_vs} shows that for an atemporal query, the reasoning process picks out all possible answers from the whole entity space and encloses them into a time-agnostic box. In Fig.\ref{fig:instant_vs} a time-aware box is added to enclose entities that are relevant to \textit{Albert Einstein} in 1933. Then the intersection between time-agnostic and time-aware boxes consists of a new box, which contains entities that satisfy both requirements. When more validity information is available, Fig.\ref{fig:interval_vs} shows that more time-aware boxes can be added and the intersection box that contains correct answers would shrink further. By doing so, TIME2BOX is flexible enough to handle different types of queries.} \label{fig:time_reasoning_process} \end{figure*} \textbf{Our main research contributions are listed as follows}: \begin{itemize} \item We propose a box-based knowledge graph embedding TKBC framework (TIME2BOX) that can represent and model statements with different types of validity information (i.e., unknown temporal scoping, left/right-open intervals and closed intervals). \item We introduce a new evaluation metrics $gaeIOU$ for interval evaluation by taking the gap between a gold interval and a predicted interval into consideration if no overlap between them exists. \item Extensive experiments on two datasets - WIKIDATA12k and WIKIDATA114k - show that TIME2BOX yields the state-of-the-art (SOTA) results in link prediction and outperforms SOTAs in time prediction by significant margins. \end{itemize} TIME2BOX code is available in Github\footnote{\url{https://github.com/ling-cai/Time2Box}}. \section{Method} \label{sec:method} The key insight of TIME2BOX lies in an intuition that the answer set of a temporal query ($s$, $r$, $?o$, $t^*$) is always a subset of answers of its time-agnostic counterpart ($s$, $r$, $?o$) and set size decreases by adding more temporal constraints. As illustrated in Fig.~\ref{fig:atemporal_vs}, four object entities satisfy the atemporal query (\textit{Albert Einstein, employer, ?o}) while three entities are the correct answers when the query is restricted to the year of 1993 (see Fig.~\ref{fig:instant_vs}) and only one entity is correct when another time information is further added in the statement, shown in Fig.\ref{fig:interval_vs}. Inspired by this observation, we propose to model a temporal statement ($s$, $r$, $o$, $t^*$) by imitating the process of answering its corresponding temporal query ($s$, $r$, $?o$, $t^*$), which can be achieved through two steps: 1) finding a set of answer entities that are true for the corresponding atemporal query by using any KBC model and 2) imposing a filtering operation enforced by time to restrict answers afterwards. In following sections, we take a time instant-based statement as example to formalize our idea in a KB space and a vector space, respectively. \subsection{Formalization in a KB Space} For a statement (\textit{s, r, o, t}) in a TKB, the first step of TIME2BOX, as shown in Fig.~\ref{fig:atemporal_vs}, is to project the subject $s$ to a set of object entities that are true to its corresponding atemporal query in the form of (\textit{s, r, ?o}) enforced by the relation $r$. This is a prerequisite for any statement and can theoretically be addressed by any KBC method. Formally, the relation projector is defined as: \textbf{Relation Projector -- $\mathbf{OP_r}$:} Given the subject entity $s$ and the relation $r$, this operator obtains: $S_r=\{o^{\prime} \mid (s, r, o^{\prime}) \in \mathcal{G}^N\}$. $\mathcal{G}^N$ is the time-agnostic counterpart of $\mathcal{G}$. Then time information is used to filter out entities that are incorrect during the time of interest from the answer set $S_r$. This can be achieved by first \textit{projecting} the subject $s$ to a set of object entities that co-occur with $s$ in statements at a given time point (as shown in blue edges in Fig.~\ref{fig:instant_vs}) and then finding the \textit{intersection} over them and $S_r$ (see the three entities in red in Fig.~\ref{fig:instant_vs}). Accordingly, the two involved steps are defined as: \textbf{Time Projector -- $\mathbf{OP_t}$:} Given the subject $s$ and the timestamp $t$, this operator obtains: $S_t = S_t = \{o^\prime \mid o^\prime \in E ~and ~(s, r^{\prime}, o^{\prime}, t) \in \mathcal{G} ~and~ r^{\prime} \in R \}$. \textbf{Intersection Operator -- $\mathbf{OI}$:} Given $S_r$ and $S_t$, this operator obtains the intersection $S_{inter}=\{o \mid o \in (S_r ~and~ S_t)\}$. In fact, such a modeling process also fits to left/right-open interval-based statements directly. For a left/right-open interval-based statement, we only consider the known endpoint time in such an interval as we follow the open-world assumption. However, for an atemporal statement, we only need one relation projector to obtain $S_{r}$, which is the final set consisting of correct answer entities to its query form. For a closed interval-based query, one commonly used approach is to randomly pick one timestamp within the interval and to associate it with (\textit{s, r, o}). Then it can be modeled the same way as an instant-based statement. At training, a timestamp is always randomly picked from the interval to ensure that all the timestamps in the interval are used. In addition to the common strategy, TIME2BOX allows sampling of a sub-time interval within the given interval so that two time constraints (i.e., start time and end time) can be imposed by using two time projectors, as shown in Fig.~\ref{fig:interval_vs}\footnote{Alternatively, one could also enumerate all the timestamps within the interval and use different $\mathbf{OP_t}$ to project the subject to multiple sets of entities, each of which is specific for one timestamp. Subsequently, an intersection operator again is performed over all the sets of entities obtained from $\mathbf{OP_r}$ and $\mathbf{OP_t}$ in the previous step. However, in spite of its efficiency, this practice is hard to implement in mini-batch training manner since time intervals in different statements usually have varying duration and thus contain different number of timestamps.}. \subsection{Implementation in a Vector Space} In order to implement this idea in a vector space, two key points are 1) how to model a set of answers returned by a KBC model and 2) how to instantiate two projectors and one intersection operator. Prior KBC models are incapable of directly representing a set of answer entities in a vector space. Instead, they usually represent entities and relations as single points in the vector space and model point-to-point projections, e.g., TransE. Inspired by QUERY2BOX~\cite{ren2019query2box}, which is used to deal with complex queries that involve conjunctions, existential quantifiers, and disjunctions, we introduce the idea of boxes in the vector space and thus name the proposed framework TIME2BOX. The reasons for adopting boxes are three-fold. First, boxes are containers that can naturally model a set of answer entities they enclose. Second, finding the intersection set among sets of entities amounts to finding the intersected area over boxes similar to the concept of Venn diagram. Third, the result of performing an intersection operation over boxes is still a box, which makes it possible to deal with statements of different types in a unified framework. In TIME2BOX, each entity $e \in E$, relation $r \in R$, and timestamp $t \in T$ ($T$ is the set of all discrete timestamps in a TKB) are initialized as vector embeddings $\mathbf{e} \in \mathbb{R}^d$, $\mathbf{r} \in \mathbb{R}^d$, and $\mathbf{t} \in \mathbb{R}^d$. $S_r$, $S_t$, and $S_{inter}$ refer to sets of entities and thus are modeled by boxes, represented as box embeddings in the vector space. In the following, we first introduce the definition of box embeddings and then introduce main components of modeling and reasoning. \subsubsection{Box Construction and Reasoning} \paragraph{Box embeddings:} Mathematically, they are axis-aligned hyper-rectangle in a vector space, which can be determined by the position of the box (i.e., a center point) and its length (i.e., offsets). Formally, in a vector space $\mathbb R^d$, a box can be represented by $\mathbf{b}$=(Cen($\mathbf{b}$), Off($\mathbf{b}$)), where $\text{Cen}(\mathbf{b}) \in \mathbb{R}^d$ is its center point and $\text{Off}(\mathbf{b}) \in \mathbb{R}_{\ge 0}^d$ specifies the length/2 of the box in each dimension. If an entity belongs to a set, its entity embedding is modeled as a point inside the box of the set. The interior of a box in the vector space can be specified by points inside it: \begin{equation} box_\mathbf{b} = \{\mathbf{e} \in \mathbb R^d: \text{Cen}(\mathbf{b})-\text{Off}(\mathbf{b}) \preceq \mathbf{e} \preceq \text{Cen}(\mathbf{b})+\text{Off}(\mathbf{b}))\} \end{equation} where $\preceq$ denotes element-wise inequality. \paragraph{Projection operators in a vector space} In previous work, relations are commonly assumed to be projectors that transform a subject embedding to an object embedding in terms of \textit{points} in a vector space, e.g., TransE~\citep{bordes2013translating} and RotatE~\citep{sun2019rotate}. Here we adopt a similar idea but take both relations and timestamps as projectors ($\mathbf{OP_r}$ and $\mathbf{OP_t}$) to project a subject to a set of entities in $S_r$ -- represented as a time-agnostic \textit{box} $\mathbf{b}_{S_r}$ and to a set of entities in $S_t$ -- represented as a time-aware \textit{box} $\mathbf{b}_{S_t}$, respectively, which are illustrated in Fig.~\ref{fig:time_reasoning_process}. The center of a box can be defined as the resulting embedding after applying a projection operator ($\mathbf{OP_r}$ or $\mathbf{OP_t}$) on the subject embedding. The centers of $\mathbf{b}_{S_r}$ and $\mathbf{b}_{S_t}$ can be formulated as below: \begin{align} Cen(\mathbf{b}_{S_r}) = \mathbf{e} \odot \mathbf{r}; ~~ Cen(\mathbf{b}_{S_t}) = \mathbf{e} \otimes \mathbf{t} \label{equ:center-proj} \end{align} where $\odot \mathbf{r}$ and $\otimes \mathbf{t}$ are projectors $\mathbf{OP_r}$ and $\mathbf{OP_t}$, respectively. Theoretically, projection operators could be instantiated by any projector in existing KBC models, such as element-wise addition in TransE~\citep{bordes2013translating} , element-wise product in DistMult~\citep{yang2014embedding} , and Hadamard product in RotatE~\citep{sun2019rotate}. Even though $\mathbf{OP_r}$ and $\mathbf{OP_t}$ can be different, we choose the same projector for both and implement two TIME2BOX models by taking element-wise addition and element-wise product as operators by following TransE and DistMult, respectively. Accordingly, these two models are named as TIME2BOX-TE and TIME2BOX-DM. Ideally, the size of the box $\mathbf{b}_{S_r}$ should be determined by both the subject entity and the relation, since the box contains all object entities that satisfy a query in the form of (\textit{s, r, ?o}). The same applies to $\mathbf{b}_{S_r}$. However, as the entity space is usually large in a KB, introducing entity-specific parameters would result in high computational cost. Therefore, in practice, $\text{Off}(\mathbf{b}_{S_r})$ and $\text{Off}(\mathbf{b}_{S_t})$ are only determined by the relation $r \in R$ and the timestamp $t \in T$, respectively. Put differently, the size of $\mathbf{b}_{S_r}$ and $\mathbf{b}_{S_t}$ are initialized based on $r$ and $t$, which are learned through training. \paragraph{Intersection Operators in a vector space} An intersection operator aims to find the intersection box $\mathbf{b}_{inter}=\text{(Cen}(\mathbf{b}_{inter}), \text{Off} (\mathbf{b}_{inter}))$ of a set of box embeddings $\mathbf{B} = \{\mathbf{b}_{S_r}, \mathbf{b}_{S_t1}, ..., \mathbf{b}_{S_tn}\}$ obtained from the previous step. The intersection operator should be able to deal with $\mathbf{B}$ of different sizes, as required in Fig.~\ref{fig:time_reasoning_process}. Thus, both $\text{Cen}(\mathbf{b}_{inter})$ and $\text{Off} (\mathbf{b}_{inter})$ are implemented by using attention mechanisms. \sloppy Following the idea in ~\citet{bahdanau2014neural}, the center point $\text{Cen(}\mathbf{b}_{inter}\text{)}$ is calculated by performing element-wise attention over the centers of boxes in $\mathbf{B}$. This can be formulated as follows: \begin{equation} \text{Cen}(\textbf{b}_{inter}) = \sum_i \text{softmax}(\text{NN}(\text{Cen}(\mathbf{b}_{i})) \odot \text{Cen}(\mathbf{b}_{i}) \label{eq:cent_inter} \end{equation} where NN is a one-layer neural network and $\mathbf{b}_i \in \mathbf{B}$. Since the intersection box $\mathbf{b}_{inter}$ must be smaller than any of the box in $\mathbf{B}$, we use element-wise min-pooling to make sure the new box must be shrunk and perform DeepSets~\citep{zaheer2017deep} over all the Off($\mathbf{b}_i$) ($\mathbf{b}_i \in \mathbf{B}$) to downscale $\mathbf{b}_{inter}$ \citep{ren2019query2box}. This can be written as below: \begin{equation} \text{Off(}\mathbf{b}_{inter}\text{)} = \text{Min}(\textbf{Off}) \odot \sigma(\text{DeepSets}(\textbf{Off})) \end{equation} where $\text{DeepSets(} \{\mathbf{x_1}, \mathbf{x_2}, ..., \mathbf{x_n}\}\text{)}=\text{MLP}(1/n)\cdot \sum_i^n\text{MLP(} \mathbf{x_i}\text{)}$, $\sigma$ denotes the sigmoid function, and $\textbf{Off}=\{\text{Off}(\mathbf{b}_i):\mathbf{b}_i \in \mathbf{B}\}$. \subsection{Optimization Objective} For a query, TIME2BOX aims to pull correct entity embedding into the final box $\mathbf{b}_{inter}$ while pushing incorrect entity embedding far away from it. The distance-based loss proposed by ~\citet{sun2019rotate} satisfies this need : \begin{equation} Loss = -log \; \sigma(\gamma - D(\mathbf{o}, \mathbf{b}_{inter})) - \frac{1}{k}\sum_{i=1}^{k}log \; \sigma(D(\mathbf{o}^{\prime}, \mathbf{b}_{inter})- \gamma) \label{eq:loss} \end{equation} where $\sigma$ is the sigmoid function, $\gamma$ is a fixed margin, $\mathbf{o}$ is the embedding of a positive entity to the given query, and $k$ is the number of negative samples $\mathbf{o^{\prime}}$. $ D(\mathbf{o}, \mathbf{b}_{inter})$ measures the distance between entity $\mathbf{o}$ and the final box $\mathbf{b_{inter}}$. With the size of a box being considered, the distance is divided into two parts: outside distance $D_{outside}(\mathbf{o}, \mathbf{b}_{inter})$ and inside distance $D_{inside}(\mathbf{o}, \mathbf{b}_{inter})$. For cases when $\mathbf{o}$ is outside of $\mathbf{b}_{inter}$, the former refers to the distance of an entity embedding $\mathbf{o}$ to the boundary of the box $\mathbf{b}_{inter}$, and the latter calculates the distance between the box's center $Cen(\mathbf{b}_{inter})$ and its boundary. This can be formalized as below: \begin{equation} D(\mathbf{o}, \mathbf{b}_{inter}) = \alpha \cdot D_{inside}(\mathbf{o}, \mathbf{b}_{inter}) + D_{outside}(\mathbf{o}, \mathbf{b}_{inter}) \end{equation} where $\alpha \in [0, 1]$. When $\alpha = 0$, it means that a positive entity is required to be in a $\mathbf{b}_{inter}$, but its distance to the center is not as important. $D_{inside}(\mathbf{o}, \mathbf{b}_{inter})$ and $D_{outside}(\mathbf{o}, \mathbf{b}_{inter})$ are written as: \begin{align*} D_{inside}(\mathbf{o}, \mathbf{b}_{inter}) &= \|\text{Cen}(\mathbf{b}_{inter })-\text{Min}(\mathbf{b}_{max}, \text{Max}(\mathbf{b}_{min}, \mathbf{o}))\|_1 \\ D_{outside}(\mathbf{o}, \mathbf{b}_{inter}) &= \| \text{Max}(\mathbf{o}-\mathbf{b}_{max}, \mathbf{0})+ \text{Max}(\mathbf{b}_{min}-\mathbf{o}, \mathbf{0})\|_1 \end{align*} where $\mathbf{b}_{min}=\text{Cen}(\mathbf{b}_{inter})-\text{Off}(\mathbf{b}_{inter})$ and $\mathbf{b}_{max}=\text{Cen}(\mathbf{b}_{inter})+\text{Off}(\mathbf{b}_{inter})$ are embeddings of the bottom left corner and the top right corner of $\mathbf{b}_{inter}$, respectively. Compared to answering atemporal queries, finding correct answers to temporal ones is more challenging. Therefore, the loss function should reward more in the optimization direction that is capable of correctly answering temporal queries. For a given query $q_i$, we use $\frac{1}{n_{q_i}}$, where $n_{q_i}$ is the number of correct answers to $q_i$ that appear in training as a weight to adjust the loss. The core idea here is that time-aware queries often are satisfied with fewer answers, and, thus, are harder to answer compared to atemporal queries. \subsection{Time Negative Sampling} \label{secsec:time_negative} Entity negative sampling is widely used in KBC. For a positive sample $(s, r, o)$, negative samples are constructed by replacing $o$ with other entities $o^{\prime}$, ensuring that $(s, r, o^{\prime})$ must not appear in training set. In this paper, we adopt this strategy so that the model is able to learn the association between entities, relations, and time occurring in a positive sample by distinguishing the correct answers from the negative samples. Moreover, for time-aware statements, we perform time negative sampling, which corrupts a statement $(s, r, o, t)$ by replacing $t$ with a number of timestamps~$t^{\prime}$. This is important for statements where only start time or end time is available. As shown in Fig.~\ref{fig:time_reasoning_process}, the proposed architecture cannot distinguish those statements from time instant-based statements. But time negative sampling can mitigate this issue to some degree. The following is used for time negative sampling concerning different types of statements ($st$ and $et$ are short for start and end time): \begin{equation} T^{\prime} = \begin{cases} \{t^{\prime}\in T: (s, r, o, t^{\prime}) \notin \mathcal{G}\} & (s, r, o, t) \\ \{t^{\prime}\in T: (s, r, o, t^{\prime}) \notin \mathcal{G}, t^{\prime} <st\} & (s, r, o, I_{st}^{-}) \\ \{t^{\prime}\in T: (s, r, o, t^{\prime}) \notin \mathcal{G}, t^{\prime} >et\} & (s, r, o,I_{-}^{et}) \\ \{t^{\prime}\in T: (s, r, o,t^{\prime}) \notin \mathcal{G}, t^{\prime} \notin T^{et}_{st}\} & (s, r, o, I_{st}^{et}) \\ \end{cases} \end{equation} where $T^{et}_{st}$ denotes a set of time points within the interval $I_{st}^{et}$. \subsection{Time Smoothness Regularizer} Time is continuous. We may expect that neighboring timestamps would have similar representations in the vector space. Following~\citet{lacroix2019tensor}, we penalize time difference between embeddings of two consecutive timestamps by using $L_2$: \begin{equation} \Lambda(T) = \frac{1}{|T|-1}\sum_{i=1}^{|T|-1}\|\mathbf{t}_{i+1}-\mathbf{t}_i\|_2^2 \end{equation} During the training step, for batches with temporal statements, we add this regularizer with a weight scalar $\beta$ to the loss function in Eq.~\ref{eq:loss}, where $\beta$ specifies the degree of penalization. \section{Preliminaries} \label{sec:preliminary} \subsection{Temporal Knowledge Bases} Prior TKBC methods typically work on TKBs in which each statement has to be associated with validity information. Thereby, for statements that do not have known temporal scopes, they either exclude them from a TKB in the beginning or assume that these statements hold all the time~\citep{lacroix2019tensor}. However, there are limitations in both ways. As discussed in Section~\ref{sec:intro}, excluding them from a TKB will significantly reduce the amount of information that could be beneficial in TKBC studies as the number of such statements is substantial. For the latter, their assumption would be problematic since a lot of them may only hold for a certain time period. For instance, the statement (\textit{Warsaw, country, Russian Empire}) holds during the time interval [1815-07-09, 1916-11-04]. Following the open-world assumption (OWA), we argue that TKBs are an extension to KBs insofar as the lack of temporal scoping for any given statement does not imply it holding indefinitely. In the following, we use $t$ and $I_{st}^{et}=[st, et]$ to denote a time point and a time interval, respectively. The symbol $-$ will stand for unknown temporal validity. There are five types of statements in such a TKB: (1) ({$s$, $r$, $o$}) for a statement without a known temporal scope; (2) ($s$, $r$, $o$, $t$) for a timestamped statement which holds at a point in time $t$; (3) ($s$, $r$, $o$, $I^-_{st}$) for a right-open interval-based statement, in which only the time when the statement starts to hold is known; (4) (\textit{s, r, o, $I^{et}_{-}$}) for a left-open interval-based statement, in which only the time when the statement ceases to hold is known; and (5) ({$s$, $r$, $o$, $I^{et}_{st}$}) for a statement which is temporally scoped by a closed interval $I_{st}^{et}$. Then a TKB is denoted as $\mathcal{G}=\bigcup_{(s,r,o,t^*)}$, namely the union of statements of the five types, where $s,o \in E$ represent entities, $r \in R$ denotes a relation and $t^* \in \{t, I_{st}^{-}, I_{-}^{et}, I_{st}^{et}, None\}$ denotes different types of valid time or no valid time available. \subsection{The TKBC Problem} Link prediction and time prediction are two main tasks used to evaluate a TKBC model. Statements in TKBs are split into training, validation, and test sets, used for model training, parameter tuning and model evaluation, respectively. \paragraph{Link prediction} Queries used in this task are of the form ({$s$, $r$, $?o$, $t^{*}$}). Performance is evaluated on the rank of a given golden, i.e., ground truth, answer in the list of all the entities sorted by scores in a descending order. Then MRR (mean reciprocal rank), MR (mean rank), HITS@1, HITS@3 and HITS@10 are computed from the ranks over all queries in the test set. However a query may be satisfied by multiple answer entities. Thus another correct answer may be ranked over the given golden answer. In such cases, a KBC/TKBC model should not be penalized. A traditional strategy used in KBC is to filter out those correct answers that are already in the training and validation sets before calculating metrics. This strategy can be directly applied to queries of the form ({$s$, $r$, $?o$}) or ({$s$, $r$, $?o$, $t$}). However, it may not be sufficient for queries of the form (\textit{s, r, ?o, I}), as there may exist other answers that are true during a time period within the interval~$I$. For example, suppose two statements -- \textit{(Albert Einstein, employer, Princeton University, [1933, 1955])} and \textit{(Albert Einstein, employer, Leiden University, [1920, 1946])}, both Princeton University and Leiden University are correct answers during the period [1933, 1946]. One naive way to solve this problem is to discretize the interval $I$ to a sequence of time points $t$s and then to convert (\textit{s, r, ?o, I}) into timestamped queries of the form (\textit{s, r, ?o, t}) so that the same filtering process can be performed on each timestamped query. Finally, the ranks over them are averaged to be the rank for a time interval-based query. This idea is well-aligned with the proposal by~\citep{jain-etal-2020-temporal}. \paragraph{Time prediction} Time prediction queries in TKBs are of the form (\textit{s, r, o, ?I}). Despite the fact that the validity information could be a point in time or a time interval, a point in time can be viewed as a special time interval, in which start time and end time coincide. Thus, time prediction boils down to time interval prediction. Its performance is evaluated by the overlap between a gold interval and a predicted interval or the closeness between those in case of no overlap. We describe the existing evaluation protocols and propose a generalized evaluation metric in Section~\ref{sec:eval_metrics}. \section{Related Work} \label{related_work} \paragraph{Knowledge Base Completion} KBC has been extensively studied in the past~\citep{bordes2013translating, lin2015modeling, yang2014embedding, trouillon2016complex, sun2019rotate}. The core insight of these methods is to embed entities and relations in a KB into low-dimensional vectors, which can be utilized in downstream tasks, such as link prediction. These methods can be roughly classified into two groups: transformation-based models and semantic matching energy based models. Transformation-based models treat a relation as a transformation operator. Two well-known assumptions are translation (e.g., TransE~\citep{bordes2013translating}) and rotation (e.g., RotatE~\citep{sun2019rotate}). For instance, TransE assumes that for a statement (\textit{s, r, o}), the object embedding can be derived by translating the subject embedding operated by the relation embedding in the embedding space. As such, the presence of a statement in a KG is measured by the distance between the object embedding and the subject embedding after transformation. Semantic matching energy based methods determine the existence of a statement by a score calculated from a function of learned entity and relation embeddings in the latent space~\citep{yang2014embedding, trouillon2016complex}. For instance, DistMult~\citep{yang2014embedding} uses a 3-way inner product as the scoring function. In addition to these basic triple-based methods, other studies have focused on exploiting higher-order structural information in a KG (e.g., paths, neighbours), such as PTransE~\citep{lin2015modeling}, R-GCN~\citep{schlichtkrull2018modeling}, and TransGCN~\citep{cai2019trans}. All KBC models ignore the temporal scoping of statements, and thus are unable to address temporal statements. However, these models are the foundations for TKBC. \paragraph{Temporal Knowledge Base Completion} Recently, there has been a surge of interest in taking validity information into consideration as KB statements are usually time-dependent. There are two lines of works on temporal link prediction. The first branch focuses on so-called dynamic knowledge bases (i.e., event KBs (ICEWS)~\cite{DVN/28075_2015}), where each statement is associated with a timestamp. The insight behind this branch is that knowledge in KBs evolves over time and historical statements/events drive the occurrence of new events. Therefore, their focus is more on extrapolation -- predicting unseen entity relationships over time by modeling temporal dependencies of statements/events in KBs ~\citep{trivedi2017know, xu2019temporal, jin2020recurrent, deng2020dynamic}. The most well-known model is Know-Evolve~\citep{trivedi2017know}, which assumes that the occurrence of facts/events can be modeled as a multivariate temporal point process. Unlike the first branch that assumes timestamp-based statements/facts, a TKB in the second branch can associate a statement with time instants or time intervals as its validity information. Moreover, the goal of this line of work is more about interpolation -- filling in missing components in TKGs with/without explicitly modeling the temporal dependencies between statements. Recent works follow a common paradigm, that is, to encode time as embeddings and then incorporate them into time-agnostic KBC models~\citep{Leblay2018deriving, garcia-duran-etal-2018-learning, goel2020diachronic, ma2019embedding, lacroix2019tensor, jain-etal-2020-temporal}. \citet{Leblay2018deriving} investigated several extensions of existing KBC models by directly fusing time embeddings with relation embeddings, including TTransE, TRESCAL, etc. \citet{goel2020diachronic} proposed to learn time-varying entity embeddings by replacing a fraction of embedding weights with an activation function of learned frequencies. Unlike previous work, which view time as numeric values, \citet{garcia-duran-etal-2018-learning} concatenated the string representation of the relation and the time, and fed them into an LSTM to obtain time-aware relation representations, which were used in TransE (TA-TransE) and DistMult(TA-DM) afterwards. More recently, \citet{lacroix2019tensor} presented the TKBC problem as a four-way tensor completion problem, and proposed TNTComplEx, which was extended from the time-agnostic ComplEx~\cite{trouillon2016complex}. \citet{jain-etal-2020-temporal} augmented TNTComplex with three more time-dependent terms as analogy to the idea of approximating joint distributions by low-order marginals in graph models and incorporated soft ordering and span constraints as temporal inductive biases. Our work belongs to the latter group. However, our proposal is more flexible as we can deal with cases when $t^*$ is a time instant, a (left/right-open) interval, closed interval or even missing, while prior works can only handle one timestamped representation of the form ($s$, $r$, $o$, $t$).
{ "attr-fineweb-edu": 1.954102, "attr-cc_en_topic": 0, "domain": "arxiv" }
BkiUfT7xK6Ot9WA5jh--
\section{Concluding Remarks} In this paper, we have examined the problem of making matches that exhibit competitive balance. Through simulations on an online team sports game published by Electronic Arts (EA), we demonstrated that regressing the final score difference on carefully designed player, team and match features followed by a binary threshold can significantly outperform aggregate skill-based models in predicting match balance. Our approach provides insight into how simple, generalizable attributes of players and teams can be used to better capture in-game dynamics that impact match outcomes. We also show that using a linear model with the specific features can lead to computational savings ranging by orders of magnitude with a small sacrifice in predicting performance. A main focus of this work is to provide insight to game designers on how to improve the quality of online team games through better matchmaking. We believe that the presented definitions, features, prediction models and experiments can be utilized by game designers for predicting competitive balance in other types of online games. We have seen how the proposed models generalize from 3v3 to 6v6 games, but it still remains to see how it can be evaluated in different sports games and larger teams. Finally, note that the models and experiments illustrated in this paper are based on existing player data. The approaches described here have not yet been deployed in a live matchmaking service. However, they provide the hypotheses for A/B testing once they are deployed in the game. \section{Experiments} \label{sec:exp} The purpose of this section is to explore the efficiency of our model on real datasets. Specifically, i) we evaluate and compare the performance of our prediction models to a variety of baseline models, ii) we demonstrate that the definition of competitive balance as a regression problem leads to significant prediction performance improvements, iii) we showcase that using the proposed definition of balance in combination with the proposed features can lead to substantial computational savings, iv) we discuss which features have the most influence on competitive balance in team sports games. For context on execution times, our experiments were conducted using single process implementations on a 64-bit MacBook Pro with an Intel Core i7 CPU at 2.6GHz and 16 GB RAM. All presented models are implemented in Python, using the the scikit-learn \cite{pedregosa2011scikit} and Keras \cite{chollet2015keras} libraries. \subsection{Baseline methods} \label{sec:baseline} We compare the performance of the model {{\texttt{NN}}} presented in Section \ref{sec:method} to a variety of baseline methods. \spara{{{\texttt{Dummy}}}:} {{\texttt{Dummy}}} is the most naive approach that we consider. It always predicts the mean of the training set. {{\texttt{Dummy}}} corresponds to a competitive balance prediction model. \spara{{{\texttt{AvgSkill}}}:} In the {{\texttt{AvgSkill}}} approach the match feature set includes only two features , i.e., the two averages of the skill ratings of the players in each team. These features are used as the input to a linear regression model that predicts the final score difference. Note that this baseline corresponds to the currently used single-valued skill aggregation model and corresponds to a competitive balance prediction model. \spara{{{\texttt{Linear}}}:} {{\texttt{Linear}}} is a linear regression model, where the input match instances comprise all the features presented in Table \ref{tbl:playerprofile}. In addition to being a fundamental regression model that is known for its simplicity, linear regression provides insight into the model covariates that explain the variance in the response variable (final score difference). It provides us insights into which explanatory variables are significant in the match balance prediction task. {{\texttt{Linear}}} corresponds to a competitive balance prediction model. \spara{{{\texttt{RndFrst}}}:} Random Forests construct a multitude of decision trees at training time and output the mean prediction of all trees. {{\texttt{RndFrst}}} corresponds to a competitive balance prediction model. \spara{{{\texttt{Logistic}}}:} This is a logistic regression model that uses a logistic function to model a binary dependent variable. In particular, it models the probability of a certain class. {{\texttt{Logistic}}} corresponds to a probability of winning prediction model. \spara{{{\texttt{NNSoftmax}}}:} {{\texttt{NNSoftmax}}} is a model with the same neural network architecture as {{\texttt{NN}}} with a single difference. We replace the final layer with a softmax layer to assign a probability to whether a match is balanced or not. {{\texttt{NNSoftmax}}} corresponds to a probability of winning prediction model. For each of the baseline methods we select the best feature subset using the \emph{recursive feature elimination} method and a statistical feature analysis. Further details on significant features are provided in Section \ref{sec:sigfeatures}. All models with the sign $^+$ in their name use their corresponding best subset of features. Note that we did not perform best subset feature selection for {{\texttt{Dummy}}} and {{\texttt{AvgSkill}}} since the features of these models are determined by their definitions. \subsection{Model Characteristics} \begin{table} \centering \footnotesize \begin{tabular}{|l|l|l|l|l|l|l|l|l|l|l|} \hline \multicolumn{1}{|c|}{\textbf{Model}} & \multicolumn{2}{c|}{\textbf{F1}}\\ & 3v3 & 6v6\\ \hline {{\texttt{Dummy}}} & $0.00\;(\pm 0.00)$ & $0.61\;(\pm 0.01)$\\ {{\texttt{AvgSkill}}} & $0.00\;(\pm 0.00)$ & $0.61\;(\pm 0.01)$\\ {{\texttt{Linear}}} & $0.59\;(\pm 0.26)$ & $0.70\;(\pm 0.08)$\\ {{\texttt{RndFrst}}} & $0.57\;(\pm 0.17)$ & $0.64\;(\pm 0.07)$ \\ {{\texttt{NN}}} & $0.62\;(\pm 0.13)$ & $0.73\;(\pm 0.08)$\\ \hline {{\texttt{Linear$^{+}$}}} & $0.60\;(\pm 0.27)$ & $0.73\;(\pm 0.12)$\\ {{\texttt{RndFrst$^{+}$}}} & $0.58\;(\pm 0.19)$ & $0.65\;(\pm 0.12)$ \\ {{\texttt{NN$^{+}$}}} & $\mathbf{0.64\;(\pm 0.10)}$ & $\mathbf{0.74\;(\pm 0.09)}$\\ \hline \end{tabular} \caption{Training-set performance of models when predicting competitive balance. \label{tbl:trainingresults}} \end{table} \label{sec:modchar} To demonstrate the performances of our prediction models, we use the \emph{F1} metric \cite{powers2011evaluation}. To support our claims we showcase the results of the training model evaluation along with the corresponding standard deviation in Table \ref{tbl:trainingresults}. However, our main focus is on the test-set performances of the models that are presented in Table \ref{tbl:testresults}. \spara{F1 Score:} This metric is the harmonic mean of the precision and recall. The results of {\textit{F1}} are presented in Table \ref{tbl:testresults}. We present the mean and standard deviation of the models' performances over 20 consecutive matches. First, we compare the performances of the models when all features of Section \ref{sec:features} are used (rows 1-5) and when the best subset of features is used (rows 6-8). We notice that there is improvement in the models' performances when the best features are used. For instance in the case of the {{\texttt{Linear}}} and {{\texttt{Linear$^{+}$}}} the performance increases up to ${\scriptsize \sim}4\%$. The conclusion of this observation is two-fold; (i) selecting the best subset of features boosts the models' performances, (ii) the feature engineering described in Section \ref{sec:features} and the features that we propose are overall very effective for the prediction of balanced matches. Now, we focus on the individual comparisons between the different models. Note that {{\texttt{NN$^{+}$}}} achieves the best {\textit{F1}} performance (row 8) in both the 3v3 and 6v6 datasets. An interesting observation is that even though {{\texttt{NN$^{+}$}}} demonstrates the best performance during testing, its performance is not significantly higher than the performance presented by the much simpler {{\texttt{Linear$^{+}$}}} model (at most $4\%$ more). Overall, we see that the performances of {{\texttt{NN$^{+}$}}}, {{\texttt{Linear$^{+}$}}} and {{\texttt{RndFrst$^{+}$}}} are close. Finally, for the 3v3 and 6v6 datasets we see that the {\textit{F1}} scores of {{\texttt{Dummy}}} and {{\texttt{AvgSkill}}} are $0.00$ and $0.61$, respectively. The {{\texttt{Dummy}}} model classifies all the matches as unbalanced, hence the zero {\textit{F1}} score. In both cases however, the conclusion is that simply using the average skill as a feature is not a good predictor of match balance. \begin{table} \centering \footnotesize \begin{tabular}{|l|l|l|l|l|l|l||l|l|l|l|} \hline \multicolumn{1}{|c|}{\textbf{Model}} & \multicolumn{2}{c|}{\textbf{F1}}\\ & 3v3 & 6v6\\ \hline {{\texttt{Dummy}}} & $0.00\;(\pm 0.00)$ & $0.60\;(\pm 0.01)$ \\ {{\texttt{AvgSkill}}} & $0.00\;(\pm 0.00)$ & $0.60\;(\pm 0.01)$\\ {{\texttt{Linear}}} & $0.53\;(\pm 0.02)$ & $0.68\;(\pm 0.03)$ \\ {{\texttt{RndFrst}}} & $0.56\;(\pm 0.02)$ & $0.61\;(\pm 0.03)$\\ {{\texttt{NN}}} & $0.59\;(\pm 0.02)$ & $0.68\;(\pm 0.02)$\\ \hline {{\texttt{Linear$^{+}$}}} & $0.60\;(\pm 0.02)$ & $0.68\;(\pm 0.02)$\\ {{\texttt{RndFrst$^{+}$}}} & $0.58\;(\pm 0.01)$ & $0.64\;(\pm 0.03)$\\ {{\texttt{NN$^{+}$}}} & $\bf 0.62\;(\pm 0.02)$ & $\bf 0.71\;(\pm 0.02)$ \\ \hline \end{tabular} \caption{Test-set performance of models when predicting competitive balance. The results are averaged over 20 matches. \label{tbl:testresults}} \end{table} \begin{table} \centering \footnotesize \begin{tabular}{|l|l|l|l|l|l|l|l|l||l|l|l|l|} \hline \multicolumn{1}{|c|}{\textbf{Model}} & \multicolumn{2}{c|}{\textbf{F1}}\\ & 3v3 & 6v6\\ \hline {{\texttt{Logistic}}} & $0.54\;(\pm 0.02)$ & $0.56\;(\pm 0.03)$\\ {{\texttt{NNSoftmax}}} & $0.57\;(\pm 0.01)$ & $0.59\;(\pm 0.02)$\\ \hline {{\texttt{Logistic$^{+}$}}} & $0.55\;(\pm 0.01)$ & $0.56\;(\pm 0.02)$\\ {{\texttt{NNSoftmax$^{+}$}}} & $0.55\;(\pm 0.01)$ & $0.62\;(\pm 0.02)$\\ \hline {{\texttt{Delalleau$^{+}$}}} & $\bf 0.59\;(\pm 0.01)$ & $\bf 0.70\;(\pm 0.01)$\\ \hline \end{tabular} \caption{Test-set performance of models when predicting probability of winning. The results are averaged over 20 matches. \label{tbl:classresults}} \end{table} \subsection{Why predict the score difference?} \label{sec:esd} The purpose of this section is first to demonstrate the effectiveness of using a competitive balance prediction model as opposed to a probability of winning prediction model. The differences of the aforementioned models are presented in Section \ref{sec:predmodel}. For this purpose, we define {{\texttt{Logistic}}}, {{\texttt{Logistic$^{+}$}}}, {{\texttt{NNSoftmax}}} and {{\texttt{NNSoftmax$^{+}$}}} all of which are probability of winning prediction models and are trained to predict the probability that a team will win. {{\texttt{Logistic}}} and {{\texttt{Logistic$^{+}$}}} use the same features as {{\texttt{Linear}}} and {{\texttt{Linear$^{+}$}}}, respectively, but perform logistic regression, while {{\texttt{NNSoftmax}}} and {{\texttt{NNSoftmax$^{+}$}}} use the same features and neural network architectures as {{\texttt{NN}}} and {{\texttt{NN$^{+}$}}}, respectively, but with an additional softmax layer that predicts the probability of winning. Table \ref{tbl:classresults} demonstrates the performances of the probability of winning models on the test set. Due to space limitations and given that the performance differences are pronounced we omit the corresponding results of the training set. We compare the results of Table \ref{tbl:classresults} to the corresponding scores of Table \ref{tbl:testresults} where we consider the competitive balance prediction models. Specifically, we focus on the comparison of the following models; i) {{\texttt{Linear}}} with {{\texttt{Logistic}}}, ii) {{\texttt{Linear$^{+}$}}} with {{\texttt{Logistic$^{+}$}}}, iii) {{\texttt{NN}}} with {{\texttt{NNSoftmax}}}, iv) {{\texttt{NN$^{+}$}}} with {{\texttt{NNSoftmax$^{+}$}}}. We see that using competitive balance models leads to higher {\textit{F1}} scores compared to predicting the probability of winning. This is pronounced by the models' corresponding {\textit{F1}} scores which are overall much lower compared to Table \ref{tbl:classresults}. The only exception is when comparing {{\texttt{Linear}}} with {{\texttt{Logistic}}} where the latter performs slightly better. Furthermore, we perform a comparison of our proposed models with the probability of winning model proposed in \cite{delalleau2012beyond} denoted as {{\texttt{Delalleau$^{+}$}}} in row 5 of Table \ref{tbl:classresults}. In that paper the authors present a neural network architecture with the following task; given two teams $A$ and $B$ predict the probability of team $A$ to win over team $B$. Similar to the final score difference, we define a threshold $\omega$ to compute balanced and non-balanced matches from the probability of winning. In particular, we consider the match to be balanced if $ | \textrm{Pr(team $A$ wins over team $B$)} - \frac{1}{2} |\leq \omega$, otherwise the match is not considered balanced. Furthermore, since \cite{delalleau2012beyond} addresses a different game and the authors do not provide the exact feature and embedding descriptions, we use as input features to corresponding best subset of features as presented in Section \ref{sec:features}. Overall when focusing on the {\textit{F1}} score of {{\texttt{Delalleau$^{+}$}}} in Table \ref{tbl:classresults} and comparing it to the corresponding score of {{\texttt{NN$^{+}$}}} in Table \ref{tbl:testresults} we see that using the competitive balance models is more effective for the determination of balanced matches than using the probability of winning. Another takeaway is that while {{\texttt{NN$^{+}$}}} performs similar to {{\texttt{Delalleau$^{+}$}}}, we observe that the same applies to {{\texttt{Linear$^{+}$}}}, whereas this is not the case for {{\texttt{Logistic$^{+}$}}}. We optimized this hyperparameter for best performance of {{\texttt{Delalleau$^{+}$}}} model (obtained by $\omega = 0.3$). \subsection{Training and inference times} \begin{table*} \centering \footnotesize \begin{tabular}{|c|c|c|c|c|c|c|c|c|c|c|} \hline \multicolumn{1}{|c|}{\textbf{Prediction Model}} & \multicolumn{1}{|c|}{\textbf{Model}} & \multicolumn{2}{c|}{\textbf{Time 3v3}}\\ & & Training & Inference\\ \hline Competitive balance & {{\texttt{Dummy}}} & 1.0e-04 & 1.0e-05\;($\pm$ 0.0e-00)\\ Competitive balance & {{\texttt{AvgSkill}}} & 5.0e-02 & 5.0e-05\;($\pm$ 0.0e-00)\\ Competitive balance & {{\texttt{Linear}}} & 8.2e+00 & 7.0e-05\;($\pm$ 0.0e-00)\\ Competitive balance & {{\texttt{RndFrst}}} & 9.9e+03 & 6.0e-03\;($\pm$ 7.0e-04) \\ Competitive balance & {{\texttt{NN}}} & 2.3e+02 & 2.1e-02\;($\pm$ 1.0e-02)\\ Probability of winning & {{\texttt{Logistic}}} & 1.2e+02 & 6.0e-05\;($\pm$ 0.0e-00)\\ Probability of winning & {{\texttt{NNSoftmax}}} & 3.7e+02 & 2.1e-02\;($\pm$ 5.0e-03)\\ \hline Competitive balance & {{\texttt{Linear$^{+}$}}} & 5.6e+00 & 5.0e-05\;($\pm$ 0.0e-00)\\ Competitive balance & {{\texttt{RndFrst$^{+}$}}} & 8.3e+03 & 7.0e-03\;($\pm$ 1.0e-02)\\ Competitive balance & {{\texttt{NN$^{+}$}}} & 4.7e+02 & 2.4e-02\;($\pm$ 1.4e-02)\\ Probability of winning & {{\texttt{Logistic$^{+}$}}} & 6.2e+01 & 6.0e-05\;($\pm$ 0.0e-00)\\ Probability of winning & {{\texttt{NNSoftmax$^{+}$}}} & 3.6e+02 & 2.3e-02\;($\pm$ 6.0e-03)\\ \hline Probability of winning & {{\texttt{Delalleau$^{+}$}}} & 2.8e+01 & 2.3e-02\;($\pm$ 5.0e-03)\\ \hline \end{tabular} \begin{tabular}{|c|c|c|c|c|c|c|c|c|c|c|} \hline \multicolumn{1}{|c|}{\textbf{Model}} & \multicolumn{2}{c|}{\textbf{Time 6v6}}\\ & Training & Inference\\ \hline {{\texttt{Dummy}}} & 1.0e-04 & 2.0e-05\;($\pm$ 0.0e-00)\\ {{\texttt{AvgSkill}}} & 3.5e-02 & 6.0e-05\;($\pm$ 0.0e-00)\\ {{\texttt{Linear}}} & 5.2e-00 & 6.0e-05\;($\pm$ 0.0e-00)\\ {{\texttt{RndFrst}}} & 7.7e+03 & 8.0e-03\;($\pm$ 1.0e-03)\\ {{\texttt{NN}}} & 1.3e+02 & 2.4e-02\;($\pm$ 1.2e-02)\\ {{\texttt{Logistic}}} & 8.7e+01 & 7.0e-05\;($\pm$ 0.0e-00)\\ {{\texttt{NNSoftmax}}} & 1.2e+02 & 2.4e-02\;($\pm$ 7.0e-03)\\ \hline {{\texttt{Linear$^{+}$}}} & 3.7e+00 & 8.0e-05\;($\pm$ 0.0e-00)\\ {{\texttt{RndFrst$^{+}$}}} & 6.2e+03 & 8.0e-03\;($\pm$ 1.0e-03)\\ {{\texttt{NN$^{+}$}}} & 1.6e+02 & 2.8e-02\;($\pm$ 8.0e-03)\\ {{\texttt{Logistic$^{+}$}}} & 5.3e+01 & 7.0e-05\;($\pm$ 0.0e-00)\\ {{\texttt{NNSoftmax$^{+}$}}} & 2.4e+02 & 2.7e-02\;($\pm$ 1.0e-02)\\ \hline {{\texttt{Delalleau$^{+}$}}} & 1.6e+02 & 2.7e-02\;($\pm$ 8.0e-03)\\ \hline \end{tabular} \caption{Training time required for matches that occurred within a 3-month data period. Inference time averaged over 20 matches. The left table is for 3v3 matches and the right table is for 6v6 matches. \label{tbl:traininginfertimes}} \vspace{-.05in} \end{table*} Table~\ref{tbl:traininginfertimes} compares the training and inference times required by each of the prediction models. Column 1 denotes the balance definition the corresponding model uses as described in Section \ref{sec:predmodel}. The training time represents the time required to train each model over a series of matches that occurred within a 3-month period. The inference time of each model is the time required to make a prediction, averaged over 20 matches. We report both the mean and the standard deviation of the models' running times in seconds. Observe that the training and the inference times of the linear models are approximately $10$x and $100$x faster, respectively, compared to the corresponding times of the neural network-based models. In online gaming taking the training and inference times into account is essential to provide high-quality service to the user without latency. Therefore, even though the {{\texttt{Linear$^{+}$}}} model's {\textit{F1}} score is slightly lower than the one of the best-performance {{\texttt{NN$^{+}$}}} model, in practice trading-off performance for speed can be essential for online gaming. Note that the training and inference times of the 3v3 dataset are higher than the corresponding ones of the 6v6 dataset. This is because the total number of matches in the 6v6 dataset are fewer compared to the 3v3 dataset. \subsection{Significant features} \begin{table*} \centering \footnotesize \begin{tabular}{lcclll} \hline \textbf{$M$ Features} & \textbf{Coeff.} & \textbf{Coeff.} & \textbf{Description in Team Sports games} \\& \textbf{3v3} & \textbf{6v6} & & & \\ \hline avg\_freq\_dropout & $+1.128$ & $+1.164$ & Average dropout rate of the players in Teams 1 \& 2\\% & Match dropouts & Match dropouts \\ \hline avg\_assists\_abs\_diff & $+0.889$ & $+0.741$ & Absolute difference of the average number of assists between Teams 1 \& 2 \\ \hline avg\_freq\_defense & $+0.850$ & $+0.918$ & Average rate of playing defense among players of Teams 1 \& 2\\ \hline avg\_freq\_left & $+0.856$ & $+0.504$ & Average rate of playing left wing among players of Teams 1 \& 2\\ \hline avg\_freq\_right & $+0.843$ & $+0.494$ & Average rate of playing right wing among players of Teams 1 \& 2 \\ \hline avg\_freq\_wins & $-0.334$ & $-0.178$ & Average win rate among players of Teams 1 \& 2 \\ \hline cnt\_players& $+0.117$ & $+0.142$ & Number of human players in Teams 1 \& 2 in the beginning of the match\\ \hline \end{tabular} \caption{Statistical analysis of indicative most significant features. The fourth column describes these features for the online team sports online games used in our experiments. \label{tbl:significantfeatures}} \vspace{-.15in} \end{table*} \label{sec:sigfeatures} This section provides a discussion on the important features of a match for predicting competitive balance. Table \ref{tbl:significantfeatures} presents the most statistically significant features with their coefficient residuals and a corresponding brief description of their meaning in the team sports game. For all features $p < 0.001$. An interesting observation is that the frequency of dropouts (row 1), a common phenomenon in team online games, is a strong indicator of competitive balance. A player dropping a game before it finishes results in having a team with less human players and therefore gives the lead to the opposing team. As expected, statistics on the past actions of the players (row 2) are also significant for competitive balance, and in this case this action was particularly the assists that occurred in the game. Rows 3-5 correspond to the role experience players have. In the team sports game we are considering, the most important roles are defense, and right and left offense. However, there are other roles in the game that appear to not be critical to the final outcome. In row 6 we observe a negative coefficient for the frequency of winning feature. This implies that the largest the frequency of previous wins for a team the more unlikely it is that the match will end with a balanced score. Finally, as expected, the number of players in each team at the beginning of the match (row 7) also seems to impact balance. Potentially, this is because teams with less players are assigned with bots whose playing behavior significantly deviates from a human's, and thus can be more unexpected. Note that while Table \ref{tbl:significantfeatures} presents the features with the largest coefficients in magnitude, we considered other features as well that had much smaller impact. For instance, in addition to using the average of the features we also considered their corresponding standard deviation. Furthermore, we also evaluated the skill ratings of the opposing teams. The results showed that even though skill rating was not among the most significant features, we cannot draw conclusive results about its importance because the datasets we used comprise real matches between teams of close team skill ratings. That said, we remark that performing a statistical significance test after the proposed model has been deployed in the matchmaking system, could provide us with potentially deeper insights, even though we expect the presented results to mostly hold. In Table \ref{tbl:significantfeatures}, column 4 provides the descriptions of some of the most significant features of the team sports online game that we are investigating. In addition to the statistical analysis of Table \ref{tbl:significantfeatures} we created correlation matrices to identify high amount of correlations that would suggest unreliable prediction estimates and removed these features from the dataset, which are ommitted however due to lack of space. Finally, we used the \emph{recursive feature elimination} feature selection method and the results of the statistical analysis to decide the best subset of features for each of the baseline methods (when applicable) and for our proposed model presented in Section \ref{sec:baseline}. An interesting observation was the common consensus between the feature selection methods and the different models about the significant features for competitive balance. \section{Introduction} \label{sec:intro} Video games are now a ubiquitous part of life; it is estimated that there will be over 2.47 billion video gamers worldwide by the end of 2019, according to Statista~\cite{statistica}. Furthermore, there has been a big shift toward online gameplay, as the majority of games offer online match capabilities~\cite{online_gameplay}. At the forefront of the online experience of video gamers is matchmaking, the process of grouping players or teams into matches. It follows a ``goldilocks'' principle, as research indicates that matches that are neither too difficult nor too easy are key to player stimulation and engagement~\cite{butcher2008pluribus,delalleau2012beyond}. Competitive balance is particularly important in team games as it is influenced by both intra- and inter-team dynamics. A \textit{competitively balanced match}, formally defined in Section \ref{problem_definition}, is one where the distribution of successful scoring events among teams is close \cite{merritt2014scoring}. For example, a soccer match that ends 2-1 is more balanced compared to one that ends 5-0. The traditional approach for creating competitively balanced matches is to create teams of players with similar aggregate skill ratings. A skill rating is usually a single numeric value derived from a player's prior match history. In team games, a team's skill is an aggregate of the skill ratings of all its players such as the mean/median. Popular skill rating systems today include Elo \cite{elo1978rating}, Glicko \cite{glickman1999parameter}, and Trueskill \cite{herbrich2007trueskill}. Although the simplicity of skill-based matchmaking makes it a very attractive choice, multiple studies \cite{delalleau2012beyond,claypool2015surrender} including the research presented in this paper show that this simplicity fails to capture important in-game dynamics that impact match balance. In team sports games, teams are comprised of players who play in different roles such as offense, defense, goalkeeper, etc. A single skill rating value indicates a player's overall proficiency and gives no information about expertise in specific roles required for a team. Thus, a team whose players' expertise match the roles they play in will likely dominate an opposing team with little match between player expertise and roles even if both teams have similar skill ratings \cite{wang2015thinking,delalleau2012beyond}. Considering another example, imagine a team game where each team has a player in the forward, midfield and defense role. A team with players whose expertise match the specific role they play in i.e. forward, midfield or defense will likely outperform an opposing team where all players have expertise only in the forward role despite both sets of players having equivalent skill ratings. Therefore, it is not sufficient to use a single skill value to capture the team dynamics in team games. In this work our contributions are summarized as follows: \begin{itemize \item We provide a new definition of \textit{competitive balance} for team games, where a match is competitively balanced if the final score difference is concentrated close to zero. Our experiments show that using the proposed definition can lead to ${\scriptsize \sim}15\%$ and ${\scriptsize \sim}2\%$ improved performance over previous definitions in linear models and non-linear models, respectively. \item We explain and provide insight to a variety of player, team and match features in team sports games. We design several models to predict competitive balance and demonstrate the definition's utility in a team sports game published by Electronic Arts (EA). Our experiments show that using the proposed features can lead up to ${\scriptsize \sim}16\%$ and up to ${\scriptsize \sim}5\%$ prediction performance improvement in linear models and non-linear models, respectively. \item We demonstrate that using our definition of game balance with the proposed set of features can lead to great computational savings with small predictive performance loss. In particular, the proposed linear model achieves up to ${\scriptsize \sim}100$x computational advantages, particularly at inference times, with less than ${\scriptsize \sim}2\%$ sacrifice in prediction performance compared to non-linear models. \end{itemize} \section{Method} \label{sec:method} Our model architecture used for predicting match competitive balance comprises three main phases; (i) the extraction of player features, (ii) the aggregation of these features to form team and match level features, (iii) and the predictor for a balance match. Note that the presented architecture is a sophisticated extension of the one presented in \cite{delalleau2012beyond}. However, our main contribution is in motivating and optimizing the correct metric for competitive balance, and in designing a model with high running time performance using an appropriate set of features, rather than designing a powerful model architecture. \subsection{Data} The analysis presented in this paper is based on data from two team game modes, namely the 3 players versus 3 players (3v3) and the 6 players versus 6 players (6v6) game modes, of an online team sports game published by Electronic Arts, Inc. (EA). Both datasets comprise more than 100,000 players and more than 500,000 games, and the balanced samples are generally in similar order to unbalanced samples. All models are trained and evaluated on subsets of the same 3-month data period. \subsection{Feature construction} \label{sec:features} The conversion of raw attributes to meaningful features used for the prediction of competitive balanced matches occurs during the feature construction phase. \spara{Player features:} The set of features that corresponds to player $j$, i.e., $P_j$ is constructed from a player's in-game attributes as described in the game logs. These features are classified into four broad categories as shown in Table \ref{tbl:playerprofile}; i) match experience, ii) role experience, iii) play style, iv) dropout history. The match experience category captures general player participation and influence in matches, such as the number of matches a player has played, and the fraction of matches the player has won. The role experience category captures player experience in specific roles. Roles can be either explicitly defined by the game, e.g., forward, defense, etc. or they can be inferred from a player's play style, e.g. a player who saves many scoring attempts may be categorized as an ``defender''. Regardless of the role type, maintaining the frequency of a player's involvement with a specific role is a strong indicator of the player's play style. As discussed in Section \ref{related_work}, this insight can be helpful in the creation of balanced teams/matches. The play style category covers all actions performed by a player. It differs from the role experience class in that we record statistics about different actions instead of roles. Actions generally reflect all micro-level in-game events during a match. These include scoring attempts, giveaways, hits, takeaways, etc. Finally, we consider the dropout history category. An essential aspect that leads to competitive balanced matches is ensuring \textit{a priori} that the number of players in each team will remain close throughout the whole match. Matches invariably end up unbalanced when players from a particular team quit early. \begin{table*}[tbp!] \centering \footnotesize \begin{tabular}{lll} \hline \textbf{$P_{j}$ Feature} & \textbf{Category} & \textbf{Description} \\ \hline num\_matches & Match Experience & Number of matches a player has participated in.\\ num\_wins & Match Experience & Number of wins a player has had.\\ freq\_wins & Match Experience & Ratio of wins to the number of total matches \\ & & of a player has participated in.\\ \hline num\_role\_i & Role Experience & Number of times a player has played a specific role $i$. \\ freq\_role\_i & Role Experience & Ratio of times that a player played a specific role $i$ \\ \hline num\_action\_$i$ & Play style & Number of times a player has performed a specific action $i$.\\ avg\_num\_action\_$i$ & Play style & On average, how many times a player performs \\ & & a specific action $i$ in a match. \\ \hline num\_dropout & Dropout History & Number of times a player has dropped out from a game.\\ freq\_dropout & Dropout History & Ratio of times a player dropped out from a game.\\ \bottomrule \end{tabular} \caption{A summary of a player feature set $P_{j}$. \label{tbl:playerprofile}} \end{table*} The player features for all categories described are cumulative and updated in an online fashion as matches are completed. This renders player feature sets time-dependent. It also enables our model to account for recent player activity in making predictions on match competitive balance \spara{Team features:} Team features are based on aggregating individual player statistics. Given a team, we compute the average value and the standard deviation of its players for each of the player features described in Table \ref{tbl:playerprofile} to create the corresponding team features. \spara{Match features:} A match is a complex entity whose performance is determined by both inter- and intra-teams dynamics. A match feature set that is only based on aggregating individual player statistics lacks insight on significant aspects of a match that impact competitive balance such as team properties and duels between members of opposing teams. Here, the match feature set contains the team features of the opposing teams with additional match-specific features. First, we use the team features of the opposing teams to create the following feature categories: i) the absolute difference of each of the average player feature values of the two teams (non-negative value), ii) the difference of the average player feature values of the two teams (can be negative). The first category aims to capture potential superiority of one team over the other that could lead to an unbalanced match. For instance, a team comprising players that have scored many goals in past matches could dominate a team with players that have been less successful at it. Such phenomena are captured using the difference of the team members' average feature values. Now even though the absolute value itself demonstrates the existence of a superior team, it does not tell which team that is, which requires the second category of features (signed difference). These categories provide us with insight into the similarity and differences in team ability. For example, a match with a large difference in the average attempts of goals between the two teams will probably be more unbalanced. Furthermore, we consider features that are based on player information that is available at the time of matchmaking. These include the skill ratings of players, their allocated roles in the game and the team sizes. We compute various transformations of these features to capture insights on the match composition. First, as is done in skill-based matchmaking systems, we compute the skill rating of each team by averaging the skill ratings of its players. We then compute the difference and absolute skill difference between the two teams. Other features we compute include the skill difference of the players with the highest skill ratings in each team, the skill difference of the players with the lowest skill ratings in each team as well as the standard deviation of the skill ratings in each team. These features showcase if there is a stronger or weaker link within any team that could affect the overall performance, or if one team has a much larger range of skill ratings compared to the other team. Finally, due to the importance of creating matches fast, many matchmaking systems create teams of different sizes. This leads to the last set of match features, namely the number of human players in each team. In particular, games usually position game bots (short for robots) in roles where human players are missing. However, the extent to which these can mimic how a human would play the game is rather limited, and they either result in dominating the game, or significantly under-performing. Therefore, having opposing teams with a similar number of human players is an important indicator of match balance. We stress that the described set of features implicitly models synergy among players with different role preferences and playstyles because the corresponding features are the input to a neural network-based model that captures the non-linear dependencies between player features. We do not present all of the match features due to restricted space; however, in Section \ref{sec:exp}, we do provide the most significant features impacting competitive balance. \spara{Feature pre-processing:} Finally, all features are standardized following z-score normalization \cite{zill2011advanced}. In total, we propose approximately 100 features for the 3v3 and 6v6 game modes, respectively. This number depends on the available set of roles and actions in a game. \subsection{Prediction model} \label{sec:predmodel} This section presents two categories of predictions models used for determining balanced matches. \spara{Probability of winning prediction model (Delalleau et al.~\cite{delalleau2012beyond}):} The probability of winning prediction model, proposed by Delalleau {\em et al.}~\cite{delalleau2012beyond}, trains a soft classifier that predicts the probability that either team wins given player, team, and match features. In this case, a match is considered to be balanced if the probability of winning is about $0.5$ for either team. \spara{Competitive balance prediction model (this work):} This work proposes using the definition of competitive balance to determine balanced matches. In particular, the predictor is tasked with regressing the final score difference between the two teams on the match features. The smaller the difference, the more competitive balanced the match is. This score difference value is used to determine match balance via a threshold function. We motivate the effectiveness of the aforementioned definition in Section ~\ref{sec:esd}, and compare the results with the probability of winning prediction model. We refer to this model as {{\texttt{NN}}} and its succession is as follows: \smallbreak\noindent{1)} The player feature sets $P_{j}$ containing the latest features of each player are retrieved from the database. \smallbreak\noindent{2)} For each team $T_{j\in\{1,2\}}$ the corresponding team features $t_{j\in\{1,2\}}$ are created. \smallbreak\noindent{3)} The match feature set $M$ combines into a single vector the individual features of each opposing team, along with the additional match-specific features: $ M = (t_{1},t_{2},m). $ \smallbreak\noindent{4)} Match features are summarized by a predictor. We compare a linear and a two-layer neural network predictor similar to the one in \cite{delalleau2012beyond} with fully connected layers followed by the Rectified Linear Unit (ReLU) activation. \smallbreak\noindent{5)} The output layer of the previous step returns a single, continuous value $r$ representing the final score difference prediction. Depending on the application, it might be useful to convert the real value into a binary label, denoting whether the match will be balanced or not. For this purpose, we use function $f:\mathbb{R}\to \{0,1\}$ defined by, $ f(r)= \mathbb{I}_{|r|< \theta}(r) $, where $\mathbb{I}$ denotes the indicator function, $r$ is the signed score difference, $\theta$ is a threshold hyperparameter for measuring competitive balance, and where 0 and 1 represent an unbalanced and balanced match, respectively. \spara{Selecting threshold hyperparameter $\theta$:} In some team sports games if one party leaves the match before it ends then they forfeit. In sports the forfeiting team loses with a predefined score difference, e.g., for soccer and hockey the match ends with 3-0, basketball 25-0, e.t.c. This means that in sports $\theta$ can be clearly defined as the score difference after a forfeit (a forfeited match can be considered unbalanced). Alternatively, $\theta$ could be treated as a hyperparameter that could be tuned for optimizing player engagement and retention. \iffalse {\bf \color{blue} Significantly summarize the next two para}In sports forfeit occurs when a match automatically ends because a team is unable to meet the basic standards for playing the game, either before the game begins or as a result of actions happening during the match. The forfeiting team loses with a predefined score difference, such as 3-0 based on the FIFA Disciplinary Code in soccer, or 25-0 based on FIBA in basketball tournaments. The exact same notion of forfeiting exists in team sport games, and the corresponding forfeiting score reflects an imbalanced match. We use this notion in team sport games to choose $\theta$ as the score difference of forfeiting matches. In the experimental section $\theta$ is equal to 3. To define $\theta$ in other types of games, the game designers can select the appropriate scoring event to optimize for balance, and then tune the hyperparameter, respectively. \fi \spara{Training and validating the prediction models:} The time sensitive nature of our data make them unnameable to a traditional data shuffling and K-fold cross validation procedure. Furthermore, recent matches are stronger predictors of competitive balance in upcoming matches than older matches are. Therefore, we perform training, validation, and testing as follows. Assume a total of matches that occurred within $K$ days that are used for the model's evaluation and hyperparameter tuning. We use the first $K-3$ days for training, the matches that occurred during days $K-2$ and $K-1$ for validation and the matches of day $K$ for testing. These base sets allow us to design and evaluate our model in an offline way, before deploying it into the matchmaking system. Now, we assume that there is a stream of incoming matches arriving at day $K+1$. We shift the first $K$ days by one, such that the training set contains data from the first $K-2$ days, the validation set includes the next $K-1$ and $K$ days, and the most recent chunk $K+1$ is used for testing. This procedure continues, and allows the system to continuously update the model using a larger training set, and selecting the most recent validation and testing parameters that capture current tendencies. \subsection{Competitively balance-based matchmaking} \label{sec:integration} In this section, we presented an architecture for predicting competitive balance to improve matchmaking. We emphasize that the results presented in this paper are based on existing player data, but none of the approaches have been deployed to a live matchmaking service. That said, here we describe how the proposed model can be deployed to a live matchmaking system. We assume a matchmaking system similar to the ones presented by Delalleau {{et al.}} \cite{delalleau2012beyond} and Zook et al. \cite{zook2019better}. Briefly, players enter a queue and the matchmaking system assembles teams using a sampling strategy, calculates the match quality, and either reassembles the teams if the quality is low, or launches the match. In a similar matchmaking system, the prediction model is integrated with the match quality computation step, and is used as an additional quality assessment addressing the competitive balance of a match. Predicting whether a match is going to be balanced has low computational overhead, while the prediction model itself can be trained offline. \section{Problem Definition \& Notations} \label{problem_definition} Throughout the discussion we consider a set of $k$ players $\mathcal{P} = \{P_{j}; j=1,\ldots,k\}$, two opposing teams denoted as $T_{1}$ and $T_{2}$ with team feature sets $t_{1}$ and $t_{2}$ respectively, and a match $M$. We use real-valued vectors to describe the i) player, ii) team and iii) match feature sets. In this setting, every player is described by their corresponding feature set, so we use notation $P_{j}$ to represent the player feature set of player $j$. Furthermore, each team $T_{j\in\{1,2\}}$ is described by its feature set $t_{j\in\{1,2\}}$, respectively. A match occurring between two opposing teams is described by its feature set $M$, defined by the combination of the feature sets of the opposing teams ($t_{1}$ and $t_{2}$), along with match-specific features ($m$), i.e., $M=(t_{1},t_{2},m)$. The goal of this paper is to \textit{predict competitive balance} in team online games. A match $M$ between teams $T_{1}$ and $T_{2}$ is \textit{competitive balanced}, if the difference between the number of successful scoring events achieved by $T_{1}$ with that of $T_{2}$ approaches zero \cite{csataljay2009performance,gomez2014performance,vaz2011importance}. Note that for the general case, a scoring event can be defined broadly and can be a different thing based on the game genre. For instance, scoring events during a match in FPS and MOBA games can be indicators of the team with more kills, with the most bases captured, or with the most men standing. \section{Related Work} \label{related_work} \spara{Player, team, and match features for matchmaking:} While traditional matchmaking systems deal with the creation of 1 player versus 1 player (1v1) matches, the popularity of team games has necessitated the need for systems that can create and match teams. These systems are typically team extensions of existing 1v1 skill-based systems, such as Elo \cite{elo1978rating}, Glicko \cite{glickman1999parameter} and Microsoft's TrueSkill \cite{herbrich2007trueskill}. For example, a team's skill might be represented by the mean Elo score of all its players. A major drawback of these approaches is that they represent a team by a single scalar value\textemdash its skill rating. This value, however, does not capture the complex dynamics of competitive team games, such as the distribution of roles in a team, player play style, team characteristics, etc. Recent research has sought to address the drawbacks of considering only skill ratings for team games by modeling team dynamics. Some researchers \cite{jimenez2011matchmaking,myslak2014developing} have taken advantage of how player skill ratings vary over different roles in a game to create player feature sets comprising role-specific skill levels. More recent research \cite{francillette2013players,wang2015thinking} has explored the enrichment of player feature sets with play styles (playing behavior of players during a game). For instance, Wang et al. \cite{wang2015thinking} experimentally show on the multiplayer online battle arena (MOBA) game \textit{League of Legends} that teams with a mix of both aggressive and defensive players are more competitive than teams of players of a single style. The main issue with these approaches is their focus on creating player-specific features, and using these features to create balanced teams. Teams, especially as they become larger, are a lot more complex entities than the sum of their individual members. A simple aggregation of the player features does not sufficiently capture match dynamics, such as individual duels between forwards and defenders, or rogue team members, that impact the players' enjoyment. Our work focuses on both, creating richer player feature sets, and considering features that capture team dynamics which aren't necessarily tied to a player's characteristics. In this regard, the team profiling approach of Delalleau {{et al.}} \cite{delalleau2012beyond} is closely related to our work. However, in that work the authors define a balanced match to be one where the probability of a team to win is close to 50\%. Our work extends the work presented by Delalleau {{et al.}} \cite{delalleau2012beyond} as follows; the authors raise the importance of having richer player feature sets, an issue that we address in this paper by considering generic attributes during the creation of player, team and match-specific features. \spara{Match balance:} Balanced matches are a strong indicator of matching opposing teams of similar strength and are more prone to lead to player enjoyment. However, a clear and concise definition of match balance is challenging \cite{jaffe2012evaluating}. The majority of realized matchmaking systems use the average team skill ratings to create matches and assumes that a match is balanced when the opposing teams have close average skill ratings \cite{butcher2008pluribus,leagueoflegends2018}. The accuracy of this approach has been challenged by Claypool {{et al.}} \cite{claypool2015surrender}, who surveyed players participating in skill-based matches and discovered that a majority of these players did not feel that the matches they were involved in were balanced. They conclude by stating that player skill ratings are at best useful for ranking players as opposed to creating matches based on them \cite{claypool2015surrender}. Towards improving the skill rating-based definition of balance, a line of research \cite{chen2016modeling,chen2016predicting,delalleau2012beyond,jaffe2012evaluating} suggests that balance exists when the probability of winning is close to 50\%. The recent works of Chen and Joachims \cite{chen2016modeling,chen2016predicting} extend the conventional Bradley-Terry model for the prediction of the winner in $k$v$k$ matches using multi-dimensional representations of the players. Their experiments show that the proposed model outperforms not only the Bradley-Terry model, but also a variety of baselines. Delalleau {{et al.}} \cite{delalleau2012beyond} extend this notion to team games and propose a neural network model to predict the probability of winning. Even though such probabilistic models allocate more space and time resources than the simple, but widely used skill-based approach, researchers have began shifting towards this new notion of balance, which is also used for matchmaking in a variety of games \cite{zook2019better}. Contrary to our setting, all of the aforementioned works approach the generation of matches assessing both probability of winning and player satisfaction.
{ "attr-fineweb-edu": 1.80957, "attr-cc_en_topic": 0, "domain": "arxiv" }
BkiUd9c5qYVBZEfUi_Th
\section{} \begin{abstract} The purpose of this paper is to determine whether basketball teams who choose to employ an offensive strategy that involves predominantly shooting three point shots is stable and optimal. We employ a game-theoretical approach using techniques from dynamical systems theory to show that taking more three point shots to a point where an offensive strategy is dependent on predominantly shooting threes is not necessarily optimal, and depends on a combination of payoff constraints, where one can establish conditions via the global stability of equilibrium points in addition to Nash equilibria where a predominant two-point offensive strategy would be optimal as well. We perform a detailed fixed-points analysis to establish the local stability of a given offensive strategy. We finally prove the existence of Nash equilibria via global stability techniques via the monotonicity principle. We believe that this work demonstrates that the concept that teams should attempt more three-point shots because a three-point shot is worth more than a two-point shot is therefore, a highly ambiguous statement. \end{abstract} \tableofcontents \section{Introduction} We are currently living in the age of analytics in professional sports, with a strong trend of their use developing in professional basketball. Indeed, perhaps, one of the most discussed results to come out of the analytics era thus far is the claim that teams should shoot as many three-point shots as possible, largely because, three-point shots are worth more than two-point shots, and this somehow is indicative of a very efficient offense. These ideas were mentioned for example by Alex Rucker \cite{rucker} who said ``When you ask coaches what's better between a 28 percent three-point shot and a 42 percent midrange shot, they'll say the 42 percent shot. And that's objectively false. It's wrong. \emph{If LeBron James just jacked a three on every single possession, that'd be an exceptionally good offense}. That's a conversation we've had with our coaching staff, and let's just say they don't support that approach.'' It was also claimed in the same article that ``The analytics team is unanimous, and rather emphatic, that every team should shoot more 3s including the Raptors and even the Rockets, who are on pace to break the NBA record for most 3-point attempts in a season.'' These assertions were repeated in \cite{rucker2}. In an article by John Schuhmann \cite{schuhmann}, it was claimed that ``It's simple math. A made three is worth 1.5 times a made two. So you don't have to be a great 3-point shooter to make those shots worth a lot more than a jumper from inside the arc. In fact, if you're not shooting a layup, you might as well be beyond the 3-point line. Last season, the league made 39.4 percent of shots between the restricted area and the arc, for a value of 0.79 points per shot. It made 36.0 percent of threes, for a value of 1.08 points per shot.'' The purpose of this paper is to determine whether basketball teams who choose to employ an offensive strategy that involves predominantly shooting three point shots is stable and optimal. Although this problem to the best of the author's knowledge has not been studied before in the literature, several studies that provide an in-depth quantitative analysis of various aspects of basketball games using statistical and game theoretical methods have been established in \cite{Perse2009612}, \cite{Zhang20072190}, \cite{Kvam2006788}, \cite{Larkey1997596}, \cite{Popescu200681}, \cite{DeMelo2008695}, \cite{Clair20071163}, \cite{Dodge20121}, \cite{Zhou20081141} and references therein. We will employ a game-theoretical approach using techniques from dynamical systems theory to show that taking more three point shots to a point where an offensive strategy is dependent on predominantly shooting threes is not necessarily optimal, and depends on a combination of payoff constraints, where one can establish conditions via the global stability of equilibrium points in addition to Nash equilibria where a predominant two-point offensive strategy would be optimal as well. \section{The Dynamical Equations} For our model, we consider two types of NBA teams. The first type are teams that employ two point shots as the predominant part of their offensive strategy, while the other type consists of teams that employ three-point shots as the predominant part of their offensive strategy. There are therefore two predominant strategies, which we will denote as $s_{1}, s_{2}$, such that we define \begin{equation} \mathbf{S} = \left\{s_{1}, s_{2}\right\}. \end{equation} We then let $n_{i}$ represent the number of teams using $s_{i}$, such that the total number of teams in the league is given by \begin{equation} N = \sum_{i =1}^{k} n_{i}, \end{equation} which implies that the proportion of teams using strategy $s_{i}$ is given by \begin{equation} x_i = \frac{n_{i}}{N}. \end{equation} The state of the population of teams is then represented by $\mathbf{x} = (x_{1}, \ldots, x_{k})$. It can be shown \cite{webb} that the proportions of individuals using a certain strategy change in time according to the following dynamical system \begin{equation} \label{eq:dyn1} \dot{x}_{i} = x_{i}\left[\pi(s_{i}, \mathbf{x}) - \bar{\pi}(\mathbf{x})\right], \end{equation} subject to \begin{equation} \label{eq:constr1} \sum_{i =1}^{k} x_{i} = 1, \end{equation} where we have defined the average payoff function as \begin{equation} \label{eq:avpayoff} \bar{\pi}(\mathbf{x}) = \sum_{i=1}^{k} x_{i} \pi(s_{i}, \mathbf{x}). \end{equation} Now, let $x_{1}$ represent the proportion of teams that predominantly shoot two-point shots, and let $x_{2}$ represent the proportion of teams that predominantly shoot three-point shots. Further, denoting the game action set to be $A = \left\{T, Th\right\}$, where $T$ represents a predominant two-point shot strategy, and $Th$ represents a predominant three-point shot strategy. As such, we assign the following payoffs: \begin{equation} \pi(T,T) = \alpha, \quad \pi(T,Th) = \beta, \quad \pi(Th, T) = \gamma, \quad \pi(Th,Th) = \delta. \end{equation} We therefore have that \begin{equation} \label{payoff1} \pi(T,\mathbf{x}) = \alpha x_{1} + \beta x_{2}, \quad \pi(Th, \mathbf{x}) = \gamma x_{1} + \delta x_{2}. \end{equation} From \eqref{eq:avpayoff}, we further have that \begin{equation} \label{payoff2} \bar{\pi}(\mathbf{x}) = x_{1} \left( \alpha x_{1} + \beta x_{2}\right) + x_{2} \left(\gamma x_{1} + \delta x_{2}\right). \end{equation} From Eq. \eqref{eq:dyn1} the dynamical system is then given by \begin{eqnarray} \label{eq:x1d} \dot{x}_{1} &=& x_{1} \left\{ \left(\alpha x_{1} + \beta x_{2} \right) - x_{1} \left( \alpha x_{1} + \beta x_{2}\right) - x_{2} \left(\gamma x_{1} + \delta x_{2}\right) \right\}, \\ \label{eq:x2d} \dot{x}_{2} &=& x_{2} \left\{ \left( \gamma x_{1} + \delta x_{2}\right) -x_{1} \left( \alpha x_{1} + \beta x_{2}\right) - x_{2} \left(\gamma x_{1} + \delta x_{2}\right) \right\}, \end{eqnarray} subject to the constraint \begin{equation} \label{eq:constr2} x_{1} + x_{2} = 1. \end{equation} Indeed, because of the constraint \eqref{eq:constr2}, the dynamical system is actually one-dimensional, which we write in terms of $x_{1}$ as \begin{equation} \label{eq:x1d2} \dot{x}_{1} = x_{1} \left(-1 + x_{1}\right) \left[\delta + \beta \left(-1 + x_{1}\right) - \delta x_{1} + \left(\gamma-\alpha\right)x_{1}\right]. \end{equation} From Eq. \eqref{eq:x1d2}, we immediately notice some things of importance. First, we are able to deduce just from the form of the equation what the invariant sets are. Following \cite{ellis}, we note that for a dynamical system $\mathbf{x}' = \mathbf{f(x)} \in \mathbf{R^{n}}$ with flow $\phi_{t}$, if we define a $C^{1}$ function $Z: \mathbf{R}^{n} \to \mathbf{R}$ such that $Z' = \alpha Z$, where $\alpha: \mathbf{R}^{n} \to \mathbf{R}$, then, the subsets of $\mathbf{R}^{n}$ defined by $Z > 0, Z = 0$, and $Z < 0$ are invariant sets of the flow $\phi_{t}$. Applying this notion to Eq. \eqref{eq:x1d2}, one immediately sees that $x_1 > 0$, $x_1 = 0$, and $x_1 < 0$ are invariant sets of the corresponding flow. Further, there also exists a symmetry such that $x_{1} \to -x_{1}$, which implies that without loss of generality, we can restrict our attention to $x_{1} \geq 0$. \section{Fixed-Points Analysis} With the dynamical system in hand, we are now in a position to perform a fixed-points analysis. There are precisely three fixed points, which are invariant manifolds and are given by: \begin{equation} P_{1}: x_{1}^{*} = 0, \quad P_{2}: x_{1}^{*} = 1, \quad P_{3}: x_{1}^{*} = \frac{\beta - \delta}{-\alpha + \beta - \delta + \gamma}. \end{equation} Note that, $P_{3}$ actually contains $P_{1}$ and $P_{2}$ as special cases. Namely, when $\beta = \delta$, $P_{3} = 0 = P_{1}$, and when $\alpha = \gamma$, $P_{3} = 1 = P_{2}$. We will therefore just analyze, the stability of $P_{3}$. $P_{3} = 0$ represents a state of the population where all teams predominantly shoot three-point shots. Similarly, $P_{3} = 1$ represents a state of the population where all teams predominantly shoot two-point shots, We additionally restrict \begin{equation} 0 \leq P_{3} \leq 1 \Rightarrow 0 \leq \frac{\beta - \delta}{-\alpha + \beta - \delta + \gamma} \leq 1, \end{equation} which implies the following conditions on the payoffs: \begin{equation} \left[\delta < \beta \cap \gamma \leq \alpha \right] \cup \left[\delta = \beta \cap \left(\gamma < \alpha \cup \gamma > \alpha \right) \right] \cup \left[\delta > \beta \cap \gamma \leq \alpha \right]. \end{equation} With respect to a stability analysis of $P_{3}$, we note the following. The point $P_{3}$ is a: \begin{itemize} \item Local sink if: $\{\delta < \beta\} \cap \{\gamma > \alpha\}$, \item Source if: $\{\delta > \beta\} \cap \{\gamma < \alpha\}$, \item Saddle: if: $\{\delta = \beta \} \cap (\gamma < \alpha -\beta + \delta \cup \gamma > \alpha - \beta + \delta)$, or $(\{\delta < \beta\} \cup \{\delta > \beta\}) \cap \gamma = \frac{\alpha \delta - \alpha \beta}{\delta - \beta}$. \end{itemize} Further, the system exhibits some bifurcations as well. In the neigbourhood of $P_{3} = 0$, the linearized system takes the form \begin{equation} x_{1}' = \beta - \delta. \end{equation} Therefore, $P_{3} = 0$ destabilizes the system at $\beta = \delta$. Similarly, $P_{3} = 1$ destabilizes the system at $\gamma = \alpha$. Therefore, bifurcations of the system occur on the lines $\gamma = \alpha$ and $\beta = \delta$ in the four-dimensional parameter space. \section{Global Stability and The Existence of Nash Equilibria} With the preceding fixed-points analysis completed, we are now interested in determining global stability conditions. The main motivation is to determine the existence of any Nash equilibria that occur for this game via the following theorem \cite{webb}: If $\mathbf{x}^{*}$ is an asymptotically stable fixed point, then the symmetric strategy pair $[\sigma^{*}, \sigma^{*}]$, with $\sigma^{*} = \mathbf{x}^*$ is a Nash equilibrium. We will primarily make use of the monotonicity principle, which says \cite{ellis} let $\phi_{t}$ be a flow on $\mathbb{R}^{n}$ with $S$ an invariant set. Let $Z: S \to \mathbb{R}$ be a $C^{1}$ function whose range is the interval $(a,b)$, where $a \in \mathbb{R} \cup \{-\infty\}, b \in \mathbb{R} \cup \{\infty\}$, and $a < b$. If $Z$ is decreasing on orbits in $S$, then for all $\mathbf{x} \in S$, \begin{equation*} \omega(\mathbf{x}) \subseteq \left\{\mathbf{s} \in \partial S | \lim_{\mathbf{y} \to \mathbf{s}} Z(\mathbf{y}) \neq \mathbf{b}\right\}, \end{equation*} \begin{equation*} \alpha(\mathbf{x}) \subseteq \left\{\mathbf{s} \in \partial S | \lim_{\mathbf{y} \to \mathbf{s}} Z(\mathbf{y}) \neq \mathbf{a}\right\}. \end{equation*} Consider the function \begin{equation} Z_{1} = \log \left(-1 + x_{1}\right). \end{equation} Then, we have that \begin{equation} \dot{Z}_{1}= x_{1} \left[\delta + \beta \left(-1 + x_{1}\right) - \delta x_{1} + x_{1} \left(\gamma - \alpha\right)\right]. \end{equation} For the invariant set $S_1 = \{0 < x_{1} < 1\}$, we have that $\partial S_{1} = \{x_{1} = 0\} \cup \{x_{1} = 1\}$. One can then immediately see that in $S_{1}$, \begin{equation} \dot{Z}_{1} < 0 \Leftrightarrow \left\{\beta > \delta\right\} \cap \left\{\alpha \geq \gamma\right\}. \end{equation} Therefore, by the monotonicity principle, \begin{equation} \omega(\mathbf{x}) \subseteq \left\{\mathbf{x}: x_{1} = 1 \right\}. \end{equation} Note that the conditions $\beta > \delta$ and $\alpha \geq \gamma$ correspond to $P_{3}$ above. In particular, for $\alpha = \gamma$, $P_{3} = 1$, which implies that $x_{1}^{*} = 1$ is globally stable. Therefore, under these conditions, the symmetric strategy $[1,1]$ is a Nash equilibrium. Now, consider the function \begin{equation} Z_{2} = \log \left(x_{1}\right). \end{equation} We can therefore see that \begin{equation} \dot{Z}_{2} = \left[-1 + x_{1}\right] \left[\delta + \beta\left(-1+x_{1}\right) - \delta x_{1} + \left(-\alpha + \gamma\right) x_{1}\right]. \end{equation} Clearly, $\dot{Z}_{2} < 0$ in $S_{1}$ if for example $\beta = \delta$ and $\alpha < \gamma$. Then, by the monotonicity principle, we obtain that \begin{equation} \omega(\mathbf{x}) \subseteq \left\{\mathbf{x}: x_{1} = 0 \right\}. \end{equation} Note that the conditions $\beta = \delta$ and $\alpha < \gamma$ correspond to $P_{3}$ above. In particular, for $\beta = \delta$, $P_{3} = 0$, which implies that $x_{1}^{*} = 0$ is globally stable. Therefore, under these conditions, the symmetric strategy $[0,0]$ is a Nash equilibrium. In summary, we have just shown that for the specific case where $\beta > \delta$ and $\alpha = \gamma$, the strategy $[1,1]$ is a Nash equilibrium. On the other hand, for the specific case where $\beta = \delta$ and $\alpha < \gamma$, the strategy $[0,0]$ is a Nash equilibrium. \section{Discussion} In the previous section which describes global results, we first concluded that for the case where $\beta > \delta$ and $\alpha = \gamma$, the strategy $[1,1]$ is a Nash equilibrium. The relevance of this is as follows. The condition on the payoffs thus requires that \begin{equation} \label{eq:presult1} \pi(T,T) = \pi(Th,T), \quad \pi(T,Th) > \pi(Th,Th). \end{equation} That is, given the strategy adopted by the other team, neither team could increase their payoff by adopting another strategy if and only if the condition in \eqref{eq:presult1} is satisfied. Given these conditions, if one team has a predominant two-point strategy, it would be the other team's best response to also use a predominant two-point strategy. We also concluded that for the case where $\beta = \delta$ and $\alpha < \gamma$, the strategy $[0,0]$ is a Nash equilibrium. The relevance of this is as follows. The condition on the payoffs thus requires that \begin{equation} \label{eq:presult2} \pi(T,Th) = \pi(Th,Th), \quad \pi(T,T) < \pi(Th,T). \end{equation} That is, given the strategy adopted by the other team, neither team could increase their payoff by adopting another strategy if and only if the condition in \eqref{eq:presult2} is satisfied. Given these conditions, if one team has a predominant three-point strategy, it would be the other team's best response to also use a predominant three-point strategy. Further, we also showed that $x_{1} = 1$ is globally stable under the conditions in \eqref{eq:presult1}. That is, if these conditions hold, every team in the NBA will eventually adopt an offensive strategy predominantly consisting of two-point shots. The conditions in \eqref{eq:presult2} were shown to imply that the point $x_{1} = 0$ is globally stable. This means that if these conditions now hold, every team in the NBA will eventually adopt an offensive strategy predominantly consisting of three-point shots. We also provided through a careful stability analysis of the fixed points criteria for the local stability of strategies. For example, we showed that a predominant three-point strategy is locally stable if $\pi(T,Th) - \pi(Th,Th) < 0$, while it is unstable if $\pi(T,Th) - \pi(Th,Th) \geq 0$. In addition, a predominant two-point strategy was found to be locally stable when $\pi(Th,T) - \pi(T,T) < 0$, and unstable when $\pi(Th,T) - \pi(T,T) \geq 0$. There is also they key point of which one of these strategies has the highest probability of being executed. From \cite{webb}, we know that \begin{equation} \pi(\sigma,\mathbf{x}) = \sum_{s \in \mathbf{S}} \sum_{s' \in \mathbf{S}} p(s) x(s') \pi(s,s'). \end{equation} That is, the payoff to a team using strategy $\sigma$ in a league with profile $\mathbf{x}$ is proportional to the probability of this team using strategy $s \in \mathbf{S}$. We therefore see that a team's optimal strategy would be that for which they could maximize their payoff, that is, for which $p(s)$ is a maximum, while keeping in mind the strategy of the other team, hence, the existence of Nash equilibria. Hopefully, this work also shows that the concept that teams should attempt more three-point shots because a three-point shot is worth more than a two-point shot is a highly ambiguous statement. In actuality, one needs to analyze what offensive strategy is optimal which is constrained by a particular set of payoffs. \newpage \bibliographystyle{ieeetr}
{ "attr-fineweb-edu": 1.461914, "attr-cc_en_topic": 0, "domain": "arxiv" }
BkiUdgM4eIXh0sNxw4pg
\section{Introduction} Millions of travelers book hotel accommodation over the Internet each year. Modern travelers rely on peer options, electronic word of mouth (eWOM), and peer reviews. Popular online travel websites offer reliable reviews and prices \cite{bb1}. Therefore, customers choose to inspect and compare different options on meta-search sites like Kayak.com, Trivago, and TripAdvisor before booking their accommodations. Online travel agencies (OTA's) advertise their website offers on meta-search bidding engines. If the OTA chooses to have a Cost-Per-Click (CPC) ad campaign, the OTA promises to pay a certain amount for each click a certain hotel gets from the platform under predefined conditions. The amount to pay per click is the OTA's $bid$ amount. The problem of predicting the number of clicks a hotel would get for a certain bid amount is an important step in the OTA's advertisement campaign management on a meta-search engine, as $bid \times number of clicks$ defines the cost to be generated. In one study, state-of-the-art prediction algorithms and Extreme Gradient Boosting (XGBoost) \cite{bb2} regressor as well as a minimum Redundancy-Maximum Relevance (mRMR) \cite{bb3} feature selection algorithm were executed to predict the daily clicks to be received per hotel, using a large OTA's data from Turkey \cite{bb3.1}. The data set received from the meta-search bidding engine contained both numerical and categorical features, with each column having missing and outlier values. The number of clicks as the multiplication of the predicted click-through rate (CTR) and the predicted hotel impression were modelled. The highest R-Squared values obtained in the prediction of individual-hotel based CTR and impression values are both achieved by XGBoost in this work. Another study aimed to forecast how many impressions and clicks a hotel will acquire as well as how many rooms it will sell via a meta-search bidding engine \cite{bb4}. The given model predicts how much money an OTA's hotels will make the following day. The authors demonstrate that by incorporating OTA-specific information into prediction models, the generalization of models improves and better results are obtained. In that study, the best results were obtained using tree-based boosting techniques. Predicting hotel searches, clicks, and bookings is a challenging task due to many external factors such as seasonality, events, location, and hotel-based properties. Capturing such properties increases the accuracy of prediction models. Due to the high variance in daily OTA data, the use of non-linear prediction methods and creating relevant features with a time-delayed data preprocessing approach is adopted in a work trying to forecast daily room sales for each hotel in a meta-search bidding platform \cite{bb5}. They applied XGBoost, random forest, gradient boosting, deep neural networks, and generalized linear models (GLM) \cite{bb5.1}. The most successful model to predict bookings is gradient boosting, applied on a dataset enriched by features that can summarize the trends in the target variable well. The demand for hotel rooms in the hotel industry in Turkey between the years 2002-2013 is estimated using ARIMA by Efendioglu and Bulkan \cite{bb6}. In their study, they determined the hotel room capacity according to the cost of the unsold rooms and the ARIMA distribution. They also reported that the hotel room demand in the country could be affected by outer factors such as political crises and warnings about terrorism. This work shows the non-deterministic nature of hotel room demand and how unpredictable factors suddenly affect the click prediction problem. In the literature, studies are focusing on the problem of predicting the CTR of a sponsored display advertisement to be shown on a search engine, related to a query. Click and CTR prediction is an ongoing research for both industry and academia \cite{bb7} \cite{bb8} \cite{bb9}. Our aim of predicting the number of clicks is highly related to the CTR prediction problem, hence those studies are investigated to get a better understanding of related work. In order to predict ad clicks, Google makes use of logistic regression with improvements in the context of traditional supervised learning based on an FTRL-Proximal online learning algorithm \cite{bb10} for better sparsity and convergence. Microsoft's Bing Search Engine proposes a new Bayesian online learning algorithm for CTR prediction for sponsored search \cite{bb11}, which is based on a probit regression model that maps discrete or real-valued input features to probabilities. The scalability of the algorithm is ensured through a principled weight pruning procedure and an approximate parallel implementation. Yahoo adopts a machine learning framework based on Bayesian logistic regression to predict click-through and conversion rates \cite{bb12}, which is simple, scalable, and efficient. Facebook combines decision trees with logistic regression \cite{bb13}, generating 3\% better results in click prediction, compared to other methods. Ensemble learning \cite{bb14} is a machine learning model combination that gets decisions from various models to enhance the overall performance. The ensemble approach provides the stability and low-variety predictions of machine learning algorithms. It builds a set of decision-makers, namely classifiers and regressors, with various techniques as final decisions \cite{bb15}. An ensemble model is proposed by Wang et al. to predict the CTR of advertisements on search engines \cite{bb16}. Firstly, they tried several Maximum Likelihood Estimation (MLE)-based methods to exploit the training set; including Online Bayesian Probit Regression (BPR) \cite{bb16.1}, Support Vector Machine (SVM), and Latent Factor Model (LFM) \cite{bb16.2} and optimized them by selecting the most descriptive features. They have created a rank-based ensemble model using the outputs of BPR, SVM, and MLE. The results are ensembled using harmonic means to generate the final blending submission. The proposed model's output shows an on average 0.013 improvement over the individual models. Ensemble learning techniques implemented by King et al. to investigate whether they could increase the profitability of pay-per-click (PPC) campaigns \cite{bb17}. They applied voting, bootStrap aggregation (Bagging) \cite{bb17.1}, stacked generalization (or stacking) \cite{bb17.2}, and metacost \cite{bb17.3} techniques to four base classifiers, namely, Naïve Bayes, logistic regression, decision trees, and Support Vector Machines. The research in this work analyzed a data set of PPC advertisements placed on the Google search engine, aiming to classify PPC campaign success. They used average accuracy, recall, and precision metrics to measure the performance of both base classifiers and ensemble models. They also introduced the evaluation metric of total campaign portfolio profit and illustrated how relying on overall model accuracy can be misleading. They conclude that applying ensemble learning techniques in PPC marketing campaigns can achieve higher profits. Eight ensemble methods were proposed by Ling et al. to accurately estimate the CTR in sponsored search ads \cite{bb18}. A single model would lead to sub-optimal accuracy, and the regression models all have different advantages and disadvantages. The ensemble models are created via bagging, boosting, stacking, and cascading. The training data is collected from historical ads' impressions and the corresponding clicks. The Area under the Receiver Operating Characteristic Curve (AUC) and Relative Information Gain (RIG) metrics are computed against the testing data to evaluate prediction accuracy. They conclude that boosting is better than cascading for the given problem. Boosting neural networks with gradient boosting decision trees turned out to be the best model in the given setting. They conclude that the model ensemble is a promising direction for CTR prediction; meanwhile, domain knowledge is also essential in the ensemble design. Etsy, an online e-commerce platform, displays promoted search results, which are similar to sponsored search results and our problem with meta-search bidding engines. CTR prediction is utilized in the system to determine the ranking of the ads \cite{bb19}. They found out that different features capture different aspects, so they classified the features as being historical and content-based. They train separate CTR prediction models based on historical and content-based features, separately. Then, these individual models are combined with a logistic regression model. They reported AUC, Average Impression Log Loss, and Normalized Cross-Entropy metrics to compare the models to non-trivial baselines on a large-scale real-world dataset from Etsy, demonstrating the effectiveness of the proposed system. In this study, we utilize ensemble learning pipelines to predict the number of clicks a hotel will receive the next day, and comparing substantial amount of stand-alone prediction performance of the models. \section{Overview of the Proposed System} \begin{figure} \includegraphics[width=\textwidth]{Figures/overview_of_the_system_v1.2.jpg} \caption{Overview of the System. The main train set is divided into two subsets (train and test) to assess the importance of features. These are used to determine the most representative feature subspace by testing with the individual dataset that should be isolated from the actual test set. Accordingly, Bayesian hyperparameter optimization is applied to each individual model via training with a sub-train set. The dimensionality of the main test set is reduced over a predefined feature subspace, and the model is tested over five different model pipelines, including individual ten regressor models, simple averaged and weighted averaged ensemble models, and stack and blend ensemble pipelines.} \label{fig1} \end{figure} There are five primary components in the proposed system. The complete system's flow diagram is depicted in Fig.-\ref{fig1}. To summarize, queries are used to retrieve the dataset from the database. Preprocessing is used to extract time-domain seasonal decomposition features with suitable data cleaning in the next stage. XGBoost, LightGBM (LGBM) \cite{bb20} and Stochastic Gradient Descent (SGD) \cite{bb21} algorithms are then subjected to hyper-parameter tuning. In the final step, individual and ensembled models are trained and tested with the same train and test sets to generate click predictions. Each model's $R^2$ score is presented, and 46 distinct models are trained and tested via the proposed system. \subsection{Dataset Generation and Data Preprocessing} The data is retrieved from a major OTA company based on Turkey. Contents of the meta-search platform's daily reports are combined with the data retrieved from the OTA. The dataset contains both numerical and categorical features. Some of the columns are eliminated during the data analysis phase as they contain a high ratio of missing values. In this study, we have replaced the missing values with the most common value and the average of the related feature for categorical and numerical features, respectively. In addition to OTA's data, some external features are added to the dataset in order to explain the state of the economical and seasonal properties of the environment. Some simple external data examples are daily weather information and daily exchange rates. Data enrichment improves the quality of the dataset. The closeness of the related day to the next public holiday and the length of the holiday are also added as additional numerical variables. In order to improve the accuracy and generalization ability of the prediction model, additional features are generated from the data following a sliding-window (time-delay) approach. For example, the average and standard deviation of numerical values for some specific time periods (such as the last 3, 7, and 30 days) are calculated and used as input features for prediction. The aim of adding such features is to improve the accuracy and generalization ability of the prediction model. Feature space is enriched with the seasonal decomposition of some time-series features. Seasonal decomposition is a naive decomposition model that generates additive components by breaking the original feature into three. The output of the algorithm is T: Trend, S: Seasonality, and e: Residual, where $Y[t] = T[t] + S[t] + e[t]$. The seasonal component is first removed by applying a convolution filter to the data. The average of this smoothed series for each period is the returned seasonal component \cite{bb22}. Decomposed seasonality, trend, and residual values are added to the dataset as new features. As a final step, feature one-hot encoding is proposed for some of the string-based features and binarized. In the last step, the feature set is normalized with min-max scaling to force values to be between 0 and 1. \subsection{XGBoost-based recursive feature elimination} XGBoost is the part of gradient boosting decision tree which operate via regularization of the tree framework. By using gradient boosting to create the boosted trees and collect the feature scores in an effective manner, each feature's significance to the training model is indicated \cite{bb22.1}. The formula calculation the feature importance of every feature $F_n$ is shown in Eq. ~\ref{eq:XGB_featureimportance}. \begin{equation} F_n(T) = \sqrt{\frac{1}{E} \sum_{e=1}^{E} \hat{i}^{2}(T_e)} \label{eq:XGB_featureimportance} \end{equation} \noindent There is a subdivision of each node into two regions at every node $e$ for each feature $n$ as a part of the feature space $F_n$ from a given single decision tree $T$. The maximally forecasted score boosting rate $\hat{i}^{2}$ represents the metric of squared error shifts of the cost function from the given XGBoost regression outcome of an additive tree $T_e$. The summation of the squared importance over all trees $E$ proposes the summarization of the square importance of the given feature $n$. Accordingly, the root mean squared importance manifests the absolute importance factor of the feature. The estimation of such an improvement depends on replacing the actual feature value in space with random noise to determine a relative magnitude shift in the final regression performance. Running multiple trees simultaneously provides a better understanding of the average importance of the feature. In the next step, the customized recursive feature elimination algorithm is used to minimize the feature space \cite{bb22.2}. The algorithm \ref{alg:xgboost_recursive_alg} shows the procedure of the flow. The goal is to cover the features ($feature\_subspace$) that represent best the feature importance levels in a descending order. To avoid the complexity of the classical recursive-based feature elimination due to the large feature space, the initial feature importance values are considered as bias factors for the features. Given that the randomization factor of the selected features will be auto-biased in the subspace, such a specialization significantly reduces the elimination process. $r^2\_score$ value of a new $feature\_subspace$ is calculated within every iteration until convergence occurs ($r^2\_temp$ value stop being exceeded by $r^2\_score$). Again XGBoost regressor is selected as feature sub-space evaluator. \begin{algorithm}[H] \caption{Recursive XGBoost dimensionally reduction algorithm} \label{alg:xgboost_recursive_alg} \KwData{\\$ \;\;\;\;\;\;\; FI = sort\_descending(feature\_importances)$ \\ $\;\;\;\;\;\;\; r^2\_temp = 0$} \KwResult{\\$ \;\;\;\;\;\;\; feature\_subspace$} \For{$FI_0 \;\; in \;\; FI$} { $feature\_subspace = feature\_space\:(FI_0 < FI)$ \\ $model = initialize\_XGB\_regressor\:()$ \\ $model = XGB\_regressor\_train\:(train\_data, \; train\_labels)$ \\ $r^2\_score = XGB\_regressor\_test\:(model, \; test\_data, \; test\_labels)$ \\ \eIf{$r^2\_score < r^2\_temp$} { \Return $(feature\_subspace)$ }{ $r^2\_temp = r^2\_score$ } } \end{algorithm} \subsection{Bayesian Hyper-parameter Optimization} Hyper-parameter optimization is an essential approach for some machine learning models to enhance prediction performance. There are a few algorithms for tuning hyper-parameters. One of them is a Grid Search \cite{bb23} which tries each combination of given hyper-parameter candidates of a model. Another optimization algorithm is known as random search \cite{bb24}, which randomly extracts hyper-parameter combinations and tries to reach local optima of a performance score. However, none of them are able to reach successful local optima of performance in a short period. Bayesian hyper-parameter optimization \cite{bb25} is a relatively more powerful and efficient algorithm for hyper-parameter tuning. It aims to reach a global optimum in a much shorter time than grid search. There is a probabilistic model of $f(x)$ that aims to be exploited to make decisions about where $X$ is accepted as the next performing function. This procedure helps to find the minimum of non-convex functions in a few epochs, which positively effects the performance. The evaluation metric to rank hyper-parameter combinations through input data is $R-squared (R^2)$. $R-squared$ is a statistical measure that represents the proportion of the variance for a dependent variable that's explained by an independent variable or variables in a regression model. The formula of $R^2$ is shown in Eq.~\ref{eq:R2}. \begin{equation} R2 = 1 - \frac{Explained Variation}{Total Variation} \label{eq:R2} \end{equation} In this work, $R^2$ values of individual machine learning algorithms (XGBoost , LightGBM , SGD, Lasso \cite{bb26}, Lasso Lars \cite{bb27}, Ridge \cite{bb28}, Bayesian Ridge \cite{bb29}, Huber \cite{bb30}, Passive Aggressive Regressors \cite{bb31} and Elastic Net \cite{bb32}) are used and compared in ensemble models. \subsection{Ensembling} If there are M models with errors extracted from the same dataset which are uncorrelated with them, the average error of a model is theoretically reduced by some factor by simply averaging the model outputs. On the other hand, if some of the model outputs have lower performance and are not fit to predict results as well as others, overall error may not be reduced or even increase in some cases. \subsubsection{Average \& Weighted Average of Model Outputs} The first and most basic ensembling approach is to take an average of various model outputs. There are two different averaging techniques for ensembling. The first one is taking a mean of predicted values. It provides a lower variance of predicted values since different algorithms proceed to predict various aspects of the input data set. The formula for an average of model outputs is shown in Eq.~\ref{eq:avg}. \begin{equation} Avg_i = \frac{\sum_{r}^n pi_r}{n} \label{eq:avg} \end{equation} where $i$ is the $i^{th}$ sample, $r$ is regressor model, $pi_r$ is individual probability of given regressor and $n$ is the number of models used. However, some machine learning models perform worse than others in terms of prediction, culminating in a poorer overall ensemble prediction performance than some individual regressor prediction performances. The fundamental reason for this is because we give weak regressors the same weight as other ones that provide decent individual performance. As a consequence, while taking an average of all estimations, the weighted average is also utilized in this study to eliminate the detrimental influence of low-performance models. Weights are produced using each model's individual $R^2$ score and scaled between 0 and 1 to standardize the weight of each regressors, ensuring that the sum of all weights is 1. This method allows models that predict higher performance to have a greater impact on final prediction than models that predict lower performance. The formula of the weighted average of model outputs is shown in Eq.~\ref{eq:wavg}. \begin{equation} \begin{split} Wavg_i = \sum_{r} w_r * pi_r, \\ r \in R \, for \, i= 1 \, to \, N, \\ \sum_{r} w_r = 1 \end{split} \label{eq:wavg} \end{equation} where $r$ is the chosen regressor model, $w_r$ is normalized individual $R^2$ performance of regressor. $r$, $pi_r$ is prediction result of regressor $r$ of $i$'th sample and $N$ is the number of models used. \subsubsection{Stack Ensemble Model} Stack Ensemble algorithm, assemble results of individual results for different models to make an intermediate input dataset, and the final model is used to create a final regression result. In the proposed approach, ten different models (XGBoost, LGBM, SGD, Lasso, Lasso Lars, Ridge, Bayesian Ridge, Huber, Passive Aggressive Regressors, and Elastic Net) are trained to stack their extracted predictions, and the intermediate dataset, which is the input to ensemble regressors, is also trained with four different meta-regressor models including XGB, Lasso, Bayesian ridge, and linear regressions for the final click predictions. Additional meta-learners are also tried, but due to their immense poor performance, those models are discarded and do not appear in the outcomes of model variants. Stacking the individual predictions enables us to analyze the intermediate regressor model to linearly weight results to create a learnable weighted average of provided predictions through each sample of input data. Overall ensemble model variations are indicated in Fig.~\ref{fig:stack_blend_ensemble} along with the associations between them. \begin{figure} [H] \includegraphics[width=0.93\columnwidth]{Figures/stack_blend_ensemble_v1.2.jpg} \caption{Ensemble model pipelines. Individual models are trained via a dimensionally reduced training set. Model predictions are further operated via four ensemble methods: taking the prediction list's average and a weighted average to increase the positive bias for some regressors with better prediction performance; four meta-regressor variations are fed by mediated input features; the blend ensemble learning pipeline via combining a collection of initial feature sets with model prediction results, and feeding the blend into four different meta-regressor variations.} \label{fig:stack_blend_ensemble} \end{figure} \subsubsection{Blend Ensemble Model} The Stack ensemble method and the Blend ensemble algorithm \cite{bb34} have similar designs. The separate outcomes of regressor models are assembled in the first stage. Additionally, the individual model outcomes are merged with a dimensionally reduced featureset, which adds mediated features extracted as predicted clicks with knowledge of intended predictions to produce an expanded feature dimension. Similar to stack ensemble models, XGBoost, LightGBM, SGD, Lasso, Lasso Lars, Ridge, Bayesian Ridge, Huber, Passive Aggressive Regressors, and Elastic Net are used to stack their given prediction outputs and blended with the input feature set. Then, the blended dataset is also trained with four different models same as the ones (XGB, Lasso, Bayesian ridge, and linear regressions) chosen for the stack ensemble meta-learners to extract four different $R^2$ results. \section{Experiments and Results} Instead of splitting a dataset into train and test with some percentage, daily click predictions of each hotel are estimated. Accordingly, the train set is designed from the earliest day until test day that clicks will be predicted. By using this approach, 11 consecutive days are chosen as test days and 11 corresponding $R^2$ test scores are produced by processing four different ensembling models (Average \& weighted average, stack ensemble, and blend ensemble). Besides, individual $R^2$ test scores of ten regressor models (XGBoost, LightGBM, SGD, Lasso, Lasso Lars, Ridge, Bayesian Ridge, Huber, Passive Aggressive Regressors, and Elastic Net) are reported for the control group, and efficiency of ensemble models is evaluated. For each test day, 22 different predictions are measured (10 individual predictions, average \& weighted average predictions, five stack ensemble prediction, and four blend ensemble predictions). $R^2$ score of each prediction is saved and the average of each test $R^2$ score is calculated. The average $R^2$ test scores of 21 model types are shown in Fig.~\ref{fig:testR2Scores}. \begin{figure}[H] \includegraphics[width=0.9\columnwidth]{Figures/R2sTable_cropped.pdf} \caption{Overall Test $R^2$ Scores for Each Regressor Model} \label{fig:testR2Scores} \end{figure} \subsection{Click prediction performances of individual models} Individual regressor predictions of all models (XGBoost, LightGBM, SGD, Lasso, Lasso Lars, Ridge, Bayesian Ridge, Huber, Passive Aggressive Regressors, and Elastic Net) are reported for the control group, and $R^2$ scores of models are evaluated as 0.485, 0.538, 0.497, 0.496, 0.272, 0.578, 0.579, 0.514, 0.557, and -0.012 respectively. \subsection{Click prediction performances of ensemble models} The performance of the ensemble model largely exceeded the results of the individual models, with the highest $R^2$ value of 0.639 shared by three stack ensemble models (ensemble stack with linear regression; ensemble stack with Lasso; and ensemble stack with Bayesian ridge). According to these models, ensemble blending with Lasso and ensemble blending with Bayesian ridge regressors came second at 0.638. The key detail here is that the six best-performing models are ensemble ones. The performance drops relatively significantly on a more primitive ensemble model, the weighted average predictor, which comes third with an $R^2$ value of 0.597. The other three ensemble methods, ensemble stack with XGB, ensemble blend with LGBM, and ensemble stack with LGBM, show performance at the isolated model level (0.512, 0.5, and 0.451). It can be inferred from the results that simpler regressor models as meta-predictors overshadow tree based regressors due to the fact that the most of the work is already done by the level-0 learners; the level-1 regressor is basically just a mediator and it makes sense to choose a rather simple algorithm for this purpose \cite{bb35}. Simple linear models at the leaves suppose to work well, and the results are likely to prove once again. \section{Conclusion and discussion} Assorted regressors are ensembled in the proposed study to improve click prediction performance. The feature set is divided into train and test groups depending on the logging date in the first phase. The data collection is then subjected to an XGBoost-based dimension reduction, which significantly reduces the dimension of features. To discover the most ideal hyper-parameters, Bayesian Hyper-parameter optimization is developed for the XGBoost, LightGBM, and SGD models. XGBoost, LightGBM, SGD, Lasso, Lasso Lars, Ridge, Bayesian Ridge, Huber, Elastic Net, and Passive Aggressive regressors are all tested separately and then fused to create ensemble models. The authors suggest four different ensemble approaches. The first ensemble model takes an average of anticipated results as well as a weighted average. A stack ensemble model, for example, assembles all the results of individual forecasts as an intermediate layer that feeds into another individual layer. The third model is a blend ensemble model, which stacks all of the individual prediction outputs and blends them with the original feature set once more. With the outcomes of multiple model outputs, this framework offers an artificial feature generation to boost the feature dimension. The same test set is used to test both individual and ensemble models, and the results of 46 model combinations demonstrate that stack ensemble models produce the best $R^2$ score of all. The greatest $R^2$ score is 0.639 for the stack ensemble model combined with linear regression, whereas the best machine learning model had an $R^2$ score of 0.579. As a conclusion, the ensemble model improves prediction performance by about 10\%. Various types of artificial neural network (ANN) models will be added to ensemble models in the future, with the goal of improving stack and blend ensemble models. Yandex's CatBoost machine learning model \cite{bb36}, which handles categorical information, can also be added to the list of regressors to examine. The concept of meta-learners is designed to provide the final outcome, yet there are possibilities to convert them into intermediate learners via inducing additional hyperparameter optimization mechanisms or additional meta-feature elimination due to forming the additive judgement on stacked predictions on an originally reduced feature dimension \cite{bb37}. Articulating meta-learners as mediators would be an inception based regularizer for intercommunication between multiple meta-models as a single pipeline, which might recalibrate incoming feature space with new model parameters to interact with. \section{Declaration of Competing Interest} The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.
{ "attr-fineweb-edu": 1.896484, "attr-cc_en_topic": 0, "domain": "arxiv" }
BkiUbmbxK6EuNA_gMTFV
\section{Introduction} Games and sports are emerging as a rich testbed to study the dynamics of competition in a controlled environment. Examples include the analysis of passing networks \cite{Buletal2018,mchale2018identifying} and entropy \cite{martinez2020spatial} in soccer games (see also \cite{rein2016big} for a discussion on data-driven tactical approaches), scoring dynamics \cite{MerCla2014, clauset2015safe,kiley2016game} and play-by-play modeling \cite{vravcar2016modeling,wang2019tac} in professional sports such as hockey, basketball, football, and table tennis, penalty kicks in soccer games \cite{Pal03}, and serves in tennis matches \cite{WalWoo01}. Here we explore the dynamics of \textit{dodgeball}, where the number of players playing different roles changes dynamically and ultimately determines the outcome of the game. While modeling dodgeball might seem like a very specific task, it is a relatively clean and well-defined system where the ability of mean-field techniques \cite{lasry2007mean,bensoussan2013mean} to describe human competition can be put to the test. In addition, it complements ongoing efforts to quantify and model dynamics in sports and games \cite{Buletal2018,mchale2018identifying, martinez2020spatial, MerCla2014,rein2016big, clauset2015safe,kiley2016game, vravcar2016modeling,wang2019tac,Pal03,WalWoo01}. In this paper we present and analyze a mathematical model of dodgeball based on both agent-based stochastic game simulations and an ordinary differential equation (ODE) based compartmental model. By analyzing the stability of fixed points of the ODE system, we find that different game dynamics can occur depending on the teams' strategies: one of the teams achieves a quick victory, either team can achieve a victory depending on initial conditions, or the game evolves into a stalemate. For the simplest strategy choice, these regimes can be interpreted in the context of a competitive Lotka-Volterra model. Numerical simulations of games based on stochastic behavior of individual players reveal that the stalemate regime corresponds to extremely long games with large fluctuations. These long games can be interpreted as a noise-driven escape from the basin of attraction of the stable stalemate fixed point, and are commonly observed in dodgeball games (see Fig.~\ref{fig:game1}). Using both the stochastic and ODE models, we develop a greedy strategy and demonstrate it using stochastic simulations. The structure for the paper is as follows. In Section \ref{sec:game} we describe the rules of the game we will analyze. In Section \ref{sec:dyn} we present and analyze a compartment-based model of dodgeball. In Section \ref{sec:stoch} we present stochastic numerical simulations of dodgeball games and compare these with the predictions of the compartmental model. We then discuss the notion of strategy in the context of this stochastic model. Finally, we present our conclusions in Sec.~\ref{sec:conclusions}. \section{Description of Dodgeball} \label{sec:game} \begin{figure}[b] \centering \includegraphics[width=0.85\columnwidth]{courts.pdf} \caption{(a) Setup of dodgeball court. Players in team $i$ make transitions between Court $i$ and Jail $i$, and Team $i$ loses when there are no players in court $i$.} \label{fig:game0} \end{figure} In this paper we consider the following variant played often in elementary schools in the US (sometimes called {\it prison dodgeball}). Two teams (Team 1 and Team 2) of $N$ players each initially occupy two zones adjacent to each other, which we will refer to as Court 1 and Court 2 (see Fig.~\ref{fig:game0}). Players in a Court can throw balls at players of the opposite team in the other Court. If a player in a Court is hit by such a ball, they move to their respective team's \textit{Jail}, an area behind the opposite team's Court. A player in a Court may also throw a ball to a player of their own Team in their Jail, and if the ball is caught, the catching player returns to their Team's Court. These processes are illustrated schematically in Fig.~\ref{fig:comic}. We denote the number of players on Team $i$ that are in Court $i$ and Jail $i$ by $X_i$ and $Y_i$, respectively. Team $i$ loses when $X_i=0$. For simplicity, we assume there are always available balls and neglect the possibility that a player catches a ball thrown at them by an enemy player. \begin{figure}[t] \begin{subfigure} \centering \includegraphics[width=\linewidth]{game1.png} \end{subfigure} \begin{subfigure} \centering \includegraphics[width=\linewidth]{game2.png} \end{subfigure} \caption{Evolution of two fifth-grade dodgeball games played in Eisenhower Elementary in Boulder, Colorado, USA. The number of players in Courts $1$ and $2$, $X_1$ and $X_2$, fluctuate for a long time without any team gaining a decisive advantage. The games were eventually stopped and a winner decided on the spot.} \label{fig:game1} \end{figure} In practice, games often last a long time without any of the Teams managing to send all the enemy players to Jail. Because of this, such games are stopped at a predetermined time and the winner is decided based on other factors (e.g., which Team has more players on their Court). An example of this is in Figure~\ref{fig:game1}, which shows the numbers of players in Courts $1$ and $2$, $X_1$ and $X_2$, during two fifth-grade dodgeball games in Eisenhower Elementary in Boulder, Colorado. The values of $X_1$ and $X_2$ seem to fluctuate without any team obtaining decisive advantage. The games continued after the time interval shown and were eventually stopped. Our subsequent model and analysis suggests that this stalemate behavior is the result of underlying dynamics that has a stable fixed point about which $X_1$ and $X_2$ fluctuate. \section{Rate Equation description of game dynamics} \label{sec:dyn} We begin our description of the game dynamics by adopting a continuum formulation where the number of players in Courts $1$ and $2$ are approximated by continuous variables. These variables evolve following rate equations obtained from the rates at which the processes described in the previous section and illustrated in Fig.~\ref{fig:comic} occur. Since the number of players in a dodgeball game is not too large (typically less than $50$), and the game is decided when the number of players in a court drops to zero, one might question the validity of a continuum description. However, as we will see in Sec \ref{sec:stoch}, stochastic simulations with few players show that the rate equations give useful insights about the dynamics of simulated games with a finite number off players. \begin{figure}[b] \centering \includegraphics[width=\linewidth]{processes.pdf} \caption{(Top) A player in a Court can be sent to Jail when hit by a ball from a player in the opposing Court. (Bottom) A player can be saved from Jail when catching a ball thrown by a player from their Court.} \label{fig:comic} \end{figure} To construct the rate equations, we define $\lambda$ as the mean throw rate of the players. Consequently, team $i$ throws balls at a rate of $\lambda X_i$. We also define $F_i(X_1,X_2)$ as the fraction of balls that team $i$ throws that are directed at enemy players, $p_e(X)$ as the probability that a ball thrown at $X$ opposing players hits one of them, and $p_j(Y)$ as the probability that a ball thrown at $Y$ players in jail is caught. Combining these processes and using $Y_i = N-X_i$ we get the Dodgeball Equations: \begin{align} \begin{split} \dot{X}_1 &= \lambda X_1[1-F_1(X_1,X_2)]p_j(N_1-X_1)\\ &- \lambda X_2F_2(X_1,X_2)p_e(X_1), \label{eq:genModel_a} \end{split}\\ \begin{split} \dot{X}_2 &= \lambda X_2[1-F_2(X_1,X_2)]p_j(N_2-X_2)\\ &-\lambda X_1F_1(X_1,X_2)p_e(X_2). \label{eq:genModel_b} \end{split} \end{align} Note that, given the initial conditions $X_i(0)=N$, $X_i(t) \in [0,N]$ for all $t\ge0$. For simplicity, we assume the functions $p_j$ and $p_e$ to be linear, $p_j(Y)=k_j Y$ and $p_e(X)=k_e X$. Defining the normalized number of players $x_i = X_i/N\in [0,1]$ and the dimensionless time $\tau = \lambda N k_j t$, we get the simplified Dodgeball Equations: \begin{align} \frac{dx_1}{d\tau} &= x_1(1-x_1) [1-{f_1}(x_1,x_2)] - {c} x_1 x_2 {f_2}(x_1,x_2), \label{eq:simplified_a}\\ \frac{dx_2}{d\tau} &= x_2(1-x_2) [1-{f_2} (x_1,x_2)]-{c} x_1 x_2 {f_1}(x_1,x_2), \label{eq:simplified_b} \end{align} where ${f_i}(x_1,x_2) = F_i(N x_1,N x_2)$ and ${c} = k_e/k_j>0$ is the effectiveness of throwing a ball at an enemy relative to throwing a ball at jail. \begin{table}[t] \setlength\tabcolsep{0pt} \smallskip \begin{tabular*}{\columnwidth}{@{\extracolsep{\fill}}|c|c|} \hline Symbol & Meaning\\ \hline $a_i$ & Probability that a player in Team i tries to hit an\\ &opponent instead of saving a teammate from jail\\ \hline $x_i$ & Fraction of players in Team i in Court i\\ \hline $c$ & Probability of hitting/probability of saving\\ \hline \end{tabular*} \caption{Notation used in the dodgeball model Equations~\eqref{eq:fixed_strat_a}-\eqref{eq:fixed_strat_b}.} \label{tab:freq} \end{table} \subsection{Example: fixed strategy} As an illustrative example we will focus on the case when the strategy for both teams is fixed over the course of the game, ${f_i}(x_1,x_2)=a_i\in(0,1)$. We will consider state-dependent choices for $f_i$ (i.e., strategies) in Sec.~\ref{sec:stoch}. Inserting $f_i(x_1,x_2) = a_i$ into Equations (\ref{eq:simplified_a})-(\ref{eq:simplified_b}) gives \begin{align} \frac{dx_1}{d\tau} &= x_1(1-x_1) (1-a_1) - {c} x_1 x_2 a_2 \label{eq:fixed_strat_a},\\ \frac{dx_2}{d\tau} &= x_2(1-x_2) (1-a_2)-{c} x_1 x_2 a_1 \label{eq:fixed_strat_b}, \end{align} which is a 2-species competitive Lotka-Volterra system \cite{gotelli2001primer}. In this case, we can use known results about this system to understand the possible game scenarios. Specifically, at $\tau = 0$ the system starts at $(x_1,x_2) = (1,1)$. For $\tau > 0$, the solution converges towards one of the stable fixed points of (\ref{eq:fixed_strat_a})-(\ref{eq:fixed_strat_b}) in the invariant square $[0,1]\times[0,1]$, which are $(0,0)$, $(0,1)$, $(1,0)$, and the solutions $(x_1^*,x_2^*)$ of the linear system \begin{align} 0 &= (1-x_1) (1-a_1) - {c} x_2 a_2 \label{eq:fix_line_a},\\ 0 &= (1-x_2) (1-a_2) - {c} x_1 a_1\label{eq:fix_line_b}. \end{align} If $a_1 a_2 c^2 \neq (1-a_1)(1-a_2)$ there is a unique solution to these equations, the fixed point \begin{align} x_1^* &= \frac{(1-a_2)[a_2 {c}-(1-a_1)]} {a_1 a_2 {c}^2-(1-a_1)(1-a_2)},\\ x_2^* &= \frac{(1-a_1)[a_1 {c} - (1-a_2)]}{a_1 a_2 {c}^2-(1-a_1)(1-a_2)}. \label{xstar} \end{align} The degenerate case where $a_1 a_2 c^2 = (1-a_1)(1-a_2)$ gives a continuum of fixed points described by \begin{equation} x_1^*+x_2^* = 1, \label{eq:fixed_point_line2} \end{equation} when $a_1 = (1-a_2)/c$ and $a_2 = (1-a_1)/c$, and no solution otherwise. \begin{figure}[t] \centering \includegraphics[width=\linewidth]{panelsfig.pdf} \caption{Stream plots of Equations (\ref{eq:fixed_strat_a}-\ref{eq:fixed_strat_b}) with ${c} = 0.5$ and various values of $a_1$ and $a_2$. (Top left) {\it Stalemate}: for $a_1 = 1/4$, $a_2 = 3/4$, both $(0,1)$ and $(1,0)$ are unstable and $(x_1^*,x_2^*)$ is stable. (Top right) {\it Team 1 wins}: for for $a_1 = 9/16$, $a_2 = 3/4$, $(1,0)$ is a stable fixed point while $(0,1)$ is unstable, giving Team $1$ the advantage; note that in this case $(x_1^*,x_2^*)\notin [0,1]^2$. (Bottom left) {\it Competitive}: for $a_1 = 7/8$, $a_2 = 3/4$, both $(0,1)$ and $(1,0)$ are stable fixed points, and the winner is determined by the initial conditions. (Bottom right) {\it Degenerate}: For the special case $a_1=a_2=(1+c)^{-1}$, every point on the line $x_1+x_2=1$ is a fixed point.} \label{fig:streams} \end{figure} The fixed point $(0,0)$ corresponds to both teams running out of players, the fixed points $(1,0)$ and $(0,1)$ correspond to Team $1$ and Team $2$ winning, respectively, and the fixed point $(x_1^*,x_2^*)$, when it is stable and in $(0,1)^2$, corresponds to a stalemate situation where the number of players in each court remains constant in time. By analyzing the linear stability of the fixed points (see, e.g., \cite{gotelli2001primer}), one finds that the game dynamics can be classified in the following cases: \begin{itemize} \item {\it Stalemate.} This occurs when $(0,1)$, $(1,0)$ are both unstable and $(x_1^*,x_2^*)$ is in $[0,1]^2$ and is stable, which occurs when $a_1 < (1-a_2)/c$ and $a_2 < (1-a_1)/c$. In this scenario, the solution settles in the fixed point $(x_1^*,x_2^*)$ and no Team wins in the deterministic version of the game. The flow corresponding to this case is shown in Fig.~\ref{fig:streams} (top left). This scenario is analogous to the ``Stable coexistence'' of species in the Lotka-Volterra model. \item {\it Competitive.} This occurs when $(0,1)$, $(1,0)$ are stable and the fixed point $(x_1^*,x_2^*)$ is in $[0,1]^2$ and is unstable, which occurs when $a_1 > (1-a_2)/c$ and $a_2 > (1-a_1)/c$. The stable manifold of $(x_1^*,x_2^*)$ acts as a separatrix for the basins of attraction of the fixed points that correspond to victories for Team 1 and Team 2. See Fig.~\ref{fig:streams} (bottom left). This scenario is analogous to the ``Unstable coexistence'' of species in the Lotka-Volterra model. \item {\it Team 1 wins.} This occurs when $(0,1)$ is unstable and $(1,0)$ is stable, which occurs when $a_1 > (1-a_2)/c$ and $a_2 < (1-a_1)/c$. In this scenario, the solution converges towards a victory by Team 1. See Fig.~\ref{fig:streams} (top right). This scenario is analogous to the ``Competitive exclusion'' of species in the Lotka-Volterra model, in which one species is driven to extinction by the other. \item {\it Team 2 wins.} This occurs when $(0,1)$ is stable and $(1,0)$ is unstable, and is analogous to the {\it Team 1 wins} case. In this scenario, the solution converges towards a victory by Team 2. \item {\it Degenerate.} This occurs when there is a continuum of fixed points $x_1^*+x_2^* = 1$. In this scenario, the solution converges towards the line $x_1 + x_2 = 1$, and no winner is produced in the deterministic version of the game. See Fig.~\ref{fig:streams} (bottom right). \end{itemize} \begin{figure}[b] \includegraphics[width=\linewidth]{cases2.pdf} \caption{Deterministic game outcomes based on different strategies $(a_1,a_2)$ for (a) $c<1$, (b) $c>1$.} \label{fig:fix_point guide} \end{figure} Figure \ref{fig:streams} illustrates these different game dynamics by showing the flow induced by Eqs.~(\ref{eq:fixed_strat_a})-(\ref{eq:fixed_strat_b}) in the region $0 \leq x_1 \leq 1$, $0\leq x_2 \leq 1$ for various parameter choices. Stable fixed points are shown as red circles, and unstable fixed points as yellow circles. In Figure~\ref{fig:fix_point guide} we illustrate how the game outcome depends on the strategies used by both teams. The cases $c>1$ and $c<1$ are illustrated in Figs.~\ref{fig:fix_point guide} (a) and (b), respectively. The strategy phase space $(a_1,a_2)$ is divided into four regions separated by the lines $a_1 = (1-a_2)/c$ and $a_2 = (1-a_1)/c$. When both teams preferentially save players of their own team from jail, instead of trying to hit players from the other team (i.e., both $a_1$ and $a_2$ are small), the game results in a stalemate (we reiterate that when stochasticity is included, this scenario corresponds to long games). When both teams preferentially hit players from the other team (i.e., bot $a_1$ and $a_2$ are close to $1$) a winner emerges quickly. When teams have opposite strategies, one of the teams can quickly win, depending on the value of $c$. While the rate equation description provides interesting insights, it relies on the assumption of an infinite number of players. Because of this, some of its predictions are not reasonable for games with a finite number of players. For example, it predicts that the outcome of games is completely determined by parameters and initial conditions. In reality, games are determined by the aggregate behavior of a finite number of individual players, and chance can play an important role. In the next section we will model dodgeball games by considering the stochastic behavior of individual players, and we will find that the insights provided by the rate equations are useful to understand the stochastic dodgeball games. \section{Stochastic Dodgeball Simulations} \label{sec:stoch} In this Section we present numerical simulations of dodgeball games using a stochastic agent-based model that corresponds to the simplified model used in Section~\ref{sec:dyn}. \begin{figure}[b] \centering \includegraphics[width=0.7\linewidth]{compartments.pdf} \caption{Stochastic dodgeball game. Players make transitions between the indicated compartments with the rates shown next to the arrows. The game ends when either $X_1 = 0$ or $X_2 = 0$.} \label{fig:cartoon} \end{figure} In the stochastic version of the game, each team starts with $N$ players in their respective court, $X_1(0) = X_2(0) = N$, and no players in Jail, $Y_1(0) = Y_2(0) = 0$. Players in Court $1$ make stochastic transitions to Jail $1$ at rate $\lambda X_2(t) F_2(X_1,X_2) k_e X_1$, and players in Jail $1$ make transitions to Court $1$ at rate $\lambda X_1 [1-F_1(X_1,X_2)] k_j (N-X_1)$, where, as in Sec.~\ref{sec:dyn}, $F_i(X_1,X_2)$ is the probability that a player in Court $i$ will throw a ball towards an enemy player in the opposite Court instead of trying to save a teammate from Jail, $k_e$ is the probability of hitting a single enemy player, and $k_j$ is the probability that a player in Jail catches a ball thrown at them. The rates of transition for players in Team 2 are obtained by permuting the indices $1$ and $2$. By using the dimensionless time $\tau = \lambda k_j t$, the rates of transition per dimensionless time are $c X_1 X_2 F_2(X_1,X_2)$ and $X_1 (N-X_1) [1-F_1(X_1,X_2)]$ for players to transition from Court 1 to Jail 1 and from Jail 1 to Court 1, respectively, where $c = k_e/k_j$. The compartmental model corresponding to this process is shown schematically in Fig.~\ref{fig:cartoon}. The code used for simulating the agent-based dodgeball model and finding the probability that a team wins can be found on the GitHub repository (\url{https://github.com/Dodgeball-code/Dodgeball}). \begin{figure}[t] \centering \includegraphics[width=\linewidth]{stoch_stream.png} \caption{Simulations of games with the same constants as Fig.~\ref{fig:streams}. Trajectories $(X_1,X_2)$ have stochastic fluctuations on top of the deterministic flow of Fig~\ref{fig:streams}. The ``Stalemate'' regime (top left) results in long, back-and-forth games.} \label{fig:stoch_stream} \end{figure} \subsection{Stochastic games} In Figure~\ref{fig:stoch_stream} we show the evolution of four dodgeball games simulated as described above using the same parameters as in Fig.~\ref{fig:streams}. The plots show the trajectories of $(X_1,X_2)$ starting from initial conditions $(50,50)$. Note that, although the trajectories have significant fluctuations, they follow approximately the flow shown in Fig.~\ref{fig:streams}. In particular, for the parameters resulting in the stalemate scenario [i.e., a stable fixed point $(x_1^*,x_2^*) \in (0,1)\times(0,1)$] the number of players in Courts $1$ and $2$ fluctuates around $(N x_1^*,N x_2^*)$ (indicated with an arrow). In practice, these parameters result in extremely long games that continue until a random fluctuation is large enough to decrease $X_1$ or $X_2$ to zero. To further illustrate this, Fig.~\ref{fig:stalemate} shows $X_1(t)$ (blue) and $X_2(t)$ (orange) as a function of $t$ for the parameters in Fig.~\ref{fig:streams}(a). The evolution of this game resembles that of the games seen in Fig.~\ref{fig:game1}, which suggests that those games were in the Stalemate regime. In the degenerate case, Fig.~\ref{fig:streams}(d), the game trajectory has large fluctuations around the line $X_1+X_2 = N$, which corresponds to the line of fixed points $x_1^* + x_2^* = 1$ of the deterministic system. We interpret this behavior as the trajectory diffusing under the effect of the fluctuations along the marginally stable line $X_1+X_2 = N$. Note that in the particular trajectory shown, Team 1 wins even after at some point in time they had only one player in Court 1. In Fig.~\ref{fig:stoch_stream}(c) the game eventually results in a victory by Team 1, even though the deterministic model predicts a victory by Team 2 [see~ Fig.~\ref{fig:streams}(c)], because stochastic fluctuations of the trajectory $(X_1,X_2)$ allow it to cross over to the basin of attraction of $(1,0)$. \begin{figure}[t] \centering \includegraphics[width=\linewidth]{stalemate.png} \caption{Fraction of players in Courts 1 and 2 (solid lines) versus dimensionless time $\tau$ for a stochastic game simulation with the same parameters as Fig~\ref{fig:streams} (top left), i.e., $c=1/2$, $a_1 = 1/4$, $a_2=3/4$, and $N=50$. In the ``Stalemate'' regime, the fraction of players fluctuates stochastically about the fixed point values $x_1^* = x_2^*$ (dashed line).} \label{fig:stalemate} \end{figure} As we see from these examples, the outcome of stochastic dodgeball games is determined both by the underlying deterministic flow and by the stochastic fluctuations of the $(X_1,X_2)$ trajectories. To account for this, we focus on how the probability $P$ of winning a game depends on the parameters. This probability can be calculated directly from the outcomes of a large number of simulated games (the algorithm for simulating games is presented in Appendix \ref{sec:appendixa}, but it is much more efficiently calculated by using the properties of the underlying Markov process, as explained in Appendix \ref{sec:appendixb}. To illustrate how the probability of winning can be related to the deterministic results, we fix $c = 2/3$ and $a_2 = 3/4$, and calculate $P_1$ as a function of $a_1$. \begin{figure}[t] \centering \includegraphics[width=\linewidth]{fixed_simulation.png} \caption{(a) Probability that Team 1 wins a game $P_1$ as a function of $a_1$ with $c=2/3$ and $a_2=3/4$ for $N = 1, 5, 10, 20,$ and $50$ (blue, orange, yellow, purple, and green solid lines, respectively). The dashed red lines mark bifurcations in the deterministic dynamics (see text), and the dashed horizontal line indicates $P_1 = 1/2$. The leftmost region corresponds to the ``Stalemate'' regime leading to long games. The middle region represents ``Team 1 Wins'', which can be noted by the large values of $P_1$ for large values of $N$. The right region is the ``Competitive'' region in the deterministic model noted by mixed values of $P_1$ and quicker games. (b) Average duration of games (in dimensionless time $\tau$) with the same parameters as in the bottom panel. The duration of games in the ``Stalemate'' regime increases with $N$. The shaded area around the green curve represents $3$ standard deviations.} \label{fig:fix_sim} \end{figure} Fig~\ref{fig:fix_sim}(a) shows $P_1$ as a function of $a_1$ for $N = 1, 5, 10, 20,$ and $50$ (blue, orange, yellow, purple, and green solid lines, respectively). As $a_1$ increases from $0$ to $1$, different regimes of the deterministic model are traversed. For the parameters given let $a_s = (1-a_2)/c=3/8$ and $a_c = 1 - a_2 c=1/2$, which are shown as dashed red lines. For $0\leq a_1 < a_s$, the system is in the ``Stalemate'' case, for $a_s < a_1 < a_c$ the system is the ``Team 1 Wins'' case, and for $a_c < a_1 < 1$, it is in the ``Competitive'' case. Now we interpret how $P_1$ changes as $a_1$ is increased. For $a_1 < 1-a_2$, the fixed point $(x_1^*,x_2^*)$ is closer to $(0,1)$ than it is to $(1,0)$, and since victory is achieved by escaping the basin of attraction of the fixed point with random fluctuations, it is much more likely that this escape will occur to the nearest fixed point, in this case $(0,1)$. Therefore, $P_1\sim 0$ in this regime, and it is smaller for larger $N$ since fluctuations are smaller. For $1-a_2 < a_1 < a_s$, the game is still in the stalemate regime, but now $(x_1^*,x_2^*)$ is closer to $(1,0)$ and therefore $P_1 \sim 1$, and increases with $N$. For $a_s < a_1 < a_c$, the game is in the ``Team 1 Wins'' regime, and so $P_1$ approaches $1$ rapidly as $N$ increases. For $a_1 > a_c$, the game is in the ``Competitive'' regime, where the initial condition $(1,1)$ is in the basin of attraction of $(1,0)$ for $a_1 < a_2$ and in the basin of attraction of $(0,1)$ for $a_1> a_2$, which is reflected by the fact that $P_1 > 1/2$ for $a_1 < a_2$ and $P_1 < 1/2$ for $a_2 < a_1$. We note that for very small $N$ (e.g., $N = 1, 5$), the predictions of the deterministic theory break down. This can be understood in the limiting case $N=1$ (blue curve), where the probability of winning can be calculated explicitly as $P_1 = a_1/(a_1+a_2) = 4a_1/(4a_1 + 3)$. According to our interpretation, victory in the ``Stalemate'' regime is achieved by escaping the basin of attraction of the underlying stable fixed point $(x_1^*,x_2^*)$ via fluctuations induced by the finite number of players. Since these fluctuations become less important as the number of players increases, one would expect that the average time $\tau$ to achieve victory would (i) be largest in the ``Stalemate'' regime, and (ii) increase with $N$. Fig~\ref{fig:fix_sim}(b) shows the average game duration $\tau$ as a function of $a_1$, calculated from direct simulation of 5000 stochastic games when $N < 50$ and $100$ games when $N=50$. Consistent with the interpretation above, $\tau$ is much longer in the ``Stalemate'' regime and increases with $N$ [we have found that $\tau$ scales exponentially with $N$ (not shown), as one would expect for an escape problem driven by finite size fluctuations]. Furthermore, it is maximum approximately when $(x_1^*,x_2^*)$ is equidistant to $(0,1)$ and $(1,0)$, i.e., when $a_1=1-a_2$ [see Fig~\ref{fig:fix_sim}(a)]. To get a broader picture of how the choice of fixed strategies $a_1$, $a_2$ affects the probability of winning, we show in Fig.~\ref{fig:a1a2} the probability that Team 1 wins, $P_1$, as a function of $a_1$ and $a_2$, obtained numerically as described in Appendix~\ref{sec:appendixb} for $N=20$ and the same parameters of Fig.~\ref{fig:fix_point guide}(a). The curve for $N=20$ in Fig.~\ref{fig:fix_sim}(a) corresponds to the values shown in the dashed line. \begin{figure} \centering \includegraphics[width=\linewidth]{3DWinHeatmap.png} \caption{Probability that Team 1 wins $P_1$ as a function of $a_1$ and $a_2$. The dashed line corresponds to the $N = 20$ curve in Fig.~\ref{fig:fix_sim}(a).} \label{fig:a1a2} \end{figure} There appears to be a saddle point approximately at $(a_1,a_2) \approx (1/2,1/2)$ corresponding to a Nash equilibrium, i.e., a set of strategies such that neither Team would benefit from a change of strategy if the other Team maintains their strategy. The issue of the appropriate definition and existence of Nash equilibria in finite-player stochastic games and their behavior as the number of players tends to infinity has been studied in the emerging area of {\it mean-field games} \cite{lasry2007mean,bensoussan2013mean}. We leave a more detailed study of Nash equilibria in dodgeball for future study. \subsection{Heuristic Strategy} In the example treated in the previous Sections, the probability that a player in Team $i$ decides to throw a ball to an enemy player instead of rescuing a teammate from jail, $F_i(X_1,X_2)$ is fixed throughout the game at the value $a_i$. In reality, players may adjust this probability in order to optimize the probability of winning. In this Section we will develop a heuristic greedy strategy with the goal of trying to optimize victory. For this purpose, it is useful to define the quantities $H_i$ as \begin{equation} \begin{array}{cc} H_1 = \frac{X_1}{X_1+X_2}, & H_2= \frac{X_2}{X_1+X_2}. \end{array} \label{eqs:H} \end{equation} These quantities have the advantage that they are normalized between $0$ and $1$, with $H_i=0$ ($H_i=1$) corresponding to a loss (victory) by Team $i$. In addition, $H_i$ corresponds to the probability that team $i$ will throw a ball next, and therefore it is a good indicator of how much control team $i$ has. Therefore, it is reasonable for Team $i$ to apply a strategy to increase $H_i$. To develop such a strategy, we define $H_i$ and $H_i^{+}$ as the values of $H_i$ before and after a ball is thrown. Similarly, we define $X_i$ and $X_i^+$ as the values of $X_i$ before and after a ball is thrown. For definiteness, we will present the strategy for Team $1$, and the strategy for Team $2$ will be similar. The basis of the strategy is to choose the value of $F_1(X_1,X_2)$ that maximizes the expected value of $H_1^{+}$, $\mathbb{E}[H_1^{+}]$. Since $F_1$ is the probability that the ball is thrown at enemy players, $p_e$ the probability that such a ball actually hits an enemy player, $1-F_1$ the probability that the ball is thrown at a teammate in jail, and $p_j$ the probability that such a ball is successful in rescuing a teammate, the expected value of $H_1^+$ is given by \begin{multline} \mathbb{E}[H_1^{+}] = F_1\bigg[\frac{X_1}{X_1+X_2-1}p_e +\frac{X_1}{X_1+X_2}(1-p_e)\bigg]\\ +(1-F_1)\bigg[\frac{X_1+1}{X_1+X_2+1}p_j + \frac{X_1}{X_1+X_2}(1-p_j)\bigg], \end{multline} Which can be rewritten as \begin{equation} \mathbb{E}[H_1^{+}] = A + \frac{B}{X_1+X_2} F_1, \label{eq:linEx} \end{equation} where \begin{align} B = \bigg[\frac{X_1^t}{X_1^t+X_2^t-1}p_e - \frac{X_2^t}{X_1^t+X_2^t+1}p_j\bigg] \end{align} and $A$ is independent of $F_1$. Since Eq.~(\ref{eq:linEx}) is linear in $F_1$, it is maximized by choosing $F_1 = 1$ when $B > 0$ and $F_1 = 0$ when $B < 0$. Therefore, the choice of $F_1$ that maximizes the expected value of $H_1^{+}$, $F_1^*$, is \begin{equation} F_1^* = \begin{cases} 1, & \frac{X_1}{X_1+X_2-1}p_e(X_2) \ge \frac{X_2}{X_1+X_2+1}p_j(N-X_1),\\ 0, & \text{otherwise}. \end{cases} \label{eq:disSol} \end{equation} When $X_1$, $X_2 \gg 1$, the strategy simplifies to \begin{equation} F_1^* \approx \begin{cases} 1, & X_1 p_e(X_2) \ge X_2 p_j(N-X_1),\\ 0, & \text{otherwise}. \end{cases} \label{eq:disSolApprox} \end{equation} We note that this can also be derived by maximizing $dH_1/dt$ by using Eqs.~(\ref{eq:simplified_a})-(\ref{eq:simplified_b}). Furthermore, for the case considered in Sections~\ref{sec:dyn} and \ref{sec:stoch}, where $p_e(X_i) = k_e X_i$ and $p_j(Y_i) = k_j Y_i$, the strategy reduces to \begin{equation} F_1^* = \begin{cases} 1, & k_e X_1 \ge k_j(N-X_1),\\ 0, & \text{otherwise}. \end{cases} \label{eq:disSolEx} \end{equation} For example, when $k_e = k_j$ (i.e., the probability of success in hitting an enemy player is the same as the probability of succeeding in rescuing a teammate from jail) the strategy for Team 1 consists in trying always to rescue teammates from Jail 1 when the majority of Team 1 player's are in Jail 1, and in trying to hit players from Team 2 when the majority of Team 1's players are in Court 1. Interestingly, in the limit $X_1$, $X_2 \gg 1$ the strategy for Team 1 is independent of $X_2$. \begin{figure}[t] \centering \includegraphics[width=\linewidth]{Heuristic_results.png} \caption{Probability of Team 1 winning with the heuristic strategy $F_1$ against a fixed strategy $a_2$. Number of players in each game is set to $N=20$. } \label{fig:strategy_plot} \end{figure} To validate the effectiveness of this strategy, we simulate dodgeball games in which Team 1 adopts the strategy $F_1(X_1,X_2) = F_1^*$ given by Eq.~(\ref{eq:disSol}) and Team 2 uses the fixed strategy $F_2(X_1,X_2) = a_2$. In Fig.~\ref{fig:strategy_plot} we plot the probability that Team 1 wins, $P_1$, as a function of $a_2$ for $c = 2/3, 1, 3/2,$ and $\infty$ (blue, orange, yellow, and purple solid lines, respectively). As the Figure shows, using the Strategy $F_1^*$ consistently results in a probability of winning higher than $1/2$. In general, the strategy $F_1^*$ does best when $c$ is small and $N$ is large. Note the probability of Team $1$ winning is $1/2$ only when $c=\infty$, i.e., the chance of saving a player in jail is $0$. In this case the strategy $a_2 = 1$ is clearly optimal. \section{Conclusions}\label{sec:conclusions} In this paper we presented a mathematical model of dodgeball, which we analyzed via an ODE-based compartmental model and numerical simulations of a stochastic agent-based model. These two complementary methods of analysis revealed a rich dynamical landscape. Depending on Teams' strategies, the dynamics and outcome of the game is determined by a combination of the stability of the fixed points of the underlying dynamical system and the stochastic fluctuations caused by the random behavior of individual players. Additionally, we derived a greedy strategy in the context of the stochastic model of dodgeball. While our strategy was shown to be effective against fixed strategies (i.e., $F_2 = a_2$), it isn't necessarily optimal. This suggests the future work of finding an optimal strategy as well as studying the topic of Nash equilibriums in the context of dodgeball. More data is needed to verify some of the predictions of the dodgeball model. While the time series from real games shown in Fig.~\ref{fig:game1} appear to be consistent with the Stalemate regime, a quantitative comparison would need estimation of the quantities $k_e$, $k_j$, $a_1$, and $a_2$. In principle, these probabilities could be estimated from recorded dodgeball games. Nevertheless, the continuous model of dodgeball is able to offer reasonable insights into the behavior of stochastic agent-based games with a realistic number of players. Our model and analysis relied on various assumptions and simplifications, and relaxing some of these assumptions could be a useful topic for future work as well. One significant assumption used is that a ball thrown at an enemy player will not be caught. However, it is possible for balls to be caught, and this causes the thrower to be sent to jail. The dodgeball model could be extended to include this situation. Who a player decides to target currently only depends on the number of remaining enemies in play and the number of people in jail, but this could be generalized to account for heterogeneous targeting probabilities. The last assumption that will be discussed here is that this model assumes uniform behavior of the players. Individual ability could be modeled by including an individual's ability to catch balls, hit an enemy target, and hit shots on jail. Finally, we assumed that players behave independently (which is a reasonable approximation in Elementary School games). Coordinated strategies such as those used in professional games are not considered here. \acknowledgments We thank James Meiss, Nicholas Landry, Daniel Larremore, and Max Ruth for their useful comments. We also thank Eisenhower Elementary for allowing us to use the data.
{ "attr-fineweb-edu": 1.568359, "attr-cc_en_topic": 0, "domain": "arxiv" }
BkiUd3zxaJiQnhPtoTni
\section{Introduction} Understanding the essence of multiple reviews or opinions is a frequent problem today. High quality opinion summaries can improve search, product comparison, recommender systems, and even dialog assistants. In this domain, abstractive summarization is particularly promising for fluently comparing and contrasting opinions from source reviews. However, while language models trained on huge numbers of source-summary pairs have driven summarization performance in some domains, is it harder to find such pairs on the web for opinions, and it is difficult to present tens or hundreds of reviews to human annotators and train them to write an informative summary. This paper presents a new self-training approach that automatically identifies and leverages \emph{common opinions} across reviews, for example as in Table \ref{tab:introexample}. \begin{table}[t] \begin{tabular}{p{0.2cm}|p{6.6cm}} {\footnotesize R1} & {\footnotesize...very large and clean with a nice size kitchen. {\color{blue}The hotel is located right across the street from balboa park} and within walking distance of a rite aid drugstore..} \\ \hline {\footnotesize R2} & {\footnotesize...If you insist on staying here, reserve a refurbished room and get that promise in writing! The location was great for tourists, {\color{blue} right across from balboa park}. You could walk to the zoo (about 1/4 mi)...}\\\hline {\footnotesize R3} & {\footnotesize...I decided to stay at the park manor suites hotel since it seemed to be close to san diego zoo. {\color{blue} The hotel is conveniently located in front of balboa park}, walking distance to san diego zoo,...}\\\hline {\footnotesize R4} & {\footnotesize...The staff are both pleasant and professional. {\color{blue} Hotel is across from balboa park on sixth ave}. This is the park west area, and features a diverse array of restaurants...}\\\hline {\footnotesize R5} & {\footnotesize...As other reviewers have said, it's very easy to be here without a car - {\color{blue} balboa park is just across the road} and the airport is a short taxi ride away.} \\ \end{tabular} \caption{Example showing a consensus or common opinion between 5 reviews for a hotel on TripAdvisor.com, taken from the SPACE corpus \cite{qt}} \label{tab:introexample} \end{table} This lack of data for opinion summarization has motivated many abstractive summarization methods based on auto-encoders \cite{meansum,brazinskas-copycat,treestruct}, and these do not use any supervision from gold human summaries. A few recent approaches propose self-training of encoder-decoder models on synthetic summary examples. These examples are created by randomly sampling one of the input reviews and treating it as a pseudo-summary and treating other topically-related reviews as the source texts.\cite{amplayo-denoising,fewsum,amplayo-etal-2021-aspect,elsahar-etal-2021-self,brazinskas-etal-2022-efficient} While such pseudo or silver-summaries are able to provide pretraining signals, their objective is one of missing review prediction rather than aggregation of multiple texts. Their pseudo-summaries are also entire reviews which might contain other non-summary worthy content. In this paper, we present a new self-training method which leverages textual entailment signals to produce silver summaries of \emph{high quality and combining information across multiple reviews}. Intuitively, the method aims to identify the consensus or most agreed upon opinions in the source set. In the example in Table 1, if many reviews mention that the ``location is right across balboa park'', we would consider it a highly agreed upon opinion, and as a summary-worthy one. We create silver summaries using a set of such opinions with the highest agreement. We generate such silver summaries on a large scale and show how to train encoder-decoder transformer models using this data. We evaluate our model in both zero-shot or unsupervised setting as well as few-shot learning. Our results show that our method produces huge gains in both cases, outperforming other approaches and achieving new state-of-the-art performance. \section{Related work} Opinion summarization is a widely studied problem, where the role of sentiment, and product aspects (such as `lens' and `focus' for a camera) are well documented. This paper focuses on abstractive summarization for general purpose summaries (not aspect-based) and we provide an overview of the approaches closest to our work. \vspace{2mm} \noindent{\bf Unsupervised neural networks.} As in other areas of text generation, modern opinion summarization methods are also predominantly neural networks. Since large scale data for supervision of encoder-decoder models is largely absent in this domain, many prior methods focused on unsupervised approaches. Common techniques here include auto-encoders and associated generative approaches such as VAEs. In \citet{meansum}, summaries are generated from the mean of the embeddings of input reviews. A similarity loss encourages the generated summaries to be close to the review embeddings, while an autoencoder is used to improve the review embeddings. The intuition that summaries should capture the consensus opinions continues in \citet{brazinskas-copycat}, this time employing a VAE that can steer towards common information. \citet{treestruct} also use VAEs but extend them to produce hierarchical summaries where some sentences convey general impressions, while other provide specific details about user experience. Our work also presents an unsupervised method, but based on encoder-decoder models also taking advantage of self-training which we discuss next. \vspace{2mm} \noindent{\bf Self-training methods.} Some very recent solutions have sought to take advantage of recent large pretrained encoder-decoder models via self-training \cite{amplayo-denoising,fewsum,amplayo-etal-2021-aspect,elsahar-etal-2021-self,brazinskas-etal-2022-efficient}. The approach here is to create large number of pairs of source review sets, paired with a pseudo or silver summary as an approximate target. In all these methods, one of the reviews from the source set is taken as the pseudo summary, and other reviews or topically related reviews to the target is taken as the set of source reviews. This dataset is then used for further pretraining of encoder-decoder transformer models to incorporate signals and language specific to review summarization. These models are usually better than unsupervised models based on generative approaches. While allowing a favorable paradigm shift, and better performance, there are a few limitations of this type of self-training. As pointed out by \citet{fewsum}, reviews are considerably diverse from one another. So an objective that generates a review from other reviews will need to also predict content not present on the source side, a major difference from actual summaries of reviews. Such pseudo-summaries will also contain a lot of first person language which again are less desirable in a summary to users. In this work, we present a novel method of pretraining. We also create silver-summaries on a large scale. However, our summaries actually contain propositions from multiple input summaries and in particular those which are reflective of the consensus among the review authors. These summaries are more powerful signals and move the training task away from review generation. \vspace{2mm} \noindent {\bf Few-shot learning.} With increased use of encoder-decoder models, methods have also been proposed to efficiently augment the training with a small number of human-generated summaries (50 to 100). \citet{pass} train transformer models on a small number of examples and during inference, generate multiple summaries which are later ranked according to coherence to arrive at a final one. Other approaches focus on an additional plug-in network that can predict desired properties of summaries based on a few labelled examples \cite{fewsum} that can augment training signals. \citet{brazinskas-etal-2022-efficient} introduce the use of a few additional parameters in the form of adaptors and only these are finetuned instead of the full network, making the training efficient and robust for few-shot learning. We also demonstrate our self-trained model in few-shot settings. \vspace{2mm} \noindent{\bf Consensus as a goal for summarization.} When the summarization problem contains multiple input texts, intuitively the frequently held or common information across them is one important signal for summary worthy content. Multi-document news summarization has exploited frequency from early times \cite{sumbasic,radev2004} to most recent ones \cite{ernst-etal-2022-proposition}. Recent work has also used consensus as a goal for summarizing scientific publications around health topics \cite{nutribullets}, and identify agreement and discrepancies in Wikipedia document clusters \cite{sentnli}. Intuitively, review summarization also expects to capture the voice of the majority of users as one of its aims. For example, if a majority of users complain about the battery of an item, we would expect a summary to mention that. Instructions to annotators in multiple annotation efforts for opinion summarization explicitly ask annotators to capture what is common and popular \cite{fewsum,qt}. The idea of consensus is also present in the objective of many recent models for opinion summarization \cite{meansum,brazinskas-copycat,qt}. In this work, our self-training approach explicitly tries to capture statements which are agreed upon by a majority of reviews. \section{Textual entailment to identify consensus among review users} \label{sec:silverdata} We propose a novel approach to create silver source-summary pairs for abstractive opinion summarization. A central idea here is the use of textual entailment to find statements reflecting user consensus. We first present our definition of the idea and describe the steps involved in silver data creation. \subsection{Defining review consensus} We define consensus as the number of reviews that support a particular claim. For example, 60 (out of 100) reviews might claim that the seafood dishes are great at a restaurant. Likewise 30 reviews might say that the staff are friendly and polite. Our aim is to obtain those sentences with most user consensus automatically, and use these to create our silver-standard data. But note that the same claim may be expressed in different ways or granularity, and so their frequency in reviews cannot be easily computed. Eg. \emph{`This hotel is in the heart of Times Square'} and \emph{`Hotel's location is slap bang in the middle of Times Square.'} both express the same claim, and \emph{`The fish is tasty'} and \emph{`The salmon is delicious'}, both support the claim that \emph{`The seafood is great.'}. Our idea is to capture this variability using natural language entailment. At a high level, our approach identifies potential claims in the form of \emph{propositions} from a large collection of texts, uses textual entailment to find out how often the collection supports the proposition, and computes a score for the support. Now we explain how we obtain these statements and their scores automatically. \subsection{Extracting propositions} For texts, even when they are sentence-level units, it is hard to reason about them precisely. Many review sentences in addition tend to be rather long. For example, \emph{``I love eating in Likya, the Chefs are so passionate and professional about the food they cook and the staffs are well trained, they treat me very well like a customer.''} contain a bunch of different claims. It is difficult to find support for such complex sentences since the same information is unlikely to be present in other users' reviews. Instead, we split review sentences into \emph{propositions} and use these as our key units. We define a proposition as a `single claim or fact' about the item and extract these as snippets from the original review texts. In fact, recent work on supervised news summarization also uses the extraction and clustering of proposition units to find frequent subtopics, and then fusing the information in the biggest clusters into a summary \cite{dagan-proposition}. In this work, we use simple rules to split review sentences into propositions. We split sentences at conjunctions, period, and comma subject to a minimum clause length of four. Our algorithm processes sentences from left to right to find a delimiter. If the proposed span will create a clause less than the minimum length, we do not split and attach the span to the proposition on the left. Note that these propositions are a linear segmentation of the input sentence, and their concatenation yields the original sentence. Intuitively, this process primarily performs syntactic simplication, without changing the total content that is expressed. The resulting propositions for different sentences in our data is shown in Table \ref{tab:proposition_splitting}. Note that there are some propositions which end up ungrammatical, and our length constraints do not always separate out all the aspects (as in the third example in Table \ref{tab:proposition_splitting}). But overall this simple method works well for review sentences where syntactic embedding is less complex than in genres such as news, and we can scale to large collections efficiently. \begin{table*}[ht!] \centering \begin{tabular}{|p{6cm}|p{8cm}|} \hline {\footnotesize \bf Review sentence} & {\footnotesize \bf Extracted propositions}\\ \hline {\footnotesize There was loads of cupboard space and a fantastic easy to use safe.} & {\footnotesize There was loads of cupboard space and$_1$ a fantastic easy to use safe.$_2$} \\ \hline {\footnotesize Metro station (llcuna, line 4) is 5 minute walk away, beach is a 10 minute walk away.} & {\footnotesize Metro station (llcuna, line 4) is 5 minute walk away,$_1$ beach is a 10 minute walk away.$_2$}\\\hline {\footnotesize The room was very nice and clean, quiet location, staff were helpful, easy access to the centre of town by metro, bakeries and a supermarket nearby.} & {\footnotesize The room was very nice and clean, quiet location, staff were helpful,$_1$ – easy access to the centre of town by metro, bakeries and a supermarket nearby.$_2$} \\\hline \end{tabular} \caption{Example propositions split from source sentences. The propositions on the right are numbered according to their position in the sentence.} \label{tab:proposition_splitting} \end{table*} We extract propositions from all the reviews for an item. Suppose there are $N$ reviews for an item which result in $M$ propositions where $M \gg N$. \subsection{Scoring consensus} Our aim is to find the number of supporting reviews for each of the $M$ propositions. We compute this number using natural language entailment. Specifically, consider review $R_i$ and proposition $m_j$ belonging to the same item. Let us represent a textual entailment relation as $P \rightarrow H$, where $P$ is a premise and $H$ is a hypothesis. In our case, if $R_i \rightarrow m_j$, then we consider that $R_i$ \textit{supports} $m_j$. The final score for proposition $m_j$, $S(m_j) = \sum_{1 \le i \le N}E(R_i, m_j)$ where $E(R_i, m_j)=1$ if $R_i \rightarrow m_j$ else 0. We obtain $E(R_i, m_j)$ using the predictions of an entailment classifier which treats the $R_i$ as the premise and $m_j$ as the hypothesis. If the most likely label from the classifier is `entailment', then $E(R_i, m_j)=1$ and 0 if other labels had the highest probability. In this work, we use a cross attention model, BERT-large \cite{devlin-etal-2019-bert} to obtain these predictions. The input to the model concatenates the premise and hypothesis with a separator symbol and the CLS token's embedding is sent through a linear layer to predict three classes: entailment, contradiction and neutral. We trained this model on the MNLI corpus \cite{mnli} reaching a development accuracy of 84\%. Note that the training data for the entailment model does not contain any examples from the review domain. But we found that predictions are rather reasonable and even better when a higher threshold is applied on the probability of the entailment label. Note that this score computation for all propositions requires an entailment prediction between all pairs of $(R_i, m_j)$. Even though the computation is done only within each item, there are still a quadratic number of pairs per item. So we implement the full computation of silver summaries in a Apache Beam\footnote{\url{https://beam.apache.org/}} pipeline which allows to create parallel data-processing pipelines. Our typical pipelines do inference billions of times by the entailment models. In Table \ref{tab:entailmentset}, we show some of the entailment predictions from our models. We take a proposition and sample random reviews from the set of reviews which entail that proposition. Our model does not explicitly do any sentiment classification, we have picked a positive and negative proposition for demonstrating how precise and clear entailment based support prediction tends to be. \begin{table*}[ht!] \centering \begin{tabular}{|p{15cm}|} \hline {\footnotesize \bf Proposition: ``the property has a lot of character''} \\ {\footnotesize \bf Supporting reviews:}\\ {\footnotesize R1. ...Though i understand the previous posters point that the park manor has charm, I'd say that the actual ``charm'' happens in all the wrong places. That there's a nice and funky lobby with some amazing artistic featurettes and a cute patio with a coy boy, or the spacious rooms with a hodgepodge of furniture and beautiful molding on the walls that seems to go nowhere - yes, charming.}\\ {\footnotesize R2. ...but the views higher would have been spectacular. A quirky place which people will love or hate...} \\ {\footnotesize R3. ...this hotel is beautiful! It 's so elegantly decorted but in an antique way. The ceiling in the lobby... a huge king bed, sofa, armoire, vanity desk, kitchen - stove, refridgerator and the necessary kitchenware. I loved all the antique furniture, so nice to look at and change from standard hotel decor...} \\ {\footnotesize R4. ...I would highly recommend this hotel to anyone who is looking for accommodations with more character than you 'll find at the big chain hotels. A marriott looks like a marriott whether you're in singapore or st. Louis. Why not try the local flavor?...}\\ {\footnotesize R5. ...This hotel is old and dated. The furnishings are very old and the whole hotel needs refurbishing . there are gas stoves in the rooms...}\\ \hline {\footnotesize \bf Proposition: ``obvious neglect to fixtures and fittings.''} \\ {\footnotesize \bf Supporting reviews:} \\ {\footnotesize R1. ...i leant on the bannister at one point and almost fell down three floors...the window would not close... the electricity in our room kept cutting out if we had more than one item on...}\\ {\footnotesize R2. ...my friends also got two leaks in their room... the carpets were old and they were obviously never hoovered in years...i saying they should knock the building down and do the whole thing up... }\\ {\footnotesize R3. ...there were loose electric wires hanging from the ceilings-which i tripped over constantly...the locks on the doors were poor... }\\ {\footnotesize R4. ...there are no elevators and the stairs are falling apart- literally!...broken window which was taped up with parcel tape and cardboard... broken heaters...wardrobe with door falling off... }\\ {\footnotesize R5. ...could not charge phones because outlets did not work...cable tv was finnecky...internet was one computer on the second floor and did not work most of the time...broken fixtures and missing electrical covers...building seemed to be crumbling and it leaked in the foyer when it rained...}\\ \hline % \end{tabular} \caption{Two example propositions (from two hotels in our dataset) with 5 reviews which entail them. The reviews were randomly selected from the full list of reviews which entail each proposition.} \label{tab:entailmentset} \end{table*} \subsection{Silver summaries} We order the propositions in decreasing order of their scores $S(m_i)$, and take the top $n$ as the silver summary sentences. We trim the silver summary up to a certain summary length expressed in tokens. Additionally, we employ a MMR \cite{mmr} style redundancy removal technique to keep diverse content in the summary. We implement this control using a simple method of content word overlap.\footnote{We also explored entailment based diversity measures, but we found that simple content word overlap kept the maximum diversity in aspects commented on within the summaries.} Suppose $S$ is the set of propositions selected in the summary so far. The next proposition chosen is the highest scoring proposition $p_k$ where $overlap(p_k, s_i) < 2$, $\forall i, 1 \leq i \leq \lvert S \rvert$. $overlap$ is computed as the number of content words in common between the two propositions based on the stopword list within NLTK \cite{nltk}. The top propositions for two hotel items from our dataset is shown in Table \ref{tab:top_propositions}. Note that these are snippets from actual reviews for that item or product. \begin{table*} \centering \begin{tabular}{p{6.5cm}|p{8.5cm}}\hline {\footnotesize Hotel with 106 reviews} & {\footnotesize Hotel with 61 reviews} \\ \hline {\footnotesize 1. very comfortable (a big deal for me). (58\%)} & {\footnotesize 1. well equipped with good privacy setting. (82\%)}\\ {\footnotesize 2. well maintained, clean, comfortable suites, (57\%)} & {\footnotesize 2. the family-owned vacation spot is very family oriented. (68\%)}\\ {\footnotesize 3. the rooms were very comfortable, and (55\%)} & {\footnotesize 3. this resort is a comfortable family retreat providing a great getaway. (60\%)}\\ {\footnotesize 4. they have a place to eat but (52\%)} & {\footnotesize 4. a very family friendly place to stay. (60\%)}\\ {\footnotesize 5. the size of the room is nice, (51\%)} &{\footnotesize 5. our unit was very clean, comfortable.. (55\%)}\\ {\footnotesize 6. that was a great rate for a suite. (50\%)} & {\footnotesize 6. units have had great proximity to a pool and (54\%)}\\ {\footnotesize 7. still professional; the room was clean and (50\%)} & {\footnotesize }\\ \hline \end{tabular} \caption{The top propositions for two hotels in our dataset. We take the top 10 propositions and show only the ones kept after redundancy filtering. The percentage of total reviews which entail each proposition is shown within braces.} \label{tab:top_propositions} \end{table*} This final set of summary propositions, $S$, chosen for a given summary length, are then concatenated in the chosen order to create the silver summary. When the propositions are not full sentences, we polish them for capitalization and punctuation to match full summaries. Note that no special facility is present for ordering these sentences by coherence. In many cases, the list of top propositions is a very reasonable summary, and in this first work, we have not carried out further processing for coherence. \subsection{Source texts} \label{sec:reviewsampling} The silver summaries from the previous step are composed of extracted text spans from the source reviews. A system trained to produce such sequences from the full set of input reviews will predominantly copy from the input texts. So we make changes to the set of source reviews to turn the data into one suitable for abstractive summarization. Let $N$ be the total set of input reviews. For each proposition $p_i$ in the summary, we remove the review $R_j$, where $p_i$ came from, i.e. $p_i$ is a span in $R_{j}$. This deletion discourages the verbatim copying of text spans from the source, and encourages systems to perform abstraction. The final input reviews on the source side is a reduced set $N'$, $|N'|<|N|$. Note that sentences (propositions) in the silver standard are supported by many other reviews, albeit in different surface forms, so the signals to produce the silver summary are still present in $N'$. An illustration of input review selections is shown in Figure \ref{fig:source_masking}. This way of creating source-summary pairs resembles one of the powerful pretraining objectives for abstractive summarization known as Gap Sentences Generation, introduced by the Pegasus \cite{pegasus} model. \begin{figure} \centering \includegraphics[width=0.45\textwidth]{images/masking_space.png} \caption{Example which demonstrates how reviews are removed from the summarization input side if they were the original source from which a proposition was extracted. Here, P1 was extracted from R2 and P2 from R4. R2 and R4 will be removed entirely from the summarization input. But note that the summary content is present in other reviews which entail P1 and P2.} \label{fig:source_masking} \end{figure} In practice, the number of source reviews that can be processed as input in most standard sequence to sequence models is fewer than the hundreds present in $N'$. So we sample a smaller set $N''$, size $k$, of reviews to fit the sequence length of the encoder. We could further aid the training by adapting $N''$ to be most useful for generating the target sentences. We can create this sample of size $k$ in three ways. {\sc Uniform.} In the simplest case, we can sample $k$ source reviews uniformly at random. The remaining two methods focus on those reviews which entail one of the silver-summary propositions. {\sc Equal.} We sample $k/|S|$ reviews from the set of reviews entailing each proposition in the summary. The intuition here is that the summarization source contains an equal number of reviews supporting each (silver) summary proposition. {\sc Proportional.} We sample $l$ reviews from the entailment set of each summary proposition, where $l$ is proportional to the size of the entailment set. For example, if `seafood is great' is a summary proposition with 40\% entailment support, then 40\% of the summarization input are reviews on the topic of great seafood, although the review containing the verbatim proposition is filtered out. In the next sections, we describe how we use this data to train abstractive summarization systems. \section{Distantly supervised extractive and abstractive models} \section{Datasets} \label{sec:datasets} We use two sources of data in our experiments. The first is an unlabelled review corpus (no gold or human summaries are available for the items). This dataset is used to create silver-standard summaries for self-training. The second source is an evaluation dataset containing a much smaller set of items (not seen during training) and here for each item, the set of source reviews are paired with one or more human summaries of those reviews. In our experiments we use the SPACE corpus collected by \cite{qt}. It comprises of reviews for hotels from the TripAdvisor website. \vspace{2mm} \noindent{\bf SPACE-unlabelled.} This is a collection of 1.1 million reviews for 11,000 hotels. These reviews are not paired with human summaries. We use this set for silver data creation and further for training. \vspace{2mm} \noindent{\bf SPACE-eval.} contains human generated summaries for a smaller set of 50 hotels. For each of these hotels, 100 input reviews are paired with 3 gold-standard human-written abstractive summaries. The human-summaries were created via a two-step process where annotators first selected key sentences from the input reviews, and then wrote a summary based on the sentence collection. The dataset contains 3 general summaries for each hotel, as well as aspect based such as for food and cleanliness. We only use the general summaries for each input. These 50 hotels are divided into 25 for development and 25 for test sets. This evaluation dataset ideally suits our task since the input contains 100 reviews on which one could ask for common opinions and claims. Most other evaluation sets \cite{brazinskas-copycat,fewsum} contain about 8 randomly sampled reviews which may often not have much in common. \section{Models} We build our abstractive systems using pretrained encoder-decoder models based on T5's \cite{t5} framework. These models encode the input reviews as a sequence and autoregressively generate the output summary words as a sequence. In multi-document summarization, especially opinions, source reviews could easily span hundreds of reviews. Standard self-attention layers found in current transformer models have a polynomial scale relationship to input length, making it impossible to encode and attend to several reviews at once. Many summarization systems avoid this issue by including a content selection component as a first step of a pipeline. Recent work has shown that sparse transformers are able to overcome this issue, simplifying models and many times outperforming pipeline based alternatives. For this reason, we have also built models on top of LongT5 \cite{longt5}, which implements sparse attention by combining local attention with transient global attention, allowing tokens to attend locally and globally via transient global nodes. In this work, we employ LongT5 models (of different sizes: Large (770M), XL (3B)) with a limit of 8,192 sentence pieces. We use the public pretrained checkpoint. \footnote{ \url{https://github.com/google-research/longt5}} \section{Experiments} In this section, we explain how we trained our abstractive summarization models. \subsection{Silver Data} We create our silver data using the unlabelled review corpus introduced in Section \ref{sec:datasets}. We called this silver dataset as {\bf SPACE-OpineSum}. To create this set, we followed the procedure outlined in \ref{sec:silverdata}. We used SPACE items with a minimum of 50 reviews (since very few reviews may not have a lot in common to extract out). This set contains about 4,729 items. Our beam pipelines computed a total of around 1.3B entailment predictions on review-proposition pairs from these items. The resulting silver data contains the same number of items, but now each item is paired with a silver summary. \subsection{Self-training} We explore the usefulness of our self-training in two setups: unsupervised and few-shot learning abstractive summarization. For the unsupervised case, we train our models on the silver-data only. For few-shot learning, we use a small number of annotated input-summary pairs ($<$100) for finetuning our self-supervised systems. \subsubsection{Unsupervised training} Given the silver-data, we trained LongT5-Large (770M parameters) and LongT5-(Large, XL) \cite{longt5} models on the sequence-to-sequence task of generating the highest consensus opinions (i.e. most entailed) given a concatenated sequence of the input reviews. These models do not use any gold-annotated examples for training. We compare these systems with prior unsupervised work in the SPACE-eval dataset introduced in Section \ref{sec:datasets}. We select the best checkpoint based the ROUGE performance on the validation set. \subsection{Few-shot Learning} \label{sec:fewshot} Few-shot learning was implemented by finetuning our self-trained models on a few human annotated source-review and summary pairs. To facilitate this setup, we divide the development examples in SPACE-eval (25 total) into a training set with 15 items and a validation set with 10 items. The test set remains unchanged. We use this training set for few-shot learning and the best checkpoint was selected based on ROUGE scores on the validation set. These models trained better with a rather reduced learning rate, $1/5th$ of the standard $1e-4$. We will compare these models with baselines which do not use self-training with silver summaries. Rather these latter models are warm started from the public pretrained checkpoints and similarly trained on the train split we created above. \section{Results} First we present which settings were most useful for self-training before describing summarization performance. One aspect is the relationship between input source reviews and the silver summary. We trained all our models until validation performance plateaus. In this case, ROUGE was computed on the held-out validation silver data set. In Section \ref{sec:reviewsampling}, we present three ways of sampling the set of source reviews to consider as input: equal, uniform, and proportional. We found that our model performance was similar across these settings. Since our output propositions are only a list, perhaps a model can learn the relationship as long as there are frequency signals in the input, but that frequency does not need to be proportional to the frequency seen in the full set of input reviews. We also compared how many reviews, size $k$, should be present on the input side. While there were no strong patterns as for sampling methods, typically more reviews, eg. 160 performed better most of the time. Next we compare how well the models perform in the unsupervised summarization setting. Here we train our models on the silver data and evaluate on the test set of SPACE-eval. Table \ref{tab:results_unsupervised} presents the ROUGE scores. We compare with previous Lexrank \cite{erkan2004lexrank} results as well as the current best system ACESUM by \cite{amplayo-etal-2021-aspect}. We see that {\sc OpineSum} systems obtain very good performance. Sometimes we do not outperform the best state of art system since these systems are sophisticated and tend to employ a variety of techniques (such as aspect extraction) while our model is only driven by self-training. We would expect that the addition of other modules would improve upon our system. \begin{table}[t] \centering \begin{tabular}{r|ccc} {\bf Model} & {\bf R1} & {\bf R2} & {\bf RL}\\ \hline \multicolumn{4}{c}{Previous systems} \\ \hline Lexrank & 36.86 & 8.81 & 22.96 \\ Acesum & 42.64 & 14.50 & 25.20\\\hline \multicolumn{4}{c}{ {\sc OpineSum} systems} \\ \hline LongT5 Large & {\bf 45.84} & \bf{16.30} & {\bf 29.18} \\ LongT5 XL & 43.41 & 13.82 & 23.84 \\ \hline \end{tabular} \caption{Results for the unsupervised setting. The {\sc OpineSum} systems use self-training only and {\em no gold summaries}.} \label{tab:results_unsupervised} \end{table} Table \ref{tab:results_fewshot} presents results in the few-shot learning setup. There are no prior system results for this few-shot setup on the SPACE data. Nevertheless, the T5 models trained without silver-data are a strong ablation to compare against our few-shot trained models with {\sc OpineSum} warm start. Here, we see that the baseline T5 examples are already rather strong and outperform earlier unsupervised systems. In particular, LongT5 is pre-trained with a summarization-relevant objective: the gap sentence prediction task. That is a probably cause for its high performance on this task. Even with this high baseline, we find that our simple self-training still leads to further significant improvements. We show an example output of our system compared with gold standards and prior system in Table \ref{tab:example_outputs}. One noteworthy difference is between our unsupervised and fewshot systems. The unsupervised system produces shorter summaries and at time disfluencies due to being trained on smoothed propositions. Fewshot learning improves along these dimensions being the summary much closer to the gold standards. Also note that ACESUM summaries contain many phrasal repetitions while that is absent in our outputs. \begin{table*}[h!] \centering \begin{tabular}{|p{15cm}|} \hline {\footnotesize \bf Gold standard summaries} \\ {\footnotesize G1. This hotel was very nice and within walking distance of the Vatican, Colosseum, Forum, ST Peters, etc. Staff were helpful in every way, and the attention to each request and question was efficient and treated with courtesy. The air-conditioned rooms were very nice, clean, and comfortable, with immaculate bathrooms to boot. Breakfast, which is included, was pretty good for a continental buffet.} \\ \\ {\footnotesize G2. Staff received mixed reviews, but were overall considered friendly, attentive, and helpful. The hotel, rooms, and bathrooms were very clean, with daily maid service and linen change. The room was beautiful and airy. The Breakfast was great and varied. The location is excellent, away from the hordes of tourists. It's just a short walk over Ponte Umberto to Piazza Navona, or across Ponte Cavour to reach the popular shopping areas. The building is nice. The restaurant was first rate. However, some thought that the hotel is pricey for the quality of the room.}\\ \\ {\footnotesize G3. The staff was extremely courteous and helpful. The wooden floors were all cleaned and maintained; as well as everything else in the hotel. The rooms were beautiful and large, and the bathroom was immaculate. There was a good, buffet style breakfast with particularly enjoyable cold meats, and with anything else desired. The hotel is located close enough to the Vatican, Colosseum, the Forum, and St. Peters- overall a great location.} \\ \hline \\ {\footnotesize {\bf ACESUM} \cite{amplayo-etal-2021-aspect}}\\ {\footnotesize The staff were very friendly and helpful. the room was clean and clean. it was a great place to stay. if you want to stay in cicerone, it is a good place to get to the shopping area. there are many restaurants, restaurants and restaurants. but the staff are very friendly, friendly and friendly. they are a very nice hotel, a nice place to eat, and a lot of good food, as well as a small restaurant, the breakfast was very good, but a bit of.} \\ \\\hline {\footnotesize \bf {\sc OpineSum}-unsupervised} \\ {\footnotesize The hotel is located within walking distance of the Vatican. The rooms were clean and comfortable. The Cicerone is a nice hotel. As far as the hotel goes. The reception area is nice but the rooms. The breakfast buffet was fine. The room was a good size.} \\ \\\hline {\footnotesize \bf {\sc OpineSum}-fewshot}\\ {\footnotesize The staff was friendly and helpful. The rooms and hotel itself is modern, extremely clean! The rooms are a good size, with comfy beds, a breadth of amenities such as a great shower and a comfortable bed. The breakfast buffet is average, but very good, with lots of variety. The location is very central. The hotel is within walking distance of the Vatican and Piazza Navona. The Cicerone is a beautiful hotel, but the hallways need refurbishing.}\\ \hline \end{tabular} \caption{Example summaries for one item in our dataset. We show the 3 gold standard summaries available on the evaluation set along with previous best system (ACESUM) and our unsupervised and few-shot self-trained systems.} \label{tab:example_outputs} \end{table*} \begin{table} \centering \begin{tabular}{rc|ccc} {\bf Model} & {\bf {\footnotesize Checkpoint}} & {\bf R1} & {\bf R2} & {\bf RL}\\ \hline {\footnotesize LongT5 L} & {\footnotesize Vanilla} & 45.51 & 13.03 & 29.28\\ {\footnotesize LongT5 L} & {\footnotesize \sc{OpineSum}} & {\bf 47.19} & {\bf 14.60} & {\bf 30.13} \\ \hline \end{tabular} \caption{Results for the few-shot learning setting. All the models were finetuned on a small set of 15 training examples described in Section \ref{sec:fewshot}. `Vanilla' systems are warm started from public checkpoints and do not see self-training data.} \label{tab:results_fewshot} \end{table} \section{Conclusion} We have presented a simple self-training approach which leads to sizeable gains on both unsupervised and few-shot abstractive opinion summarization. Our work is one of the first to demonstrate how an intuitive idea of consensus can be incorporated during self-training. It opens up a number of challenges and new problems for future work. In particular, while our silver data contains the provenance for each top proposition---meaning the set of reviews which support each the proposition---this information is only minimally used at the moment. Future work could explore how models could be trained using the entailment weights (scores) of each proposition and the exact links to entailing reviews to yield more performance improvements and faithful generation. We also hope that such self-training models could serve as good checkpoints for other tasks in the opinion domain such as review helpfulness prediction or product popularity and ratings. We hope to explore such directions in future work.
{ "attr-fineweb-edu": 1.21875, "attr-cc_en_topic": 0, "domain": "arxiv" }
BkiUfp_xK1yAgYaM4wgp
\section{Introduction} \label{SectIntroduction} The sports league scheduling problem studied in this note, called ``prob026'' in CSPLib \cite{CSPLib} and also known as the ``balanced tournament design'' problem in combinatorial design theory \cite[pages 238-241]{Colbourn&Dinitz1996}, is a NP-hard problem \cite{Briskorn&al2010} that seems to be first introduced in \cite{Gelling&Odeh1974}: \begin{itemize} \item There are $T=2n$ teams (i.e., $T$ even). The season lasts $W=T-1$ weeks. Weeks are partitioned into $P=T/2$ slots called ``periods'' or ``stadiums''. Each week, one match is scheduled in every period; \item $c_\mathcal{H}$ constraint: All teams play each other exactly once ($\mathcal{H}$alf competition); \item $c_\mathcal{P}$ constraint: No team plays more than twice in a $\mathcal{P}$eriod. This constraint may be motivated by the equal distribution of stadiums to teams; \item $c_\mathcal{W}$ constraint: Every team plays exactly one game in every $\mathcal{W}$eek of the season, i.e., all teams are different in a week. \end{itemize} The problem then is to schedule a tournament with respect to these definitions and constraints. A solution to prob026 is a complete assignment of $D = \{(t,t'), 1 \le t < t' \le T\}$ items (couples of $t$eams) to variables of $X=\{x=\langle p,w \rangle, 1 \le p \le P, 1 \le w \le W\}$ (couples of $p$eriods and $w$eeks) verifying the constraint set $C=\{c_\mathcal{H}, c_\mathcal{P}, c_\mathcal{W}\}$, $\langle p,w \rangle = (t,t')$ meaning that team $t$ meets team $t'$ in period $p$ and week $w$. Thus, a solution can be conveniently represented by a $P \times W$ sized table, whose items are integer couples $(t, t')$, see Table~\ref{Example_valid_schedule} for an example of a valid schedule for $T = 8$. For $T = 70$ teams, this represents a problem with 2\,415 variables and 2\,415 values per variable. There are $T(T - 1)/2$ matches to be scheduled. A valid schedule can be thought of as a particular permutation of these matches. So, for $T$ teams, the search space size is $[T(T - 1)/2]!$. \begin{table}[h] \begin{center} \caption{A valid schedule for 8 teams.} \label{Example_valid_schedule} \begin{tabular}{cccccccc} \noalign{\smallskip}\hline \multirow{2}{*}{Periods} & \multicolumn{7}{c}{Weeks} \\ \cline{2-8} & 1 & 2 & 3 & 4 & 5 & 6 & 7\\ \hline 1 & 1,2 & 6,8 & 2,5 & 4,5 & 4,7 & 3,8 & 1,7\\ 2 & 3,7 & 5,7 & 3,4 & 1,8 & 5,6 & 2,4 & 2,6\\ 3 & 4,6 & 1,4 & 7,8 & 3,6 & 2,8 & 1,5 & 3,5\\ 4 & 5,8 & 2,3 & 1,6 & 2,7 & 1,3 & 6,7 & 4,8\\ \hline \end{tabular} \end{center} \end{table} Direct construction methods exist when $(T-1) \bmod 3 \neq 0$ \cite{Hamiez&Hao2004a,Haselgrove&Leech1977} or $T/2$ is odd \cite{Lamken&Vanstone1985,Schellenberg&al1977}. However, finding a solution (schedule) in the general case for any arbitrary $T$ remains a highly challenging task. Indeed, to our knowledge, the best performing search algorithm \cite{Hamiez&Hao2008} can solve all the instances for $T$ up to 50, but only some cases when $50 < T \le 70$. Other representative solution approaches include integer programming \cite{McAloon&al1997} (limited to $T \le 12$), transformation into the SAT problem \cite{Bejar&Manya2000} ($T \le 20$), distributed approach ($T \le 28$ according to \cite{Gomes&al1998a}), constraint programming \cite{vanHentenryck&al1999} and tabu search \cite{Hamiez&Hao2001} ($T \le 40$). In this paper, we present two improvements to the \texttt{En}umera\-tive \texttt{A}lgorithm (\texttt{EnASS}) proposed in \cite{Hamiez&Hao2008}. With the proposed enhancements, \textbf{all} the instances for $12 \leq T \le 70$ can now be solved. We provide in the next section a brief recall of the original \texttt{EnASS} method. We show then in the following sections a new \texttt{EnASS} variant that solves \textbf{all} instances up to $T = 60$ (including the problematic $T \bmod 4 = 0$ cases) and another new variant that solves all the $12 \leq T \leq 70$ instances. \section{A brief recall of the \texttt{EnASS} algorithm} \texttt{EnASS} starts with a complete $\overline{s}$ conflicting assignment. $\overline{s}$ is built, in linear-time complexity, to satisfy the $c_\mathcal{W}$ and $c_\mathcal{H}$ constraints (thanks to patterned one-factorization \cite[page 662]{Colbourn&Dinitz1996}). At this stage, the remaining $c_\mathcal{P}$ constraint is not verified in $\overline{s}$, see Table~\ref{Initial_schedule_8} where team 8 appears more than twice in the 4th period. \begin{table \begin{center} \caption{Initial conflicting $\overline{s}$ schedule for 8 teams.}\label{Initial_schedule_8} \begin{tabular}{cccccccc} \noalign{\smallskip}\hline \multirow{2}{*}{Periods} & \multicolumn{7}{c}{Weeks} \\ \cline{2-8} & 1 & 2 & 3 & 4 & 5 & 6 & 7 \\ \hline 1 & 1,2 & 2,3 & 3,4 & 4,5 & 5,6 & 6,7 & 1,7 \\ 2 & 3,7 & 1,4 & 2,5 & 3,6 & 4,7 & 1,5 & 2,6 \\ 3 & 4,6 & 5,7 & 1,6 & 2,7 & 1,3 & 2,4 & 3,5 \\ {\bfseries 4} & 5,{\bfseries 8} & 6,{\bfseries 8} & 7,{\bfseries 8} & 1,{\bfseries 8} & 2,{\bfseries 8} & 3,{\bfseries 8} & 4,{\bfseries 8} \\ \hline \end{tabular} \end{center} \end{table} \begin{algorithm \caption{\texttt{EnASS}: An overview.} \label{AlgoEnASS} \begin{algorithmic}[1] \REQUIRE Two periods ($p$ and $\overline{p}$) and a week ($w$) \IF[A solution is obtained since all periods are filled and valid according to $\mathcal{R}$]{$p=P+1$} \RETURN \TRUE \ENDIF \IF[Period $p$ is filled and valid according to $\mathcal{R}$, try to fill next period]{$w=w_l+1$} \label{BeginPeriodFilled} \RETURN \texttt{EnASS}($p+1, w_f, 1$) \ENDIF \label{EndPeriodFilled} \IF[Backtrack since no match from week $w$ in $\overline{s}$ can be scheduled in period $p$ of week $w$ without violating $\mathcal{R}$]{$\overline{p}=P+1$} \RETURN \FALSE \ENDIF \IF[The $\overline{s}\langle \overline{p},w \rangle$ match is already scheduled, try next match]{$\exists\,1 \le p' < p : \langle p',w \rangle =\overline{s}\langle \overline{p},w \rangle$} \RETURN \texttt{EnASS}($p, w, \overline{p}+1$) \ENDIF \STATE $\langle p,w \rangle \gets \overline{s}\langle \overline{p},w \rangle$ \label{Assign} \COMMENT{Schedule the $\overline{s}\langle \overline{p},w \rangle$ match in period $p$ of week $w$} \IF[The previous assignment and next calls lead to a solution]{$\mathcal{R}$ is locally verified and \texttt{EnASS}$(p, w+1, 1)=$ \TRUE} \RETURN \TRUE \ENDIF \STATE \COMMENT{From this point, $\mathcal{R}$ is locally violated or next calls lead to a failure} \STATE Undo step \ref{Assign} \COMMENT{Backtrack} \RETURN \texttt{EnASS}($p, w, \overline{p}+1$) \COMMENT{Try next value for $\langle p,w \rangle$} \end{algorithmic} \end{algorithm} Roughly speaking, \texttt{EnASS} uses $\overline{s}$ to search for a valid tournament by filling a $P \times W$ table (initially empty) row by row, see Algorithm~\ref{AlgoEnASS} where $w_f$ and $w_l$ are the $f$irst and $l$ast weeks \texttt{EnASS} considers when filling any period $p$ ($1 \le w_f < w_l \le W$), $\overline{s}\langle \overline{p},w \rangle$ is the match in $\overline{s}$ scheduled in period $\overline{p}$ and week $w$, and $\mathcal{R}$ is a set of properties (or ``$\mathcal{R}$equirements'') that (partial or full) solutions must verify. \texttt{EnASS} admits three integer parameters: $p$ and $w$ specify which $\langle p,w \rangle$ variable is currently considered, $\overline{p}$ specifies the value assignment tried (see step \ref{Assign}). The function returns TRUE if a solution has been found or FALSE otherwise. Backtracks are sometimes performed in the latter case. \texttt{EnASS} is called first, after the $\overline{s}$ initialization, with $p = 1, w = w_f$ and $\overline{p} = 1$ meaning that it tries to fill the slot in the first period of week $w_f$ with the $\overline{s}\langle 1,w_f \rangle$ match. The basic \texttt{EnASS} skeleton presented in Algorithm~\ref{AlgoEnASS} solves prob026 only up to $T = 12$ when the $\mathcal{R}$ set is restricted to $\left\{ c_\mathcal{P} \right\}$ while considering the first week as invariant with respect to $\overline{s}$ (i.e., $\forall 1 \le p \le P, \langle p, 1 \rangle = \overline{s}\langle p, 1 \rangle$) with $w_f = 2$ (since the first week is invariant) and $w_l = W$. Note that making the first week invariant helps to avoid some evident symmetries mentioned in \cite[see Sect.~4 and 5.3]{Hamiez&Hao2008}. To tackle larger-size problems, several \texttt{EnASS} variants were considered in \cite{Hamiez&Hao2008}. \texttt{EnASS}$_0$ solved prob026 up to $T = 32$, except the $T = 24$ case, including in $\mathcal{R}$ an implicit property (called ``$c_\mathcal{D}$'' in \cite{Hamiez&Hao2008}) of all prob026 solutions: $\mathcal{R}_0 = \left\{c_\mathcal{P}, c_\mathcal{D} \right\}$. The $c_\mathcal{D}$ property was not originally mentioned in the seminal definition of the problem \cite{Gelling&Odeh1974} and seems to be first introduced in \cite{Schellenberg&al1977}. \texttt{EnASS}$_1$, derived from \texttt{EnASS}$_0$ by further including an ``implied'' requirement ($r_{\Rightarrow}$), solved all instances up to $T = 50$: $\mathcal{R}_1 = \left\{c_\mathcal{P}, c_\mathcal{D}, r_{\Rightarrow} \right\}$. Finally, \texttt{EnASS}$_2$ solved some cases (when $T \bmod 4 \neq 0$) for $T$ up to 70 with two additional invariants ($r_I$ and $r_V$): $\mathcal{R}_2 = \left\{c_\mathcal{P}, c_\mathcal{D}, r_{\Rightarrow}, r_I, r_V \right\}$. \section{Solving all instances of prob026 up to $T = 60$} \label{CTS60} The rule $r'_{\Rightarrow}$ used to solve \textbf{all} prob026 instances up to $T = 60$ resembles the original $r_{\Rightarrow}$ requirement introduced in \cite[Sect.~7]{Hamiez&Hao2008}. Like $r_{\Rightarrow}$, $r'_{\Rightarrow}$ fixes more than one variable (two exactly, to be more precise) when exploring a new branch in the search tree. The difference between $r_{\Rightarrow}$ and the new $r'_{\Rightarrow}$ rule is the weeks that are concerned: While $r_{\Rightarrow}$ connects any week $w_f \le w \le P$ to week $T-w+1$, the $r'_{\Rightarrow}$ constraint links any week $1 \le w \le P - 1$ together with week $W-w+1$. More formally, $\forall\,1 \le w \le P-1, r'_\Rightarrow(p,w) \Leftrightarrow \langle p,w \rangle = \overline{s}\langle \overline{p},w \rangle \Rightarrow \langle p,W-w+1 \rangle = \overline{s}\langle \overline{p},W-w+1 \rangle$. This leads to \texttt{EnASS}$_3$ which comes from the \texttt{EnASS}$_1$ algorithm from \cite{Hamiez&Hao2008} by replacing in $\mathcal{R}_1$ the $r_\Rightarrow$ requirement with the new $r'_{\Rightarrow}$ rule: $\mathcal{R}_3 = \{c_\mathcal{P}, c_\mathcal{D},r'_\Rightarrow\}$. Like for \texttt{EnASS}$_1$, step \ref{Assign} in the basic \texttt{EnASS} description (see Algorithm~\ref{AlgoEnASS}) may be adapted since one additional variable has now to be instantiated and $w_l$ has to be set to $P-1$ before running \texttt{EnASS}$_3$. Steps~\ref{BeginPeriodFilled}--\ref{EndPeriodFilled} have also to be modified since, when $w = w_l + 1$, the $P$ week is not yet filled (so, the $p$ period is not entirely filled either). Table~\ref{Example_valid_schedule} in Sect.~\ref{SectIntroduction} shows an example of a solution found by \texttt{EnASS}$_3$ for $T=8$: For instance, scheduling the $(3,4)$ match from week 3 in period 2 forces the $(5,6)$ match from week 5 ($5=7-3+1$) to be also in period~2. In Table~\ref{CTS3vsCTS1}, we show for $6 \le T \le 50$ comparisons of our new \texttt{EnASS}$_3$ variant (as well as another new \texttt{EnASS}$_4$ variant discussed in the next section), against the \texttt{EnASS}$_1$ algorithm which solves all the instances for $T \le 50$ within 3 hours per $T$ value. The reported statistics include execution times (in seconds in all tables) and number of backtracks (columns labeled ``$|$BT$|$'') needed to find a first solution. In Table~\ref{CTS3vsCTS2}, we show for $52 \leq T \leq 70$ comparisons between the new variant \texttt{EnASS}$_3$ (and \texttt{EnASS}$_4$) and the \texttt{EnASS}$_2$ algorithm from \cite{Hamiez&Hao2008} which solves \emph{some} instances with $T \leq 70$ where $T \bmod 4 \neq 0$. ``--'' marks in the ``Time'' (respectively ``$|$BT$|$'') columns indicate that the method found no solution within 3 hours (resp. that $|$BT$|$ exceeds the maximal integer value authorized by the compiler/system, i.e., 4\,294\,967\,295). All \texttt{EnASS} variants were coded in \texttt{C} and all computational results were obtained on an Intel PIV processor (2 Ghz) Linux station with 2 Gb RAM. \begin{table}[h] \begin{center} \caption{Solving all prob026 instances up to $T=50$. }\label{CTS3vsCTS1} \begin{small} \begin{tabular}{rrrcrrcrr} \noalign{\smallskip}\hline \multirow{2}{*}{$T$} & \multicolumn{2}{c}{\texttt{EnASS}$_1$ \cite{Hamiez&Hao2008}} & & \multicolumn{2}{c}{\texttt{EnASS}$_3$ (Sect.~\ref{CTS60})} & & \multicolumn{2}{c}{\texttt{EnASS}$_4$ (Sect.~\ref{CTS70})}\\ \cline{2-3} \cline{5-6} \cline{8-9} & Time & $|$BT$|$ && Time & $|$BT$|$ && Time & $|$BT$|$\\ \hline 6 & $<1$ & 6 && $<1$ & 1 && -- & --\\ 8 & $<1$ & 16 && $<1$ & 6 && $<1$ & 5\\ 10 & $<1$ & 715 && $<1$ & 350 && -- & --\\ 12 & $<1$ & 86 && $<1$ & 25 && $<1$ & 111\\ 14 & $<1$ & 451 && $<1$ & 65 && $<1$ & 125\\ 16 & $<1$ & 557 && $<1$ & 713 && $<1$ & 560\\ 18 & $<1$ & 1\,099 && $<1$ & 772 && $<1$ & 465\\ 20 & $<1$ & 2\,811 && $<1$ & 708 && $<1$ & 227\\ 22 & $<1$ & 11\,615 && $<1$ & 1\,142 && $<1$ & 3\,237\\ 24 & $<1$ & 12\,623 && $<1$ & 5\,332 && $<1$ & 736\\ 26 & $<1$ & 37\,708 && $<1$ & 5\,313 && $<1$ & 2\,311\\ 28 & $<1$ & 35\,530 && $<1$ & 16\,365 && $<1$ & 85\,315\\ 30 & $<1$ & 650\,811 && $<1$ & 49\,620 && $<1$ & 68\,033\\ 32 & $<1$ & 332\,306 && $<1$ & 91\,094 && $<1$ & 22\,407\\ 34 & $<1$ & 1\,342\,216 && $<1$ & 131\,169 && $<1$ & 21\,696\\ 36 & $<1$ & 2\,160\,102 && $<1$ & 524\,491 && $<1$ & 248\,184\\ 38 & 5.34 & 13\,469\,359 && $<1$ & 763\,317 && $<1$ & 83\,636\\ 40 & 6.25 & 16\,393\,039 && 1.70 & 7\,335\,775 && $<1$ & 220\,480\\ 42 & 107.69 & 256\,686\,929 && 2.74 & 11\,575\,637 && $<1$ & 612\,423\\ 44 & 876.91 & 1\,944\,525\,360 && 19.80 & 79\,587\,812 && 1.02 & 2\,489\,017\\ 46 & 1\,573.31 & 3\,565\,703\,651 && 10.22 & 38\,865\,293 && 1.59 & 3\,430\,033\\ 48 & 542.79 & 1\,231\,902\,706 && 1\,112.55 & 4\,289\,081\,568 && 5.69 & 12\,080\,931\\ 50 & 6\,418.52 & -- && 4\,018.20 & -- && 17.38 & 34\,639\,665\\ \hline \end{tabular} \end{small} \end{center} \end{table} \begin{sidewaystable} \begin{center} \caption{Solving all prob026 instances when $50 < T \le 70$. }\label{CTS3vsCTS2} \begin{tabular}{crrcrrcrr} \noalign{\smallskip}\hline \multirow{2}{*}{$T$} & \multicolumn{2}{c}{\texttt{EnASS}$_2$ \cite{Hamiez&Hao2008}} & & \multicolumn{2}{c}{\texttt{EnASS}$_3$ (Sect.~\ref{CTS60})} & & \multicolumn{2}{c}{\texttt{EnASS}$_4$ (Sect.~\ref{CTS70})}\\ \cline{2-3} \cline{5-6} \cline{8-9} &Time &$|$BT$|$ &&Time &$|$BT$|$ &&Time &$|$BT$|$\\ \hline 52&-- &-- &&377.84 &1\,345\,460\,512 &&50.11 &101\,432\,823\\ 54&10.59 &29\,767\,940 &&763.08 &2\,802\,487\,580 &&101.74 &196\,808\,595\\ 56&-- &-- &&2\,552.65 &-- &&334.26 &753\,747\,164\\ 58&269.88 &827\,655\,311 &&13\,715.33 &-- &&878.96 &1\,851\,547\,682\\ 60&-- &-- &&198\,250.44&-- &&2\,364.47 &--\\ 62&279.38 &494\,071\,117 &&-- &-- &&9\,866.51 &--\\ 64&-- &-- &&-- &-- &&32\,386.67 &--\\ 66&7\,508.51&1\,614\,038\,658&&-- &-- &&85\,989.73 &--\\ 68&-- & &&-- &-- &&518\,194.31 &--\\ 70&8\,985.05&-- &&-- &-- &&1\,512\,574.41 &--\\ \hline \end{tabular} \end{center} \end{sidewaystable} From Table~\ref{CTS3vsCTS1}--\ref{CTS3vsCTS2}, one observes that \texttt{EnASS}$_3$ solves more prob026 instances than \texttt{EnASS}$_1$ within 3 hours. Indeed, while \texttt{EnASS}$_1$ is limited to $T \le 50$, \texttt{EnASS}$_3$ finds solutions for $T$ up to 56 in at most 67 minutes (see the $T = 50$ case in Table~\ref{CTS3vsCTS1}). Moreover, except two cases ($T \in \left\{ 16, 48\right\}$), the number of backtracks required to find a solution is much smaller for \texttt{EnASS}$_3$ than for \texttt{EnASS}$_1$. Table~\ref{CTS3vsCTS2} shows that the comparison between \texttt{EnASS}$_3$ and \texttt{EnASS}$_2$ is somewhat mitigated. Indeed, \texttt{EnASS}$_3$ is able to find solutions for \textbf{all} $T$ up to 56 within 3 hours while \texttt{EnASS}$_2$ solves the instances up to $T=70$, but only when $T \bmod 4 \neq 0$. For the cases that are solved by both \texttt{EnASS}$_3$ and \texttt{EnASS}$_2$, \texttt{EnASS}$_2$ finds a solution much faster. On the other hand, \texttt{EnASS}$_3$ finds solutions for $T \in \left\{52, 56, 60 \right\}$ for which \texttt{EnASS}$_2$ fails. Finally, one notices that \texttt{EnASS}$_3$ requires much more time to solve the $T \in \left\{58, 60 \right\}$ instances (about 55 hours for $T = 60$). \section{Solving all prob026 instances when $50 < T \le 70$}\label{CTS70} The rule $r'_I$ used to solve \textbf{all} prob026 instances for $50 < T \le 70$ is similar to the original $r_I$ requirement introduced in \cite[Sect.~7]{Hamiez&Hao2008}. Indeed, like $r_I$, $r'_I$ inverses two weeks and keeps them invariant during the search. The only difference between $r_I$ and the new $r'_I$ rule is the weeks that are concerned: While $r_I$ considers weeks 2 and $W$, the $r'_I$ constraint inverses weeks 2 and $W-1$. More formally, $\forall w \in \{2, W-1\}, r'_I(w) \Leftrightarrow \forall\,1 \le p \le P, \langle p,w \rangle = \overline{s}\langle P-p+1,w \rangle$. This leads to \texttt{EnASS}$_4$ which comes from \texttt{EnASS}$_3$ by adding in $\mathcal{R}_3$ the new $r'_I$ rule: $\mathcal{R}_4 = \{c_\mathcal{P}, c_\mathcal{D}, r'_\Rightarrow, r'_I\}$. Since the first two weeks are now invariant (and the last two due to $r'_{\Rightarrow}$), $w_f$ has to be set to 3 before running \texttt{EnASS}$_4$. Table~\ref{Example_valid_schedule} in Sect.~\ref{SectIntroduction} shows an example of a solution found by \texttt{EnASS}$_4$ (and \texttt{EnASS}$_3$) for $T=8$: For instance, the first match in week 2 is $\overline{s}\langle 4-1+1,2 \rangle$, i.e., $\langle 1, 2 \rangle = (6, 8)$. The computational performance of the \texttt{EnASS}$_4$ variant is provided in Table~\ref{CTS3vsCTS1} for $6 \le T \le 50$ and in Table~\ref{CTS3vsCTS2} for $50 < T \le 70$\footnote{The first solution found by \texttt{EnASS}$_4$ for $50 < T \le 70$ is available on-line from \texttt{http://www .info.univ-angers.fr/pub/hamiez/EnASS4/Sol52-70.html}.}. One notices that \texttt{EnASS}$_4$ is faster than \texttt{EnASS}$_3$ and \texttt{EnASS}$_1$ (see the ``$|$BT$|$'' columns in Table~\ref{CTS3vsCTS1}) to solve instances when $T \ge 12$ (and for $T=8$), except for the $T \in \{12, 14, 16, 22, 28, 30\}$ cases. Furthermore, within 3 hours per $T$ value, \texttt{EnASS}$_4$ is capable of solving larger instances (up to $T=62$, see Table~\ref{CTS3vsCTS2}) than \texttt{EnASS}$_1$ ($T \le 50$) and \texttt{EnASS}$_3$ ($T \le 56$). While \texttt{EnASS}$_2$ solves only some instances for $50 < T \le 70$ (those verifying $T \bmod 4 \neq 0$, see Table~\ref{CTS3vsCTS2}), \texttt{EnASS}$_4$ finds solutions for all these cases. This is achieved within 3 hours for $T$ up to 62, but larger instances can require more execution time (about 18 days for $T=70$). Finally, note that adding the new $r'_I$ rule excludes solutions for $T \in \{6, 10\}$. \section{Conclusion} We provided in this short note two enhancements to an \texttt{En}umerative \texttt{A}lgo\-rithm for \texttt{S}ports \texttt{S}cheduling (\texttt{EnASS}) previously proposed in \cite{Hamiez&Hao2008}. These enhancements are based on additional properties (identified in \emph{some} solutions) as new constraints to reduce the search tree constructed by the algorithm. With these enhancements, all prob026 instances with $T \leq 70$ can be solved for the first time. Since the main idea behind the enhancements is to add refined requirement rules in the \texttt{EnASS} method, we expect that the method can be further improved to solve prob026 instances for $T > 70$. \section*{Acknowledgments} This work was partially supported by the ``Pays de la Loire'' Region (France) within the LigeRO (2010 -- 2013) and RaDaPop (2009 -- 2013) projects.
{ "attr-fineweb-edu": 1.68457, "attr-cc_en_topic": 0, "domain": "arxiv" }
BkiUdY04ubnjot1QX4SB
\section{Introduction} The ongoing explosion of recorded tracking data is enabling the study of fine-grained behavior in many domains: sports \citep{miller2014factorized,yue2014learning,stephan,corrMAimlearn}, video games \citep{dagger}, video \& motion capture \citep{suwajanakorn2017synthesizing,taylor2017deep,xue2016visual}, navigation \& driving \citep{ziebart2009human,zhang2017query,infogail}, laboratory animal behaviors \citep{johnson2016composing,eyjolfsdottir2017learning}, and tele-operated robotics \citep{abbeel2004apprenticeship,lin2006towards}. However, it is an open challenge to develop \emph{sequential generative models} leveraging such data, for instance, to capture the complex behavior of multiple cooperating agents. Figure \ref{fig:bball_examples} shows an example of offensive players in basketball moving unpredictably and with multimodal distributions over possible trajectories. Figure \ref{fig:boids_examples} depicts a simplified Boids model from \citep{reynolds} for modeling animal schooling behavior in which the agents can be friendly or unfriendly. In both cases, agent behavior is \emph{highly coordinated and non-deterministic}, and the space of all multi-agent trajectories is naively exponentially large. When modeling such sequential data, it is often beneficial to design hierarchical models that can capture long-term coordination using intermediate variables or representations \citep{li2015hierarchical,stephan}. An attractive use-case for these intermediate variables is \emph{to capture interesting high-level behavioral semantics in an interpretable and manipulable way}. For instance, in the basketball setting, intermediate variables can encode long-term strategies and team formations. Conventional approaches to learning interpretable intermediate variables typically focus on learning disentangled latent representations in an unsupervised way (e.g., \citep{infogail, robust}), but it is challenging for such approaches to handle complex sequential settings \citep{vlae}. To address this challenge, we present a hierarchical framework that can effectively learn such sequential generative models, while using programmatic weak supervision. Our approach uses a labeling function to programmatically produce useful weak labels for supervised learning of interpretable intermediate representations. This approach is inspired by recent work on data programming \citep{data_programming}, which uses cheap and noisy labeling functions to significantly speed up learning. In this work, we extend this approach to the spatiotemporal regime. Our contributions can be summarized as follows: \vspace{-0.05in} \begin{itemize} \item We propose a hierarchical framework for sequential generative modeling. Our approach is compatible with many existing deep generative models. \item We show how to programmatically produce weak labels of macro-intent{}s to train the intermediate representation in a supervised fashion. Our approach is easy to implement and results in highly interpretable intermediate variables, which allows for conditional inference by grounding macro-intent{}s to manipulate behaviors. \item Focusing on multi-agent tracking data, we show that our approach can generate high-quality trajectories and effectively encode long-term coordination between multiple agents. \end{itemize} \vspace{-0.05in} \begin{figure}[t] \begin{center} \begin{subfigure}[t]{0.49\columnwidth} \centering \includegraphics[width=.49\columnwidth]{figs/ex_1.png} \includegraphics[width=.49\columnwidth]{figs/ex_2.png} \caption{Offensive basketball players have multimodal behavior (ball not shown). For instance, the green player ($\blacktriangledown$) moves to either the top-left or bottom-left.} \label{fig:bball_examples} \end{subfigure} \hfill \begin{subfigure}[t]{0.49\columnwidth} \centering \includegraphics[width=.49\columnwidth]{figs/boids_friend.png} \includegraphics[width=.49\columnwidth]{figs/boids_foe.png} \caption{Two types of generated behaviors for 8 agents in Boids model. \tb{Left}: Friendly blue agents group together. \tb{Right}: Unfriendly red agents stay apart.} \label{fig:boids_examples} \end{subfigure} \vspace{-0.05in} \caption{Examples of coordinated multimodal multi-agent behavior. \vspace{-10pt} } \label{fig:examples} \end{center} \vskip -0.1in \end{figure} In addition to synthetic settings, we showcase our approach in an application on modeling team offense in basketball. We validate our approach both quantitatively and qualitatively, including a user study comparison with professional sports analysts, and show significant improvements over standard baselines. \section{Related Work} \label{sec:related} \textbf{Deep generative models.} The study of deep generative models is an increasingly popular research area, due to their ability to inherit both the flexibility of deep learning and the probabilistic semantics of generative models. In general, there are two ways that one can incorporate stochastics into deep models. The first approach models an explicit distribution over actions in the output layer, e.g., via logistic regression \citep{chen2015learning,oord2016wavenet,oord2016pixel,stephan,eyjolfsdottir2017learning}. The second approach uses deep neural nets to define a transformation from a simple distribution to one of interest \citep{goodfellow2014generative,vae,rezende2014stochastic} and can more readily be extended to incorporate additional structure, such as a hierarchy of random variables \citep{ranganath2016hierarchical} or dynamics \citep{johnson2016composing,vrnn,sontag,srnn}. Our framework can incorporate both variants. \textbf{Structured probabilistic models.} Recently, there has been increasing interest in probabilistic modeling with additional structure or side information. Existing work includes approaches that enforce logic constraints \citep{akkaya2016control}, specify generative models as programs \citep{tran2016edward}, or automatically produce weak supervision via data programming \citep{data_programming}. Our framework is inspired by the latter, which we extend to the spatiotemporal regime. \textbf{Imitation Learning.} Our work is also related to imitation learning, which aims to learn a policy that can mimic demonstrated behavior \citep{syed2008game,abbeel2004apprenticeship,ziebart2008maximum,gail}. There has been some prior work in multi-agent imitation learning \citep{corrMAimlearn,song2018multi} and learning stochastic policies \citep{gail,infogail}, but no previous work has focused on learning generative polices while simultaneously addressing generative and multi-agent imitation learning. For instance, experiments in \citep{gail} all lead to highly peaked distributions, while \citep{infogail} captures multimodal distributions by learning unimodal policies for a fixed number of experts. \citep{hrolenok} raise the issue of learning stochastic multi-agent behavior, but their solution involves significant feature engineering. \section{Background: Sequential Generative Modeling} Let $\tb{x}_t \in \mathbb{R}^d$ denote the state at time $t$ and $\tb{x}_{\leq T} = \{ \tb{x}_1, \dots, \tb{x}_T \}$ denote a sequence of states of length $T$. Suppose we have a collection of $N$ demonstrations $\mathcal{D} = \{ \tb{x}_{\leq T} \}$. In our experiments, all sequences have the same length $T$, but in general this does not need to be the case. The goal of sequential generative modeling is to learn the distribution over sequential data $\mathcal{D}$. A common approach is to factorize the joint distribution and then maximize the log-likelihood: \eq{ \theta^* = \argm{\theta} \sum_{\tb{x}_{\leq T} \in \mathcal{D}} \log p_{\theta} (\tb{x}_{\leq T}) = \argm{\theta} \sum_{\tb{x}_{\leq T} \in \mathcal{D}} \sum_{t=1}^T \log p_{\theta} (\tb{x}_t | \tb{x}_{<t}), \label{eq:condprobs} } where $\theta$ are the learn-able parameters of the model, such as a recurrent neural network (RNN). \textbf{Stochastic latent variable models.} However, RNNs with simple output distributions that optimize Eq. (\ref{eq:condprobs}) often struggle to capture highly variable and structured sequential data. For example, an RNN with Gaussian output distribution has difficulty learning the multimodal behavior of the green player moving to the top-left/bottom-left in Figure \ref{fig:bball_examples}. Recent work in sequential generative models address this issue by injecting stochastic latent variables into the model and optimizing using amortized variational inference to learn the latent variables \citep{srnn,zforcing}. In particular, we use a variational RNN (VRNN \citep{vrnn}) as our base model (Figure \ref{fig:vrnn}), but we emphasize that our approach is compatible with other sequential generative models as well. A VRNN is essentially a variational autoencoder (VAE) conditioned on the hidden state of an RNN and is trained by maximizing the (sequential) evidence lower-bound (ELBO): \eq{ \mathbb{E}_{q_{\phi}(\tb{z}_{\leq T} \mid \tb{x}_{\leq T})} \Bigg[ \sum_{t=1}^T \log p_{\theta}(\tb{x}_t \mid \tb{z}_{\leq t}, \tb{x}_{<t}) - D_{KL} \Big( q_{\phi}(\tb{z}_t \mid \tb{x}_{\leq t}, \tb{z}_{<t}) || p_{\theta}(\tb{z}_t \mid \tb{x}_{<t}, \tb{z}_{<t}) \Big) \Bigg]. \label{eq:vrnn_elbo} } Eq. (\ref{eq:vrnn_elbo}) is a lower-bound of the log-likelihood in Eq. (\ref{eq:condprobs}) and can be interpreted as the VAE ELBO summed over each timestep $t$. We refer to appendix \ref{app:sequential} for more details of VAEs and VRNNs. \section{Hierarchical Framework using Macro-intent{}s} \label{sec:approach} In our problem setting, we assume that each sequence $\tb{x}_{\leq T}$ consists of the trajectories of $K$ coordinating agents. That is, we can decompose each $\tb{x}_{\leq T}$ into $K$ trajectories: $\tb{x}_{\leq T} = \{ \tb{x}_{\leq T}^1, \dots, \tb{x}_{\leq T}^K \}$. For example, the sequence in Figure \ref{fig:bball_examples} can be decomposed into the trajectories of $K = 5$ basketball players. Assuming conditional independence between the agent states $\tb{x}_t^k$ given state history $\tb{x}_{<t}$, we can factorize the maximum log-likelihood objective in Eq. (\ref{eq:condprobs}) even further: \eq{ \theta^* = \argm{\theta} \sum_{\tb{x}_{\leq T} \in \mathcal{D}} \sum_{t=1}^T \sum_{k=1}^K \log p_{\theta_k} (\tb{x}_t^k | \tb{x}_{<t}). \label{eq:nll_final} } Naturally, there are two baseline approaches in this setting: \begin{enumerate} \item Treat the data as a single-agent trajectory and train a single model: $\theta = \theta_1 = \cdots = \theta_K$. \item Train independent models for each agent: $\theta = \{ \theta_1, \dots, \theta_K \}$. \end{enumerate} As we empirically verify in Section \ref{sec:experiments}, VRNN models using these two approaches have difficulty learning representations of the data that generalize well over long time horizons, and capturing the coordination inherent in multi-agent trajectories. Our solution introduces a hierarchical structure of \emph{macro-intent{}s} obtained via \emph{labeling functions} to effectively learn low-dimensional (distributional) representations of the data that extend in both time and space for multiple coordinating agents. \textbf{Defining macro-intent{}s.} We assume there exists shared latent variables called macro-intent{}s that: 1) provide a tractable way to capture coordination between agents; 2) encode long-term intents of agents and enable long-term planning at a higher-level timescale; and 3) compactly represent some low-dimensional structure in an exponentially large multi-agent state space. \begin{wrapfigure}{r}{0.24\columnwidth} \vspace{-15pt} \centering \includegraphics[width=0.95\linewidth]{figs/ex_macro.png} \vspace{-10pt} \caption{Macro-intent{}s (boxes) for two players.} \label{fig:macrogoals} \vspace{-20pt} \end{wrapfigure} For example, Figure \ref{fig:macrogoals} illustrates macro-intent{}s for two basketball players as specific areas on the court (boxes). Upon reaching its macro-intent{} in the top-right, the blue player moves towards its next macro-intent{} in the bottom-left. Similarly, the green player moves towards its macro-intent{}s from bottom-right to middle-left. These macro-intent{}s are visible to both players and capture the coordination as they describe how the players plan to position themselves on the court. Macro-intent{}s provide a compact summary of the players' trajectories over a long time. Macro-intent{}s do not need to have a geometric interpretation. For example, macro-intent{}s in the Boids model in Figure \ref{fig:boids_examples} can be a binary label indicating friendly vs. unfriendly behavior. The goal is for macro-intent{}s to encode long-term intent and ensure that agents behave more cohesively. Our modeling assumptions for macro-intent{}s are: \begin{itemize} \item agent states $\{ \tb{x}_t^k\}$ in an episode $[t_1, t_2]$ are conditioned on some shared macro-intent{} $\tb{g}_t$, \item the start and end times $[t_1, t_2]$ of episodes can vary between trajectories, \item macro-intent{}s change slowly over time relative to the agent states: $d \tb{g}_t / dt \ll 1$, \item and due to their reduced dimensionality, we can model (near-)arbitrary dependencies between macro-intent{}s (e.g., coordination) via black box learning. \end{itemize} \textbf{Labeling functions for macro-intent{}s.} Obtaining macro-intent{} labels from experts for training is ideal, but often too expensive. Instead, our work is inspired by recent advances in weak supervision settings known as \emph{data programming}, in which multiple weak and noisy label sources called labeling functions can be leveraged to learn the underlying structure of large unlabeled datasets \citep{snorkel,bach17}. These labeling functions often compute heuristics that allow users to incorporate domain knowledge into the model. For instance, the labeling function we use to obtain macro-intent{}s for basketball trajectories computes the regions on the court in which players remain stationary; this integrates the idea that players aim to set up specific formations on the court. In general, labeling functions are simple scripts/programs that can parse and label data very quickly, hence the name \emph{programmatic weak supervision}. Other approaches that try to learn macro-intent{}s in a fully unsupervised learning setting can encounter difficulties that have been previously noted, such as the importance of choosing the correct prior and approximate posterior \citep{normalizingflow} and the interpretability of learned latent variables \citep{vlae}. We find our approach using labeling functions to be much more attractive, as it outperforms other baselines by generating samples of higher quality, while also avoiding the engineering required to address the aforementioned difficulties. \textbf{Hierarchical model with macro-intent{}s} Our hierarchical model uses an intermediate layer to model macro-intent{}, so our agent VRNN-models becomes: \eq{p_{\theta_k}(\tb{x}_t^k | \tb{x}_{< t}) = \varphi^k(\tb{z}_t^k, \tb{h}_{t-1}^k, \tb{g}_t), \label{eq:vrnn_macro}} where $\varphi^k$ maps to a distribution over states, $\tb{z}_t^k$ is the VRNN latent variable, $\tb{h}_t^k$ is the hidden state of an RNN that summarizes the trajectory up to time $t$, and $\tb{g}_t$ is the shared macro-intent{} at time $t$. Figure \ref{fig:magnet} shows our hierarchical model, which samples macro-intent{}s during generation rather than using only ground-truth macro-intent{}s. Here, we train an RNN-model to sample macro-intent{}s: \eq{ \label{eq:macro_policy} p(\tb{g}_t | \tb{g}_{< t}) = \varphi_g(\tb{h}_{g,t-1}, \tb{x}_{t-1}), } where $\varphi^g$ maps to a distribution over macro-intent{}s and $\tb{h}_{g,t-1}$ summarizes the history of macro-intent{}s up to time $t$. We condition the macro-intent{} model on previous states $\tb{x}_{t-1}$ in Eq. (\ref{eq:macro_policy}) and generate next states by first sampling a macro-intent{} $\tb{g}_t$, and then sampling $\tb{x}_t^k$ conditioned on $\tb{g}_t$ (see Figure \ref{fig:magnet}). Note that all agent-models for generating $\tb{x}_t^k$ share the same macro-intent{} variable $\tb{g}_t$. This is core to our approach as it induces coordination between agent trajectories (see Section \ref{sec:experiments}). We learn our agent-models by maximizing the VRNN objective from Eq (\ref{eq:vrnn_elbo}) conditioned on the shared $\tb{g}_t$ variables while independently learning the macro-intent{} model via supervised learning by maximizing the log-likelihood of macro-intent{} labels obtained programmatically. \begin{wrapfigure}{R}{0.45\linewidth} \vspace{-45pt} \begin{subfigure}[t]{0.48\linewidth} \centering \includegraphics[width=\columnwidth]{figs/vrnn.pdf} \caption{VRNN} \label{fig:vrnn} \end{subfigure} \begin{subfigure}[t]{0.48\linewidth} \centering \includegraphics[width=\columnwidth]{figs/magnet.pdf} \caption{Our model} \label{fig:magnet} \end{subfigure} \caption{Depicting VRNN and our model. Circles are stochastic and diamonds are deterministic. macro-intent{} $\tb{g}_t$ is shared across agents. In principle, any generative model can be used in our framework.} \label{fig:graphicalmodel} \vspace{-30pt} \end{wrapfigure} \section{Experiments} \label{sec:experiments} We first apply our approach on generating offensive team basketball gameplay (team with possession of the ball), and then on a synthetic Boids model dataset. We present both quantitative and qualitative experimental results. Our quantitative results include a user study comparison with professional sports analysts, who significantly preferred basketball rollouts generated from our approach to standard baselines. Examples from the user study and videos of generated rollouts can be seen in our demo video.\footnote{Demo video: \url{https://youtu.be/0q1j22yMipY}} Our qualitative results demonstrate the ability of our approach to generate high-quality rollouts under various conditions. \subsection{Experimental Setup for Basketball} \label{subsec:exp_setup} \textbf{Training data.} Each demonstration in our data contains trajectories of $K = 5$ players on the left half-court, recorded for $T = 50$ timesteps at 6 Hz. The offensive team has possession of the ball for the entire sequence. $\tb{x}_t^k$ are the coordinates of player $k$ at time $t$ on the court ($50 \times 94$ feet). We normalize and mean-shift the data. Players are ordered based on their relative positions, similar to the role assignment in \citep{lucey2013representing}. There are 107,146 training and 13,845 test examples. We ignore the defensive players and the ball to focus on capturing the coordination and multimodality of the offensive team. In principle, we can provide the defensive positions as conditional input for our model and update the defensive positions using methods such as \citep{corrMAimlearn}. We leave the task of modeling the ball and defense for future work. \textbf{Macro-intent{} labeling function.} We extract weak macro-intent{} labels $\hat{\tb{g}}_t^k$ for each player $k$ as done in \citep{stephan}. We segment the left half-court into a $10 \times 9$ grid of $5$ft $\times 5$ft boxes. The weak macro-intent{} $\hat{\tb{g}}_t^k$ at time $t$ is a 1-hot encoding of dimension 90 of the next box in which player $k$ is stationary (speed $\| \tb{x}_{t+1}^k - \tb{x}_t^k \|_2$ below a threshold). The shared global macro-intent{} $\tb{g}_t$ is the concatenation of individual macro-intent{}s. Figure \ref{fig:macrogoal_dist} shows the distribution of macro-intent{}s for each player. We refer to this labeling function as \texttt{LF-stationary} (pseudocode in appendix \ref{app:code}). \begin{figure*}[t] \vskip 0.05in \begin{center} \includegraphics[width=.19\columnwidth]{figs/macros_p1.png} \includegraphics[width=.19\columnwidth]{figs/macros_p2.png} \includegraphics[width=.19\columnwidth]{figs/macros_p3.png} \includegraphics[width=.19\columnwidth]{figs/macros_p4.png} \includegraphics[width=.19\columnwidth]{figs/macros_p5.png} \caption{Distribution of weak macro-intent{} labels extracted for each player from the training data. Color intensity corresponds to frequency of macro-intent{} label. Players are ordered by their relative positions on the court, which can be seen from the macro-intent{} distributions.} \label{fig:macrogoal_dist} \end{center} \vspace{-10pt} \end{figure*} \textbf{Model details.} We model each latent variable $\tb{z}_t^k$ as a multivariate Gaussian with diagonal covariance of dimension 16. All output models are implemented with memory-less 2-layer fully-connected neural networks with a hidden layer of size 200. Our agent-models sample from a multivariate Gaussian with diagonal covariance while our macro-intent{} models sample from a multinomial distribution over the macro-intent{}s. All hidden states ($\tb{h}_{g,t}, \tb{h}_t^1, \dots \tb{h}_t^K$) are modeled with 200 2-layer GRU memory cells each. We maximize the log-likelihood/ELBO with stochastic gradient descent using the Adam optimizer \citep{adam} and a learning rate of 0.0001. \textbf{Baselines.} We compare with 5 baselines that do not use macro-intent{}s from labeling functions: \newpage \begin{enumerate} \item \textbf{RNN-gauss:} RNN without latent variables using 900 2-layer GRU cells as hidden state. \item \textbf{VRNN-single:} VRNN in which we concatenate all player positions together ($K=1$) with 900 2-layer GRU cells for the hidden state and a 80-dimensional latent variable. \item \textbf{VRNN-indep:} VRNN for each agent with 250 2-layer GRUs and 16-dim latent variables. \item \textbf{VRNN-mixed:} Combination of VRNN-single and VRNN-indep. Shared hidden state of 600 2-layer GRUs is fed into decoders with 16-dim latent variables for each agent. \item\textbf{VRAE-mi:} VRAE-style architecture \citep{vrae} that maximizes the mutual information between $\tb{x}_{\leq T}$ and macro-intent{}. We refer to appendix \ref{app:mi} for details. \end{enumerate} \subsection{Quantitative Evaluation for Basketball} \begin{figure*}[t] \begin{center} \begin{minipage}[b]{0.48\linewidth} \begin{center} \begin{sc} \resizebox{.8\linewidth}{!} \begin{tabular}{l|c|c} \tb{Model} & \tb{Basketball} & \tb{Boids} \\ \hline RNN-gauss & 1931 & 2414 \\ VRNN-single & $\geq$ 2302 & $\geq$ 2417 \\ VRNN-indep & $\geq$ 2360 & $\geq$ 2385 \\ VRNN-mixed & $\geq$ 2323 & $\geq$ 2204 \\ VRAE-mi & $\geq$ 2349 & $\geq$ 2331 \\ \hline Ours & $\geq$ \tb{2362} & $\geq$ \tb{2428} \end{tabular} } \end{sc} \captionof{table}{Average log-likelihoods per test sequence. ''$\geq$'' indicates ELBO of log-likelihood. Our hierarchical model achieves higher log-likelihoods than baselines for both datasets.} \label{tab:ll} \end{center} \end{minipage} \hspace{0.01\linewidth} \begin{minipage}[b]{0.48\linewidth} \begin{center} \begin{sc} \resizebox{\linewidth}{!} \begin{tabular}{c|c|c} \tb{vs. Model} & \tb{Win/Tie/Loss} & \tb{Avg Gain} \\ \hline vs. VRNN-single & 25/0/0 & 0.57 \\ vs. VRNN-indep & 15/4/6 & 0.23 \end{tabular} } \end{sc} \captionof{table}{Basketball preference study results. Win/Tie/Loss indicates how often our model is preferred over baselines (25 comparisons per baseline). Gain is computed by scoring +1 when our model is preferred and -1 otherwise. Results are 98\% significant using a one-sample t-test.} \label{tab:study} \end{center} \end{minipage} \end{center} \vskip -0.1in \end{figure*} \begin{figure}[t] \begin{center} \begin{subfigure}[t]{0.49\columnwidth} \centering \includegraphics[width=.49\columnwidth]{figs/traj_single.png} \includegraphics[width=.49\columnwidth]{figs/traj_indep.png} \caption{Baseline rollouts of representative quality. \tb{Left}: VRNN-single. \tb{Right}: VRNN-indep. Common problems in baseline rollouts include players moving out of bounds or in the wrong direction. Players do not appear to behave cohesively as a team. } \label{fig:baselines} \end{subfigure} \hfill \begin{subfigure}[t]{0.49\columnwidth} \centering \includegraphics[width=.49\columnwidth]{figs/traj_magnet.png} \includegraphics[width=.49\columnwidth]{figs/traj_macro.png} \caption{\tb{Left}: Rollout from our model. All players remain in bounds. \tb{Right}: Corresponding macro-intent{}s for left rollout. Macro-intent{} generation is stable and suggests that the team is creating more space for the blue player (perhaps setting up an isolation play).} \label{fig:magnet_rollouts} \end{subfigure} \caption{Rollouts from baselines and our model starting from black dots, generated for 40 timesteps after an initial burn-in period of 10 timesteps (marked by dark shading). An interactive demo of our hierarchical model is available at: \url{http://basketball-ai.com/}.} \label{fig:rollouts} \end{center} \vskip -0.1in \end{figure} \textbf{Log-likelihood.} Table \ref{tab:ll} reports the average log-likelihoods on the test data. Our approach outperforms RNN-gauss and is comparable with other baselines. However, higher log-likelihoods do not necessarily indicate higher quality of generated samples \citep{genmodeleval}. As such, we also assess using other means, such as human preference studies and auxiliary statistics. \textbf{Human preference study.} We recruited 14 professional sports analysts as judges to compare the quality of rollouts. Each comparison animates two rollouts, one from our model and another from a baseline. Both rollouts are burned-in for 10 timesteps with the same ground-truth states from the test set, and then generated for the next 40 timesteps. Judges decide which of the two rollouts looks more realistic. Table \ref{tab:study} shows the results from the preference study. We tested our model against two baselines, VRNN-single and VRNN-indep, with 25 comparisons for each. All judges preferred our model over the baselines with 98\% statistical significance. These results suggest that our model generates rollouts of significantly higher quality than the baselines. \begin{SCtable} \resizebox{.6\linewidth}{!}{ \begin{tabular}{l|c|c|c} \tb{Model} & \tb{Speed (ft)} & \tb{Distance (ft)} & \tb{OOB (\%)} \\ \hline RNN-gauss & 3.05 & 149.57 & 46.93 \\ VRNN-single & 1.28 & 62.67 & 45.67 \\ VRNN-indep & 0.89 & 43.78 & 33.78 \\ VRNN-mixed & 0.91 & 44.80 & 27.19 \\ VRAE-mi & 0.98 & 48.25 & 20.09 \\ \hline Ours (LF-window50) & 0.99 & 48.53 & 28.84 \\ Ours (LF-window25) & 0.87 & 42.99 & \tb{14.53} \\ \tb{Ours (LF-stationary)} & \tb{0.79} & \tb{38.92} & 15.52 \\ \hline Ground-truth & 0.77 & 37.78 & 2.21 \end{tabular} } \caption{Domain statistics of 1000 basketball trajectories generated from each model: average speed, average distance traveled, and \% of frames with players out-of-bounds (OOB). Trajectories from our models using programmatic weak supervision match the closest with the ground-truth. See appendix \ref{app:code} for labeling function pseudocode.} \label{tab:statistics} \end{SCtable} \textbf{Domain statistics.} Finally, we compute several basketball statistics (average speed, average total distance traveled, \% of frames with players out-of-bounds) and summarize them in Table \ref{tab:statistics}. Our model generates trajectories that are most similar to ground-truth trajectories with respect to these statistics, indicating that our model generates significantly more realistic behavior than all baselines. \textbf{Choice of labeling function.} In addition to \texttt{LF-stationary}, we also assess the quality of our approach using macro-intent{}s obtained from different labeling functions. \texttt{LF-window25} and \texttt{LF-window50} labels macro-intent{}s as the last region a player resides in every window of 25 and 50 timesteps respectively (pseudocode in appendix \ref{app:code}). Table \ref{tab:statistics} shows that domain statistics from our models using programmatic weak supervision match closer to the ground-truth with more informative labeling functions (\texttt{LF-stationary} $>$ \texttt{LF-window25} $>$ \texttt{LF-window50}). This is expected, since \texttt{LF-stationary} provides the most information about the structure of the data. \subsection{Qualitative Evaluation of Generated Rollouts for Basketball} \label{sect:qualeval} We next conduct a qualitative visual inspection of rollouts. Figure \ref{fig:rollouts} shows rollouts generated from VRNN-single, VRNN-indep, and our model by sampling states for 40 timesteps after an initial burn-in period of 10 timesteps with ground-truth states from the test set. An interactive demo to generate more rollouts from our hierarchical model can be found at: \url{http://basketball-ai.com/}. Common problems in baseline rollouts include players moving out of bounds or in the wrong direction (Figure \ref{fig:baselines}). These issues tend to occur at later timesteps, suggesting that the baselines do not perform well over long horizons. One possible explanation is due to compounding errors \citep{dagger}: if the model makes a mistake and deviates from the states seen during training, it is likely to make more mistakes in the future and generalize poorly. On the other hand, generated rollouts from our model are more robust to the types of errors made by the baselines (Figure \ref{fig:magnet_rollouts}). \begin{figure}[t] \begin{center} \begin{subfigure}[t]{0.49\columnwidth} \centering \includegraphics[width=.49\columnwidth]{figs/traj_multi.png} \includegraphics[width=.49\columnwidth]{figs/traj_set.png} \caption{10 rollouts of the green player ($\blacktriangledown$) with a burn-in period of 20 timesteps. \tb{Left}: The model generates macro-intent{}s. \tb{Right}: We ground the macro-intent{}s at the bottom-left. In both, we observe a multimodal distribution of trajectories. } \label{fig:multi_rollouts} \end{subfigure} \hfill \begin{subfigure}[t]{0.49\columnwidth} \centering \includegraphics[width=.49\columnwidth]{figs/coord_pre.png} \includegraphics[width=.49\columnwidth]{figs/coord_post.png} \caption{The distribution of macro-intent{}s sampled from 20 rollouts of the green player changes in response to the change in red trajectories and macro-intent{}s. This suggests that macro-intent{}s encode and induce coordination between multiple players.} \label{fig:macro_dist} \end{subfigure} \caption{Rollouts from our model demonstrating the effectiveness of macro-intent{}s in generating coordinated multi-agent trajectories. Blue trajectories are fixed and ($\bullet$) indicates initial positions.} \label{fig:macro_rollouts} \end{center} \vskip -0.1in \end{figure} \textbf{Macro-intent{}s induce multimodal and interpretable rollouts.} Generated macro-intent{}s allow us to intepret the intent of each individual player as well as a global team strategy (e.g. setting up a specific formation on the court). We highlight that our model learns a multimodal generating distribution, as repeated rollouts with the same burn-in result in a dynamic range of generated trajectories, as seen in Figure \ref{fig:multi_rollouts} Left. Furthermore, Figure \ref{fig:multi_rollouts} Right demonstrates that grounding macro-intent{}s during generation instead of sampling them allows us to control agent behavior. \textbf{Macro-intent{}s induce coordination.} Figure \ref{fig:macro_dist} illustrates how the macro-intent{}s encode coordination between players that results in realistic rollouts of players moving cohesively. As we change the trajectory and macro-intent{} of the red player, the distribution of macro-intent{}s generated from our model for the green player changes such that the two players occupy different areas of the court. \subsection{Synthetic Experiments: Boids Model of Schooling Behavior} To illustrate the generality of our approach, we apply our model to a simplified version of the Boids model \citep{reynolds} that produces realistic trajectories of schooling behavior. We generate trajectories for 8 agents for 50 frames. The agents start in fixed positions around the origin with initial velocities sampled from a unit Gaussian. Each agent's velocity is then updated at each timestep: \eq{\tb{v}_{t+1} = \beta\tb{v}_t + \beta(c_1 \tb{v}_{\text{coh}} + c_2 \tb{v}_{\text{sep}} + c_3 \tb{v}_{\text{ali}} + c_4\tb{v}_{\text{ori}}).} Full details of the model can be found in Appendix \ref{app:boids}. We randomly sample the sign of $c_1$ for each trajectory, which produces two distinct types of behaviors: \emph{friendly agents} ($c_1 > 0$) that like to group together, and \emph{unfriendly agents} ($c_1 < 0$) that like to stay apart (see Figure \ref{fig:boids_examples}). We also introduce more stochasticity into the model by periodically updating $\beta$ randomly. Our labeling function thresholds the average distance to an agent's closest neighbor (see last plot in Figure \ref{fig:boids}). This is equivalent to using the sign of $c_1$ as our macro-intent{}s, which indicates the type of behavior. Note that unlike our macro-intent{}s for the basketball dataset, these macro-intent{}s are simpler and have no geometric interpretation. All models have similar average log-likelihoods on the test set in Table \ref{tab:ll}, but our hierarchical model can capture the true generating distribution much better than the baselines. For example, Figure \ref{fig:boids} depicts the histograms of average distances to an agent's closest neighbor in trajectories generated from all models and the ground-truth. Our model more closely captures the two distinct modes in the ground-truth (friendly, small distances, left peak vs. unfriendly, large distances, right peak) whereas the baselines fail to distinguish them. \begin{figure}[t] \begin{center} \includegraphics[width=\columnwidth]{figs/boids.png} \vspace{-.07in} \caption{Synthetic Boids experiments. Showing histograms (horizontal axis: distance; vertical: counts) of average distance to an agent's closest neighbor in 5000 rollouts. Our hierarchical model more closely captures the two distinct modes for friendly (small distances, left peak) vs. unfriendly (large distances, right peak) behavior compared to baselines, which do not learn to distinguish them.} \label{fig:boids} \end{center} \vspace{-20pt} \end{figure} \subsection{Inspecting the Hierarchical Model Class} \textbf{Output distribution for states.} The outputs of all models (including baselines) sample from a multivariate Gaussian with diagonal covariance. We also experimented with sampling from a mixture of $2$, $3$, $4$, and $8$ Gaussian components, but discovered that the models would always learn to assign all the weight on a single component and ignore the others. The variance of the active component is also very small. This is intuitive because sampling with a large variance at every timestep would result in noisy trajectories and not the smooth ones that we see in Figures \ref{fig:rollouts}, \ref{fig:multi_rollouts}. \textbf{Choice of macro-intent{} model.} In principle, we can use more expressive generative models, like a VRNN, to model macro-intent{}s over richer macro-intent{} spaces in Eq. (\ref{eq:macro_policy}). In our case, we found that an RNN was sufficient in capturing the distribution of macro-intent{}s shown in Figure \ref{fig:macrogoal_dist}. The RNN learns multinomial distributions over macro-intent{}s that are peaked at a single macro-intent{} and relatively static through time, which is consistent with the macro-intent{} labels that we extracted from data. Latent variables in a VRNN had minimal effect on the multinomial distribution. \textbf{Maximizing mutual information isn't effective.} The learned macro-intent{}s in our fully unsupervised VRAE-mi model do not encode anything useful and are essentially ignored by the model. In particular, the model learns to match the approximate posterior of macro-intent{}s from the encoder with the discriminator from the mutual information lower-bound. This results in a lack of diversity in rollouts as we vary the macro-intent{}s during generation. We refer to appendix \ref{app:mi} for examples. \section{Discussion} The macro-intent{}s labeling functions used in our experiments are relatively simple. For instance, rather than simply using location-based macro-intent{}s, we can also incorporate complex interactions such as ``pick and roll''. Another future direction is to explore how to adapt our method to different domains, e.g., defining a macro-intent{} representing ``argument'' for a dialogue between two agents, or a macro-intent{} representing ``refrain'' for music generation for ``coordinating instruments'' \citep{thickstun2017learning}. We have shown that weak macro-intent{} labels extracted using simple domain-specific heuristics can be effectively used to generate high-quality coordinated multi-agent trajectories. An interesting direction is to incorporate multiple labeling functions, each viewed as noisy realizations of true macro-intent{}s, similar to \citep{data_programming,snorkel,bach17}. \section{Sequential Generative Models} \label{app:sequential} \paragraph{Recurrent neural networks.} A RNN models the conditional probabilities in Eq. (\ref{eq:condprobs}) with a hidden state $\tb{h}_t$ that summarizes the information in the first $t-1$ timesteps: \eq{ p_{\theta}(\tb{x}_t | \tb{x}_{< t}) = \varphi(\tb{h}_{t-1}), \quad \quad \tb{h}_t = f(\tb{x}_t, \tb{h}_{t-1}), } where $\varphi$ maps the hidden state to a probability distribution over states and $f$ is a deterministic function such as LSTMs \citep{lstm} or GRUs \citep{gru}. RNNs with simple output distributions often struggle to capture highly variable and structured sequential data. Recent work in sequential generative models address this issue by injecting stochastic latent variables into the model and using amortized variational inference to infer latent variables from data. \paragraph{Variational Autoencoders.} A variational autoencoder (VAE) \citep{vae} is a generative model for non-sequential data that injects latent variables $\tb{z}$ into the joint distribution $p_{\theta}(\tb{x}, \tb{z})$ and introduces an inference network parametrized by $\phi$ to approximate the posterior $q_{\phi}(\tb{z} \mid \tb{x})$. The learning objective is to maximize the evidence lower-bound (ELBO) of the log-likelihood with respect to the model parameters $\theta$ and $\phi$: \eq{\mathbb{E}_{q_{\phi}(\tb{z} | \tb{x})}\brcksq{\log p_{\theta}(\tb{x} | \tb{z})} - D_{KL}(q_{\phi}(\tb{z} \mid \tb{x}) || p_{\theta}(\tb{z}))} The first term is known as the reconstruction term and can be approximated with Monte Carlo sampling. The second term is the Kullback-Leibler divergence between the approximate posterior and the prior, and can be evaluated analytically (i.e. if both distributions are Gaussian with diagonal covariance). The inference model $q_{\phi}(\tb{z} \mid \tb{x})$, generative model $p_{\theta}(\tb{x} \mid \tb{z})$, and prior $p_{\theta}(\tb{z})$ are often implemented with neural networks. \paragraph{Variational RNNs.} VRNNs combine VAEs and RNNs by conditioning the VAE on a hidden state $\tb{h}_t$ (see Figure \ref{fig:vrnn}): \eq{ p_{\theta}(\tb{z}_t | \tb{x}_{<t}, \tb{z}_{<t}) & = \varphi_{\text{prior}}(\tb{h}_{t-1}) & \text{(prior)} \label{eq:vrnn_prior} \\ q_{\phi}(\tb{z}_t | \tb{x}_{\leq t}, \tb{z}_{<t}) & = \varphi_{\text{enc}}(\tb{x}_t, \tb{h}_{t-1}) & \text{(inference)} \\ p_{\theta}(\tb{x}_t | \tb{z}_{\leq t}, \tb{x}_{<t}) & = \varphi_{\text{dec}}(\tb{z}_t, \tb{h}_{t-1}) & \text{(generation)} \\ \tb{h}_t & = f(\tb{x}_t, \tb{z}_t, \tb{h}_{t-1}). & \text{(recurrence)} \label{eq:vrnn_state} } VRNNs are also trained by maximizing the ELBO, which in this case can be interpreted as the sum of VAE ELBOs over each timestep of the sequence: \eq{ \mathbb{E}_{q_{\phi}(\tb{z}_{\leq T} \mid \tb{x}_{\leq T})} \Bigg[ \sum_{t=1}^T \log p_{\theta}(\tb{x}_t \mid \tb{z}_{\leq T}, \tb{x}_{<t}) - D_{KL} \Big( q_{\phi}(\tb{z}_t \mid \tb{x}_{\leq T}, \tb{z}_{<t}) || p_{\theta}(\tb{z}_t \mid \tb{x}_{<t}, \tb{z}_{<t}) \Big) \Bigg] \label{eq:vrnn_elbo_appendix} } Note that the prior distribution of latent variable $\tb{z}_t$ depends on the history of states and latent variables (Eq. (\ref{eq:vrnn_prior})). This temporal dependency of the prior allows VRNNs to model complex sequential data like speech and handwriting \citep{vrnn}. \section{Boids Model Details} \label{app:boids} We generate 32,768 training and 8,192 test trajectories. Each agent's velocity is updated as: \eq{\tb{v}_{t+1} = \beta\tb{v}_t + \beta(c_1 \tb{v}_{\text{coh}} + c_2 \tb{v}_{\text{sep}} + c_3 \tb{v}_{\text{ali}} + c_4\tb{v}_{\text{ori}}),} \begin{itemize} \item $\tb{v}_{\text{coh}}$ is the normalized cohesion vector towards an agent's local neighborhood (radius 0.9) \item $\tb{v}_{\text{sep}}$ is the normalized vector away from an agent's close neighborhood (radius 0.2) \item $\tb{v}_{\text{ali}}$ is the average velocity of other agents in a local neighborhood \item $\tb{v}_{\text{ori}}$ is the normalized vector towards the origin \item $(c_1, c_2, c_3, c_4) = (\pm 1, 0.1, 0.2, 1)$ \item $\beta$ is sampled uniformly at random every 10 frames in range $[0.8, 1.4]$ \end{itemize} \section{Maximizing Mutual Information} \label{app:mi} We ran experiments to see if we can learn meaningful macro-intent{}s in a fully unsupervised fashion by maximizing the mutual information between macro-intent{} variables and trajectories $\tb{x}_{\leq T}$. We use a VRAE-style model from \citep{vrae} in which we encode an entire trajectory into a latent macro-intent{} variable $\tb{z}$, with the idea that $\tb{z}$ should encode global properties of the sequence. The corresponding ELBO is: \eq{ \mathcal{L}_1 = \mathbb{E}_{q_{\phi}(\tb{z} \mid \tb{x}_{\leq T})} \Bigg[ \sum_{t=1}^T \sum_{k=1}^K \log p_{\theta_k}(\tb{x}_t^k \mid \tb{x}_{<t}, \tb{z}) \Bigg] - D_{KL} \Big( q_{\phi}(\tb{z} \mid \tb{x}_{\leq T}) || p_{\theta}(\tb{z}) \Big), \label{eq:svae_elbo_appendix} } where $p_{\theta}(\tb{z})$ is the prior, $q_{\phi}(\tb{z} \mid \tb{x}_{\leq T})$ is the encoder, and $p_{\theta_k}(\tb{x}_t^k \mid \tb{x}_{<t}, \tb{z})$ are decoders per agent. It is intractable to compute the mutual information between $\tb{z}$ and $\tb{x}_{\leq T}$ exactly, so we introduce a discriminator $q_{\psi}(\tb{z} \mid \tb{x}_{\leq T})$ and use the following variational lower-bound of mutual information: \eq{ \mathcal{L}_2 = \mathcal{H}(\tb{z}) + \mathbb{E}_{p_{\theta}(\tb{x}_{\leq T} \mid \tb{z})} \Big[ \mathbb{E}_{q_{\phi}(\tb{z} \mid \tb{x}_{\leq T})} \big[ \log q_{\psi}(\tb{z} \mid \tb{x}_{\leq T}) \big] \Big] \leq MI(\tb{x}_{\leq T}, \tb{z}). \label{eq:mi_lowerbound} } We jointly maximize $\mathcal{L}_1 + \lambda \mathcal{L}_2$ wrt. model parameters $(\theta, \phi, \psi)$, with $\lambda = 1$ in our experiments. \paragraph{Categorical vs. real-valued macro-intent{} \tb{z}.} When we train an 8-dimensional categorical macro-intent{} variable with a uniform prior (using gumbel-softmax trick \citep{gumbel}), the average distribution from the encoder matches the discriminator but not the prior (Figure \ref{fig:mi_categorical}). When we train a 2-dimensional real-valued macro-intent{} variable with a standard Gaussian prior, the learned model generates trajectories with limited variability as we vary the macro-intent{} variable (Figure \ref{fig:mi_examples}). \begin{figure}[t] \begin{center} \includegraphics[width=\columnwidth]{figs/mi_categorical.png} \vspace{-.3in} \caption{Average distribution of 8-dimensional categorical macro-intent{} variable. The encoder and discriminator distributions match, but completely ignore the uniform prior distribution.} \label{fig:mi_categorical} \end{center} \end{figure} \begin{figure}[t] \begin{center} \includegraphics[width=.19\columnwidth]{figs/mutual/05.png} \includegraphics[width=.19\columnwidth]{figs/mutual/06.png} \includegraphics[width=.19\columnwidth]{figs/mutual/07.png} \includegraphics[width=.19\columnwidth]{figs/mutual/08.png} \includegraphics[width=.19\columnwidth]{figs/mutual/09.png} \includegraphics[width=.19\columnwidth]{figs/mutual/15.png} \includegraphics[width=.19\columnwidth]{figs/mutual/16.png} \includegraphics[width=.19\columnwidth]{figs/mutual/17.png} \includegraphics[width=.19\columnwidth]{figs/mutual/18.png} \includegraphics[width=.19\columnwidth]{figs/mutual/19.png} \caption{Generated trajectories of green player conditioned on fixed blue players given various 2-dimensional macro-intent{} variables with a standard Gaussian prior. \tb{Left to Right columns}: values of 1st dimension in $\{ -1, -0.5, 0, 0.5, 1\}$. \tb{Top row}: 2nd dimension equal to $-0.5$. \tb{Bottom row}: 2nd dimension equal to $0.5$. We see limited variability as we change the macro-intent{} variable.} \label{fig:mi_examples} \end{center} \vskip -0.1in \end{figure} \section{Labeling Functions for Macro-intent{}s in Basketball} \label{app:code} We define macro-intent{}s in basketball by segmenting the left half-court into a $10 \times 9$ grid of $5$ft $\times 5$ft boxes (Figure \ref{fig:macrogoals}). Algorithm \ref{alg:lf-window25} describes \texttt{LF-window25}, which computes macro-intent{}s based on last positions in 25-timestep windows (\texttt{LF-window50} is similar). Algorithm \ref{alg:lf-stationary} describes \texttt{LF-stationary}, which computes macro-intent{}s based on stationary positions. For both, \texttt{Label-macro-intent{}}($\tb{x}_t^k$) returns the 1-hot encoding of the box that contains the position $\tb{x}_t^k$. \begin{algorithm} \caption{Labeling function that computes macro-intent{}s in 25-timestep windows}\label{alg:lf-window25} \begin{algorithmic}[1] \Procedure{LF-window25}{$\tb{x}_{\leq T}$}\Comment{Trajectory $\tb{x}_{\leq T}$ of $K$ players} \State macro-intent{}s $\tb{g} \gets$ initialize array of size $(K, T, 90)$ \For{$k = 1 \dots K$} \State $\tb{g}[k,T] \gets$ \Call{Label-macro-intent{}}{$\tb{x}_T^k$}\Comment{Last timestep} \For{$t = T-1 \dots 1$} \If{(t+1) mod 25 == 0}\Comment{End of 25-timestep window} \State $\tb{g}[k,t] \gets$ \Call{Label-macro-intent{}}{$\tb{x}_t^k$} \Else \State $\tb{g}[k,t] \gets \tb{g}[k,t+1]$ \EndIf \EndFor \EndFor \State \textbf{return} $\tb{g}$ \EndProcedure \end{algorithmic} \end{algorithm} \begin{algorithm} \caption{Labeling function that computes macro-intent{}s based on stationary positions}\label{alg:lf-stationary} \begin{algorithmic}[1] \Procedure{LF-stationary}{$\tb{x}_{\leq T}$}\Comment{Trajectory $\tb{x}_{\leq T}$ of $K$ players} \State macro-intent{}s $\tb{g} \gets$ initialize array of size $(K, T, 90)$ \For{$k = 1 \dots K$} \State speed $\gets$ compute speeds of player $k$ in $\tb{x}_{\leq T}^k$ \State stationary $\gets$ speed $<$ threshold \State $\tb{g}[k,T] \gets$ \Call{Label-macro-intent{}}{$\tb{x}_T^k$}\Comment{Last timestep} \For{$t = T-1 \dots 1$} \If{stationary[t] and not stationary[t+1]}\Comment{Player $k$ starts moving} \State $\tb{g}[k,t] \gets$ \Call{Label-macro-intent{}}{$\tb{x}_t^k$} \Else\Comment{Player $k$ remains stationary} \State $\tb{g}[k,t] \gets \tb{g}[k,t+1]$ \EndIf \EndFor \EndFor \State \textbf{return} $\tb{g}$ \EndProcedure \end{algorithmic} \end{algorithm} \subsubsection*{Acknowledgments} This research is supported in part by NSF \#1564330, NSF \#1637598, and gifts from Bloomberg, Activision/Blizzard and Northrop Grumman. Dataset was provided by STATS: \url{https://www.stats.com/data-science/}.
{ "attr-fineweb-edu": 1.932617, "attr-cc_en_topic": 0, "domain": "arxiv" }
BkiUfrXxK7Tt522Wc05x
\section{Introduction} There are many examples in the real-world of agents or teams of agents aiming to optimise their performance over long periods of time. These often involve a series of multi-step games that feed into one another as well as other factors in the wider environment. Examples of this includes security games where agents aim to constantly protect facilities against attackers that are able to change their tactics and decisions \cite{paruchuri2008playing,shieh2012protect,kiekintveld2009computing}, as well as in the stock market where agents aim to continually make optimal decisions to make profits in fluid real-world environments \cite{lux1999scaling,bak1997price,kagan1995risk}. In this paper, we focus on the long term optimisation of decision-making in team sports. Specifically in games of Association Football (soccer).\footnote{Referred to as just ``football'' throughout this paper.} Although the models could be applied in a number of domains, football presents us with an interesting challenge where a team of human agents compete against other teams of agents across long periods and the success of teams is not only judged in individual games but how they perform over a season in a league format (supported with many years of real-world datasets). This means that there are a set of teams whom each season play every other team twice, both home and away. Teams are awarded points based on winning, losing or drawing and at the end of the season teams are awarded prize money and other incentives based on their points gained in comparison to all other teams in a league rankings/standings.\footnote{https://www.express.co.uk/sport/football/1300924/Premier-League-prize-money-2020-how-much-Liverpool-earn.} Past work in this area has focused on optimising performance in individual games \cite{beal2020optimising} or for extracting contribution of individual players \cite{beal2020learning,decroos2020vaep,fernandez2019decomposing}. However, to date, there is no formal model for optimising team performance and tactical decision-making over a longer period of time. Against this background, we propose a formal model for optimising the long-term performance of football teams and how they can extract knowledge from other games in the league environment. We introduce the novel notion of a \emph{fluent objective} which is a sequence of ``objective variables", each one corresponding to a particular point in the agent's planning horizon (i.e., a game in the game season). We should also clarify that these variables can take the form of a broader goal (e.g., win the league or do not get relegated). We use Markov chain Monte Carlo simulations to help look ahead into the future and allow us to set realistic achievable objectives which add more context to our tactical decision-making in individual games. We also take inspiration from observational learning \cite{borsa2019observational,bandura2008observational,jang1999ensemble} to help teams extract information from other games that happen in the environment and past games they have played themselves. This is used to identify tactical decisions that boost the chances of gaining positive results against given oppositions. As the season progresses, teams learn more as more games unfold --- we encapsulate this into our modelling. Thus, this paper advances the state of the art in the following ways: \begin{enumerate} \item We propose a mathematical model for optimising the long-term performance of human teams and apply this to the game of football. \item Using real-world data from 760 real-world football games from the past two seasons of the English Premier League (EPL), we can set the fluent objective based on accurate league simulations and further improve individual game payoffs by using knowledge from prior games. In particular, we show that we can increase teams finishing position on average by up to 2.9 ranks (out of 20). \item By using a fluent objective and prior game knowledge we are able to show an increased probability of improved long-term performance in real-world football teams (by up to 35.6\%). \end{enumerate} Our results show that by looking ahead and thinking about long-term goals, teams can add more context to the tactical decisions that are made for individual games and thus are more likely to achieve the long-term objectives that they have set. The rest of this paper is structured as follows, in Section 2 we provide a background and in Section 3 we discuss how we model long term performance. In Section 4 and 5 we discuss how we calculate the fluent objective and learn from prior games respectively. We run simulation experiments on our models in Section 6 and discuss these in Section 7. Finally, Section 8 concludes. \section{Background} In this section, we review related literature showing other examples of modelling real-world problems. We also give an overview of why long-term football tactics are important, what is involved and discuss how this is approached for individual games in \cite{beal2020optimising}. \subsection{Related Work} Here, we explore the related work to how we can model long-term flowing games such as a sports league as well and giving some background into sports tactics literature. \subsubsection{Modelling the Real-World} As far as we are aware, the notion and modelling of \emph{fluent objectives} in this paper, which allows us to optimise long-term performance, is entirely novel. However, it was inspired by work presented in situations and fluents in first-order logic and situation calculus \cite{lin2008situation}. We see this approach being used to create a model for environmental context in \cite{ranganathan2003infrastructure}. The authors model enables context awareness to help build context-aware applications. Similarly in our model, we aim to gain context of the other teams in the environments to help make decisions based on the future league standings. There are also agents reacting to situations in their environment in \cite{sim2003agents}, where agents react to the ever-changing variables in the stock market. In our work, we also aim to learn from prior games and other games that happen in the environment to gain a better understanding into what tactics work against given opponents. This is closely related to the work presented in \cite{borsa2019observational}, where the authors explore the notion of ``observation learning" which is is a type of learning that occurs as a function of observing, retaining and imitating the behaviour of another agent. This is applicable to football as if we observe another team perform well against another opponent then we may want to imitate their tactics to help us to win. Other examples of this type of work are shown in \cite{piot2013learning,russell1998learning,silver2016mastering}. \subsubsection{Sports Tactics} In the sports domain, there are examples of work focused on team tactics and decision-making in football and other team sports \cite{beal2019artificial}. In terms of long-term decision-making though the key example of agents being used to optimise this in sport is shown in \cite{matthews2012competing} which presents a successful model for competing in fantasy football games.\footnote{https://fantasy.premierleague.com/help/rules.} Here, the authors use machine learning to predict the performance of individual players and then use deep-reinforcement learning to optimise decisions on a week-by-week basis and look ahead to maximise their chances of success. By doing so, they rank in the top 1\% of human players. In our work, we can take inspiration from this in the real-world and help human coaches and managers make decisions on human footballers. We also see examples of tactical papers for sport in \cite{jordan2009optimizing} exploring different risk strategies for play-calling in American Football. As well as some key football papers to help improve human performance and identify high-performing players and pairs of players are shown in \cite{fernandez2019decomposing,decroos2020vaep,beal2020learning}. To provide more intuitions around long-term decision-making, in the next subsection we give a background to football tactics and their importance to the game as well as the league structure. \subsection{Long-Term Football Tactics}\label{subsec:tactics-background} In football, individual games are incredibly important, but what is often overlooked tactically is the impact that each game has over a longer period of time and on the overall league standings. The final league standings is the final position of all teams in a league based on the points they have gained over an $N$ game season. In a standard football league (e.g., English Premier League or German Bundesliga), across a season each team plays each other twice (once home and once away) a win is worth 3 points, a draw 1 point and a loss no points. There are huge intrinsic and financial gains to be made by finishing higher up the table and there are certain milestones that teams aim for to boost their success such as qualification for European competitions.\footnote{http://eightyfivepoints.blogspot.com/2018/03/show-me-money-how-much-is-each-premier.html.} The season is often broken down into given ``game-weeks" where all teams play a game within the week. We can therefore breakdown the season into these game-weeks as incremental steps in a game. In each week our team plays a game and a number of other games also take place. We therefore, want to maximise our own performance in our game and learn from other games for the future when we play those teams (see Figure \ref{fig:flowchart}). Therefore, in this paper we aim to model teams tactical decisions based on the overall league environment and use \emph{fluent objectives} to add context to our decisions and prior games knowledge to imitate other successful teams. In the next section, we discuss the model that this paper builds on for optimising tactical decision-making in individual games. \subsection{Modelling the Game of Football}\label{subsec:extend} The modelling presented in this paper extends on the formal model for football that is presented in \cite{beal2020optimising} for optimising the tactics in an individual game. In \cite{beal2020optimising} the authors use a multi-step game to represent the pre-match tactical decisions that are made using a Bayesian game (representing the unknowns of opposition decisions), this then feeds into the in-match decisions made which is modelled as a stochastic game (representing the score-line states in a game). Using these models teams are able to optimise their tactics by up to 16.1\%. In this paper, we extend that model by adding context of the wider environment of the league. By using our fluent objective and prior game weightings we can further optimise these tactics to no only improve the chances of a positive result in the individual game but improve the long-term performance of the team in the league standings. \section{Modelling Long Term Team Performance} \begin{figure*} \centering \includegraphics[scale=0.6]{Images/football-flow.pdf} \caption{Sequence of Multi-Games Across a Season} \label{fig:flowchart} \end{figure*} In this section, we discuss how we model the long-term performance of football teams over a season and identify how we can use fluent objectives and learn from games to optimise long-term performance of a team. At the start of a given season or competition, a team will have some aim of how well they want to do and what they want to achieve. In a knockout style cup competition such as the FIFA World Cup or English FA Cup, every team is aiming to win every game as this is the only way to win overall; there are no prizes for second place. Across a full season, however, there are a number of objectives that a team can have that will help maximise their financial gains and reputation of the team. For example, as discussed in Section \ref{subsec:tactics-background}, in the English Premier League there is always only one winner but there are also benefits to finishing in the top 4, top 7 and avoiding finishing in the bottom 3. We therefore, model an entire season in football that could be applied to help optimise teams' long-term performance in any league across the world and at any level. \subsection{Sequence of Multi-Games Across a Season} In Figure \ref{fig:flowchart} we show the structure of our model for an entire season in football. This style of model could also be applied in security games or for emergency response where we aim to optimise the performance of teams of agents in evolving environments with ever-changing objectives \cite{ramchurn2016disaster,shieh2012protect}. We build on the multi-step (Bayesian into stochastic) games for optimising single game tactics to help teams achieve their objectives in an $N$ game season. There is a sequence of steps that we highlight and show how each one feeds into the next. We also show how a teams' pre-season objective can be fed into the first game which in-turn can use this to aid the tactical decision-making process as well as the parameters we learn while playing each game (e.g., certain tactics that work well against certain teams). Both the pre-match Bayesian game and the in-match stochastic game can use the objective to help set the risk parameters and select the tactics that will best help the team in the overall environment of the league. This objective then changes as the season progresses and teams aim for different levels of achievement, therefore making this a \emph{fluent objective}; e.g., a team may have had high hopes at the start of the season of winning the league, but if they have a poor start they may have to update their objective to ensure they finish in the top 4. As we show in Figure \ref{fig:flowchart}, the pre-season objective is set as $O_0$, this then changes each game-week as the environment around the team develops, changing to $O_1$ after game-week 1, $O_2$ after game-week 2 and so on until the final in-season objective the week before the final game of the season $N-1$. The final fluent objective, $O_N$, corresponds to the overall end of season outcome ($S_O$), which we can compare to the fluent objective at each game-week to assess the team performance across the season. As discussed in Section \ref{subsec:model-obj}, the $O_x$ and $S_O$ variables might not have distinct values (i.e., maybe $O_0 = O_1$ and so on). We also consider how we can learn from the games that are played as the season progresses. As we play each game we learn something new, both about what works for our own team and what works against a given opposition. We therefore learn parameters from each game that we can carry forward through each game-week and similarly to the fluent objective we update each week. For example, we may find that when our team uses a given formation against a certain style of opponent we see better results. As we show in Figure \ref{fig:flowchart}, this is encapsulated by a \emph{prior knowledge parameter} $P$, which is updated after each game we play where $P_1$ is after game-week 1, $P_2$ after game-week 2 and so on until the penultimate game-week of the season $O_{N-1}$. We explain the precise form of the $P$ parameter in Section \ref{subsec:prior} below. Finally, we must consider the other games that are happening each week in the league environment, $\mathcal{G}_N$ is the set of other games in game-week $N$ and $\mathcal{G} = \{G_1, G_2, ..., G_z\} $ where $z$ is the number of other games played in that week. Within each game-week, all other teams also play one another, so that at the end of the season, each team has played every other team twice (once at home and once away). For example, in the EPL there are 20 teams in the league, each team plays the other 19 teams' twice which is 38 games. In the EPL there are a total of 380 games, and so there are 342 that do not involve the team that we are focused on for our optimisation. These games are observable so we can learn from each one, which in turn affects our fluent objective $O$ and what we learn after each game-week $P$. As discussed in Section \ref{subsec:tactics-background}, the outcomes of the other games affect the league table with teams gaining 3 points for a win and 1 point for a draw. We therefore must consider the other teams' performances when setting $O$. We can also observe other games tactically to learn what styles and formations work best against given teams, this is how we can learn $P$ from prior games. In the following subsections, we go into more detail regarding how we model the fluent objective $O$ and how we can learn from prior games $P$. \subsection{Fluent Objectives}\label{subsec:model-obj} At the start of each season, a team will have some objective for what they are looking to achieve in the next season. These goals are decided based on several factors such as previous season performance and money invested into the team. The goals are usually set by the owners/directors of the team and are based on their subjective opinions of how their team should perform and where they should place in the league against the other teams. The opinions of what the team should achieve then changes over the season which can drive key decisions such as a change in coach/manager for an under-performing team or investing more money into an over-performing team so they achieve a European place which comes with huge financial gains. In other settings, these type of objectives could be the defence of a given target or the rescue of a person. Our model for the fluent objective can objectively evaluate how we expect a team to perform over a season and allow teams to change their tactical decision-making based on this. There two different objectives that can be set: a more granular objective of the expected league position and an objective of what could be achieved in terms of broader incentives in the league (e.g., avoiding relegation or qualifying for European competitions). In this paper, we focused on the latter and can define the set of possible objectives as $\mathcal{O} = \{o_1, o_2, ..., o_k\}$ where $k$ is the number of different objectives. An example of the set of objectives --- more accurately, the set of values that an $O_x$ objective variable can take --- in the EPL would be: \begin{itemize} \item \textbf{Winning the League ($o_1$):} Awarded to the team who finishes top of the league. \item \textbf{Qualifying for the Champions League ($o_2$):} Awarded to the top 4 teams, so in this case the objective relates to teams finishing 2nd-4th.\footnote{https://www.premierleague.com/european-qualification-explained.} \item \textbf{Qualifying for the Europa League ($o_3$):} Another European competition usually awarded to teams who finish between 5th-7th. \item \textbf{Top Half Finish ($o_4$):} The financial benefit of finishing higher in the league are huge and therefore teams often aim to finish in the top half of the table (higher than 10th).\footnote{https://www.goal.com/en-gb/news/how-much-money-do-premier-league-2019-20-winners-get/19jbauady17cw1ieojo40yextz.} \item \textbf{Avoiding Relegation ($o_5$):} The bottom 3 (18th-20th) teams in the EPL are relegated into the English Football League (EFL) Championship which is the second division of the English football leagues. \end{itemize} To set the objective we can simulate how we expect the season to unfold and create a distribution $\mathcal{D}$ that allows us to use a Maximum a Posteriori (MAP) estimation \cite{gauvain1994maximum} for the probability of the team finishing in each position. This then allows us to calculate a set of probabilities for of a team achieving each objective $\mathcal{P} = \{p(o_1), p(o_2), ..., p(o_k)\}$. We then set the $O_o$ (for a pre-season objective) as the most likely objective that can be achieved by a team that season. This process can then be re-run after each game-week is completed to give the fluent objective $O_1$ to $O_{N-1}$. Our simulation of the league will include the real-results which will get more accurate as the season progresses and we learn more about each team. This will then mean we have a fluent objective that will change as the season progresses. At the end of the season, we can compare $O_0$ to $O_{N-1}$ to the final outcome $S_O$ that the team achieves. \subsection{Learning From Prior Games}\label{subsec:prior} As well as the fluent objective, we can also improve the tactical decision-making in our Bayesian and stochastic games by adding prior knowledge $P$ that we learn after each game we play and observe. In more general terms we aim to observe and learn from other successful agents and our own actions. This could also be applicable in swarms of UAVs or imitating other agents trading in the financial markets settings. We can learn a set of weights $\mathcal{W}$ that relate to how effective given style/formation pairs (actions that are made in the multi-step games) that we select in our games are against given oppositions style/formation pairs. These weights are initially set to 1 and are then increased if found to be effective and decrease if found to be ineffective. These can be updated after each game-week and also updated from the other games that we observe. Our $P$ value is defined in Equation \ref{eq:pval}. \begin{equation}\label{eq:pval} P = \left( \begin{array}{ccccc} w_{11} & w_{12} & w_{13} & \hdots & w_{1j} \\ w_{21} & w_{22} & w_{23} & \hdots & w_{2j} \\ \vdots & \vdots & \vdots & \hdots & \vdots \\ w_{i1} & w_{i2} & w_{i3} & \hdots & w_{ij} \end{array} \right) \end{equation}\\ Where $w \in \mathcal{W}$ and $i$/$j$ is the number of possible style/formation pairs. The columns represent the style/formation pair selected by our team and the rows represent the style/formation selected by the opposition (e.g., $w_{ij}$ is how effective our style/formation pair $i$ is against an opposition using style/formation pair $j$). In the following sections, we give more details into how we calculate our fluent objective $O$ and how we can learn the weights that make up $P$. We explore how these are used in the individual football match multi-step game (discussed in Section \ref{subsec:extend}) to further optimise the tactical decision-making process. \section{Calculating the Fluent Objective} In this section, we discuss how we simulate seasons, calculate the fluent objective, and how this can be used to optimise game tactics. \subsection{Simulating Season Outcomes} When we simulate the season outcomes and calculate the distributions of where we expect the team to finish we are interested in predicting all remaining games in the season for both our team and all other teams in the league. To do this we first look at the single-game prediction which is discussed in the next subsection. \subsubsection{Single-Game Prediction} To predict the outcomes of single games in the league we use the model that is defined in \cite{beal2020optimising} which is used for calculating the single-game payoffs. The model uses the team's tactical style, potential formation and team strength to give probabilities of a team winning the game. The set of features used are: home team style, away team style, home team formation, away team formation and then team strengths are calculated by using the outputs from the model described in \cite{Dixon_Coles}. The target class is the final result of the game: home team win, away team win or a draw. Using these features, we train a multi-class classification deep neural network. The neural network is trained using stochastic gradient descent using a categorical cross-entropy loss function (Equation \ref{eq:ccelf}) and a soft-max activation function. \begin{equation}\label{eq:ccelf} -\frac{1}{N}\sum^N_{i=1}\log p_{\textit{model}} [y_i \in O_{y_i}] \end{equation} where, $N$ is the number of games that we are using to train the model and $p_{\textit{model}} [y_i \in O_{y_i}]$ is the probability that $y_i$ is in the class $O$. This model takes the given teams, possible playing styles and possible formations to estimate the probability of winning, drawing or losing the game. Using these probabilities we can simulate the outcome of the entire season, this is discussed in the next sub-section. \subsubsection{Season Simulation} To simulate the remaining games of the season, we use the real-world fixture list to ensure that the ordering of the games is correct. We then find the probability of a home win, away win and draw in each game and use a Markov chain Monte Carlo simulation \cite{vrugt2008accelerating} to simulate all remaining games and total up the points that each team will gain (3 points for a win, 1 for a draw and 0 for a loss). This works well as it emulates the randomness that we see in real-world football games. We repeat this process 100,000 times for each simulation which allows us to derive a distribution for the probability that a team will finish in each place in the league in the final standings. An example of this distribution is shown in Figure \ref{fig:hist}. \begin{figure}[h!] \centering \begin{tikzpicture} \begin{axis}[ ymin=0, ymax=25, xmin=1, xmax=20, area style, xlabel=Final League Position, ylabel=Probability (\%), width=\columnwidth-40, height=\columnwidth-150, y label style={at={(axis description cs:0.15,.5)},anchor=south}, ] \addplot+[ybar interval,mark=no] plot coordinates { (1, 0) (2, 0) (3, 0) (4, 0) (5, 1) (6, 1) (7, 2) (8, 3) (9, 6) (10, 8) (11, 11) (12, 14) (13, 17) (14, 21) (15, 20) (16, 13) (17, 7) (18, 5) (19, 3) (20, 2) }; \end{axis} \end{tikzpicture} \caption{Example League Outcome Probability Distribution.} \label{fig:hist} \end{figure} \subsection{Setting the Fluent Objective} Once we have calculated the distributions of possible place outcomes form the MCMC simulation, we use a Maximum a Posteriori (MAP) estimation \cite{gauvain1994maximum} to set the fluent objective. To do this, we can use the posterior distribution to find interval estimates of the final position for the team in the league. We use the position intervals for the objectives discussed in Section \ref{subsec:model-obj} and can find the $o_k \in \mathcal{O}$ that maximises the posterior PDF. This then sets the objective $O_n$ that is used in game-week $n$ and is updated after each game-week. \subsection{Optimising Tactics using the Fluent Objective} Once we have set the fluent objective we can now use this when optimising the team tactics in the multi-step game for optimising individual game tactics in that game-week. In the pre-match Bayesian game outlined in \cite{beal2020optimising}, Beal et al. present 3 options that can be used depending on the overall environment. Here we present modified, novel notions of these options, which now employ the fluent objective. \begin{itemize} \item \textbf{Best Response:} Used to maximise the chances of winning a game. This option is selected if a team is currently not on track to achieve their objective for the season and must win games to be able to achieve their goals. \item \textbf{Spiteful:} Used to minimise the chances of the opposition winning the game (and therefore improve your chances of drawing/winning). This option is selected if a team is well ahead of their objective and that by preventing losing the game they are more likely to stay on track for their objective across the season. \item \textbf{Expectimax:} This is a mixture of the two above and factors both into account (mathematically defined in \cite{beal2020optimising} where refered to as ``minmax").\footnote{We rename since the approach does not align with the usual meaning of the term ``minimax'' or ``minmax'' in Game Theory.} This is selected if a team is on track for their objective and is aiming to stay that way. \end{itemize} In terms of the in-match stochastic game that is also defined in \cite{beal2020optimising} there are two options that can be selected when making in-match decisions. \begin{itemize} \item \textbf{Aggressive Approach:} This is set if a team is losing/drawing a game and wants to win. It will maximise the chance of a team moving to a more positive state. Therefore, if we know that the objective is to win and gain three points we will select this approach. \item \textbf{Reserved Approach:} This is set if a team is winning/drawing and is happy with their current state. It is used to maximise the chances of staying in the current state. Therefore this is used if winning or if a point is a good result in the overall environment in relation to the objective. \end{itemize} In the next section, we move on to assess how we can learn from prior games and other games in the environment and how this can be added to our optimising decisions model. \section{Learning from Previous Games} In this section, we discuss how we can learn from completed prior games that we play and that other teams in the league play. This allows us to find formation/style combinations that work best against a given formation/style combination that an opposition team may use. To do this we learn a matrix of weights $P$ that corresponds to estimated successes of the formation/style combinations. To estimate each of the weights $w \in P$ we factor in both the games that we have played as well as the games that we have observed. Each weight $w$ corresponds to how effective a given formation/style combination is against a given opposition formation/style. These are computed using Equation \ref{eq:weight} where we look at the games won when using the formation/style ($x$) against the given opposition formation/style ($y$), both in games we have played (first fraction) and in games we have observed (second fraction). \begin{equation}\label{eq:weight} w_{xy} = \Bigg(\frac{games won}{games played}+\frac{observed games won}{observed games}\Bigg)\div{2} \end{equation}\\ These weights in $P$ are updated after each game-week so should become more accurate across the season. In game-week 1 all weights can either be set to 1 or be carried over from the previous season. In the next season, we outline how $P$ is used to optimise the pre-game tactics in the Bayesian Game and in-match decisions in the stochastic game. \subsection{Optimising Tactics using Prior Games} Once we have computed the weights that we use in $P$, these can be used when making our pre-match decisions in our Bayesian game. In the optimisation model, a payoff table is computed for each combination of opposition actions to give the probability of the match outcomes based on their selected action of styles $S$ and formations $f$, where $h$ is home win, $d$ is a draw and $a$ is an away win. The payoff for the team is the weighted sum of win and draw probabilities that we store in payoff table made up from the different decision that we can make. \iffalse \begin{table}[h!] \begin{tabular}{cccc} & $S_1$ & $\hdots$ & $S_x$ \\ \cline{2-4} \multicolumn{1}{l|}{$f_1$} & \multicolumn{1}{l|}{$p(h,d,a|S_1,f_1)$} & \multicolumn{1}{l|}{$\hdots$} & \multicolumn{1}{l|}{$p(h,d,a|S_x,f_1)$} \\ \cline{2-4} \multicolumn{1}{l|}{$f_2$} & \multicolumn{1}{l|}{$p(h,d,a|S_1,f_2)$} & \multicolumn{1}{l|}{$\hdots$} & \multicolumn{1}{l|}{$p(h,d,a|S_x,f_2)$} \\ \cline{2-4} \multicolumn{1}{l|}{$f_3$} & \multicolumn{1}{l|}{$p(h,d,a|S_1,f_3)$} & \multicolumn{1}{l|}{$\hdots$} & \multicolumn{1}{l|}{$p(h,d,a|S_x,f_3)$} \\ \cline{2-4} \multicolumn{1}{l|}{$\vdots$} & \multicolumn{1}{l|}{$\vdots$} & \multicolumn{1}{l|}{$\hdots$} & \multicolumn{1}{l|}{$\vdots$} \\ \cline{2-4} \multicolumn{1}{l|}{$f_y$} & \multicolumn{1}{l|}{$p(h,d,a|S_1,f_y)$} & \multicolumn{1}{l|}{$\hdots$} & \multicolumn{1}{l|}{$p(h,d,a|S_x,f_y)$} \\ \cline{2-4} \end{tabular} \caption{\small An example payoff table for a team who can have a tactical style of $S_1$ to $S_x$ and a given formation $f_1$ to $f_y$.} \label{tab:bayes_nash} \end{table} \vspace*{-\baselineskip} \fi We can then apply the computed weights in $P$ to the payoff table to weigh each payoff depending on how successful these have been in prior games and in observed games. Therefore, we can optimise the tactical decision based on the weighted payoffs in these tables using either the best approach, spiteful or expectimax approaches which are decided based on our fluent objective. This means that if a formation/style combination has never worked in games we have played or observed the payoff will be weighted by 0 and not be selected. The same approach can be applied when changing the formation and style in the in-match stochastic game and each change made can be weighted by the corresponding element in $P$. In the next section, we perform a number of experiments on our models and assess the performance over a whole given season as well as how the inclusion of $O$ and $P$ each game-week can be used to help teams improve their performance and meet their objectives. \section{Empirical Evaluation} To evaluate our models we use a dataset collected from two seasons (2017/18 and 2018/19) from the English Premier League (EPL).\footnote{All data provided by StatsBomb - www.statsbomb.com.} The dataset breaks down each of the games from the tournament into an event-by-event analysis where each event gives different metrics including event type (e.g., pass, shot, tackle), the pitch coordinates of the event and the event outcome. This type of dataset is industry-leading in football and used by top professional teams. Thus, this is a rich real-world dataset that allows us to rigorously assess the value of our model. \subsection{Experiment 1: Learning the Fluent Objective} Here, we test our fluent objective model in each game-week. Firstly, we evaluate the individual game prediction model that is used to feed the probabilities of outcomes into our season simulation. Secondly, we evaluate our season simulation prediction model using a Markov-chain Monte-Carlo (MCMC) simulation with respect to its accuracy as the season progresses. In Experiment 2, we test our MAP estimator for setting fluent objectives at each game-week. To predict the outcome probabilities of individual games we a the deep learning neural network model that calculates pay-offs in the Bayesian game.\footnote{We use a fully-connected feed-forward NN with 3 layers \& a ReLU activation function.} Over the past two EPL seasons the accuracy of the model is 72.99\% with a precision of 69.48\%, recall of 59.5\% and F1 Score of 59.82 \%. This model is used to calculate the probability distribution used in our MCMC model for the entire season. We then run a number of experiments of our MCMC simulation of a season. We predict all remaining games 100,000 times and find the most likely league standings after 38 game-weeks. We can compare this to the final league ranks and compare the differences. In Figure \ref{fig:weeks}, we show an average of all clubs' absolute difference in their actual finishing position and their predicted finishing position. This is run after each game-week so we have more information about the games that have already been completed. Week 0 is the prediction before any games have been played and week 37 is the final prediction after 37 out of 38 games have been played. \begin{figure}[h!] \centering \begin{tikzpicture}[thick,scale=1, every node/.style={scale=0.8}] \begin{axis} [ xlabel=Gameweek, ymin=0,ymax=1.5, xmin=0, xmax=37, width=\columnwidth-10, height=\columnwidth-90, legend pos=north west, smooth, y label style={at={(axis description cs:0.075,.5)},anchor=south}, ylabel= \# Differences] \addplot[color=red,line width=0.25mm, solid] coordinates{ (0,0.7) (1,0.7) (2,0.9) (3,1.1) (4,1.0) (5,0.7) (6,0.7) (7,0.8) (8,0.6) (9,0.9) (10,0.8) (11,1.0) (12,0.9) (13,1.0) (14,1.1) (15,0.9) (16,0.9) (17,0.8) (18,0.8) (19,0.8) (20,0.8) (21,1.0) (22,1.0) (23,1.1) (24,0.9) (25,0.8) (26,0.9) (27,1.0) (28,0.7) (29,0.9) (30,1.0) (31,0.9) (32,0.8) (33,0.5) (34,0.8) (35,0.5) (36,0.2) (37,0.2) }; \addplot[color=blue,line width=0.1mm, dashed] coordinates{ (0,0.7) (1,0.7) (2,0.7666666666666666) (3,0.85) (4,0.8800000000000001) (5,0.8800000000000001) (6,0.8800000000000001) (7,0.86) (8,0.76) (9,0.74) (10,0.76) (11,0.82) (12,0.8400000000000001) (13,0.9199999999999999) (14,0.9600000000000002) (15,0.9800000000000001) (16,0.96) (17,0.9400000000000001) (18,0.9) (19,0.8400000000000001) (20,0.82) (21,0.8400000000000001) (22,0.8800000000000001) (23,0.9400000000000001) (24,0.96) (25,0.96) (26,0.9400000000000001) (27,0.9399999999999998) (28,0.86) (29,0.8600000000000001) (30,0.9) (31,0.9) (32,0.86) (33,0.82) (34,0.8) (35,0.7) (36,0.56) (37,0.44000000000000006) }; \addlegendentry{\small Ave Difference} \addlegendentry{\small Moving Average} \end{axis} \end{tikzpicture} \caption{2018/19 EPL Actual League Standings vs MCMC Predictions} \label{fig:weeks} \end{figure} As shown in Figure \ref{fig:weeks}, we can see how in the first half of the season the league standings remain fairly unpredictable due to the number of different possible combinations that we are attempting to predict --- there are a total of $\num{2.43e+18}$ different combinations of team order that the league could finish in.\footnote{The vast number of possible combination is why we use position differences rather than the overall accuracy of the entire standings after each game-week.} We do see however that as the season unfolds and we have a better idea of team performance the simulation accuracy improves. This is also to be expected as we are simulating fewer games later into the season and we have more evidence from those having taken place in the real world. This shows that we have a suitable method to extract a distribution of where we expect a team to finish and can therefore derive the fluent objective using a MAP estimation to get our objective. This is shown in the next experiment. \subsection{Experiment 2: Setting the Fluent Objective} To test our MAP estimation, after each game-week simulation we set the fluent objective for all 20 EPL teams. We then assess if their objective was met at that game-week and show the percentage of teams that were successful in meeting their objectives. This is shown in Figure \ref{fig:weeks2} where week 0 is the prediction before any games and week 37 is the final prediction. \begin{figure}[h!] \centering \begin{tikzpicture}[thick,scale=1, every node/.style={scale=0.8}] \begin{axis} [ xlabel=Gameweek, smooth, ymin=40,ymax=100, xmin=0, xmax=37, width=\columnwidth-10, height=\columnwidth-90, legend pos=north west, y label style={at={(axis description cs:0.075,.5)},anchor=south}, ylabel= Accuracy \%] \addplot[color=red,line width=0.25mm, solid] coordinates{ (0,65.0) (1,65.0) (2,65.0) (3,55.00000000000001) (4,55.00000000000001) (5,65.0) (6,65.0) (7,65.0) (8,75.0) (9,65.0) (10,65.0) (11,65.0) (12,65.0) (13,65.0) (14,55.00000000000001) (15,65.0) (16,65.0) (17,65.0) (18,65.0) (19,65.0) (20,75.0) (21,65.0) (22,65.0) (23,75.0) (24,65.0) (25,75.0) (26,75.0) (27,65.0) (28,75.0) (29,85.0) (30,75.0) (31,75.0) (32,75.0) (33,85.0) (34,75.0) (35,75.0) (36,85.0) (37,85.0) }; \addplot[color=blue,line width=0.1mm, dashed] coordinates{ (0,65.0) (1,65.0) (2,65.0) (3,62.5) (4,61.0) (5,61.0) (6,61.0) (7,61.0) (8,65.0) (9,67.0) (10,67.0) (11,67.0) (12,67.0) (13,65.0) (14,63.0) (15,63.0) (16,63.0) (17,63.0) (18,63.0) (19,65.0) (20,67.0) (21,67.0) (22,67.0) (23,69.0) (24,69.0) (25,69.0) (26,71.0) (27,71.0) (28,71.0) (29,75.0) (30,75.0) (31,75.0) (32,77.0) (33,79.0) (34,77.0) (35,77.0) (36,79.0) (37,81.0) }; \addlegendentry{\small \% Accuracy} \addlegendentry{\small Moving Average} \end{axis} \end{tikzpicture} \caption{Accuracy of Setting the Fluent Objective (2018/19 EPL Season).} \label{fig:weeks2} \end{figure} \vspace*{-\baselineskip} As we can see in Figure \ref{fig:weeks2} the fluent objective accuracy rises as the season progresses and from week 15 onwards we see the accuracy of the fluent objective setting rise more clearly. This shows that we can set realistic to aim for as the season progresses in relation to the actual league outcomes and what was achieved by the teams. One thing to note in this experiment is that not every team in the league can meet their objective as there may be more teams aiming for something than can achieve it (e.g., 3 teams aiming to win the league). Also, 3 teams must always be relegated which the minimum objective is to avoid, meaning that even in the best case only 85\% of teams will achieve their objective. We find that in weeks 36 and 37, we reach the maximum 85\% of teams meeting their objectives. \subsection{Experiment 3: Learning from Observing Games} To test the impact of the addition of the weights $w$ that we estimate in $P$, we evaluate how the weights are able to boost our ability to predict the outcomes of games based on the tactical decisions and therefore improve our payoff model. To evaluate our $P$ weights, we compare the accuracy of the predictions of the model presented in \cite{Dixon_Coles} both with and without $P$ (this model makes up part of the feature set that is used for calculating the payoffs). We then assess the differences in terms of the models' ability to be able to accurately predict the outcome of the game running the tests over 1046 games. In both cases, the prediction is the highest probability that is given to one of the results (home win, away win and draw). The results from this experiment are shown in Figure \ref{fig:payoff}.\footnote{The precision, recall and F1 score are computed as a weighted average of the ability to predict each outcome using SciKit Learns' multi-class support.} \pgfplotstableread[row sep=\\,col sep=&]{ interval & diff \\ Accuracy & 60.038 \\ Precision & 56.32 \\ Recall & 60.03 \\ F1 Score & 57.11 \\ }\mydata \pgfplotstableread[row sep=\\,col sep=&]{ interval & diff \\ Accuracy & 61.759 \\ Precision & 57.82 \\ Recall & 61.759 \\ F1 Score & 58.388 \\ }\newdata \begin{figure}[h!] \centering \begin{tikzpicture} \centering \begin{axis}[ ybar, bar width=0.3cm, symbolic x coords={Accuracy,Precision,Recall,F1 Score}, xtick=data, ylabel={\small Percentage (\%)}, width=\columnwidth-30, height=\columnwidth-120, ymin=50,ymax=70, y label style={at={(axis description cs:0.15,.5)},anchor=south}, ] \addplot[pattern=north east lines, pattern color=blue, every node near coord/.style={inner ysep=5pt}, error bars/.cd, y dir=both, y explicit] table[x=interval,y=diff]{\mydata}; \addplot[pattern=horizontal lines, pattern color=red, every node near coord/.style={inner ysep=5pt}, error bars/.cd, y dir=both, y explicit] table[x=interval,y=diff]{\newdata}; \addlegendentry{\small Without $P$} \addlegendentry{\small With $P$} \end{axis} \end{tikzpicture} \caption{\small Payoff Model Performance Comparison.} \label{fig:payoff} \end{figure} As we can see in Figure \ref{fig:payoff} by using the weights in $P$ we are able to boost in the accuracy of the model, and therefore the accuracy of our payoffs, achieving a boost of 1.76\%. We also see that there is an increase in the precision, recall and F1 of our model by 1.50\%, 1.72\% and 1.27\% respectively. Even though this represents a fairly small increase to the results of the model in \cite{Dixon_Coles}, it shows that by learning from what tactics have worked (both for your team and others), we can boost our ability to calculate the tactical decision pay-off and therefore our ability to optimise decisions made. Over a large scale of time such as a 38 game-week season, a 1.76\% boost in performance could be the difference between finishing a place higher in the league which can have huge financial gain and help to achieve our set fluent objective. \subsection{Experiment 4: Optimising Team Long-Term Performance} Our final experiment assesses how we incorporate the fluent objective $O$ and weights in $P$ into the tactical decision-making optimisation model presented in \cite{beal2020optimising} and evaluate how this improves team performance to help them meet their objective. To test this we simulate an entire season week by week and apply our model to a single team in the simulation. After each game-week we simulate the remaining games and recalculate $O$ and $P$ as outlined in Figure \ref{fig:flowchart}. We then compare our results using the new model across a simulated season against a simulation where we do not use the $O$ and $P$. We show the results from this when running separate simulations for a set of different teams\footnote{We use the bottom 8 teams in the 2018/19 EPL season to show we can improve their performance.} (the team we use is the only team using the new model in each simulation) in Figure \ref{fig:boost}. We show the average difference in the mean-expected finishing position from the distribution of each team that we run our season simulation for, both using the new model and without. \begin{figure}[h!] \centering \begin{tikzpicture}[thick,scale=1, every node/.style={scale=0.75}] \begin{axis}[ xbar, xmin=0,xmax=4, xlabel={Average Difference in Final Position}, bar width=0.25cm, symbolic y coords= {With $P$ and $O$}, Without $P$ and $O$}, ytick=data, width=\columnwidth-45, height=\columnwidth-155, enlarge y limits={abs=0.5cm}, legend style={at={(0.675,0.05)},anchor=south west} ] \addplot[pattern=vertical lines, pattern color=red] coordinates { (3.735375,{With $P$ and $O$}) (0.83175,Without $P$ and $O$)}; \end{axis} \end{tikzpicture} \caption{\small Payoffs of Real-World vs. Optimised Decisions} \label{fig:boost} \end{figure} This shows how our model can improve the probability of teams' finishing positions and see that on average there is a 2.90 position improvement when using $O$ and $P$ compared to without for our test set of teams. This is achieved as by using $O$ and $P$ teams can add more context to their decisions, also by selecting the optimal tactics each week in the simulation using the model in \cite{beal2020optimising} we would also expect to see a boost to the performance. Below, we highlight an example of the distribution improvement of the simulation when aiming to optimise the performance of Southampton FC (only team using the optimisation model in the simulation). Figure \ref{fig:hist-new} shows the distribution with $O$ and $P$ applied and not applied. \begin{figure}[h!] \centering \begin{tikzpicture} \begin{axis}[ ymin=0, ymax=35, xmin=1, xmax=20, xlabel=Final League Position, ylabel=Probability (\%), width=\columnwidth-40, height=\columnwidth-120, y label style={at={(axis description cs:0.15,.5)},anchor=south}, smooth, legend pos=north west, ] \addplot[color=red,line width=0.25mm] coordinates{ (1,0.0) (2,0.0) (3,0.0) (4,0.1) (5,0.1) (6,0.5) (7,0.7) (8,1.0) (9,1.9) (10,3.2) (11,5.4) (12,7.0) (13,8.8) (14,14.7) (15,17.1) (16,15.6) (17,13.6) (18,6.4) (19,3.6) (20,0.3) }\closedcycle; \addplot[color=blue,line width=0.25mm] coordinates{ (1,0.0) (2,0.0) (3,1.2) (4,1.7) (5,5.4) (6,8.5) (7,12.5) (8,13.2) (9,13.5) (10,11.5) (11,8.7) (12,6.3) (13,5.2) (14,4.7) (15,3.9) (16,2.3) (17,0.7) (18,0.7) (19,0.0) (20,0.0) }\closedcycle; \addplot[red,sharp plot,update limits=false,line width=0.25mm, dashed] coordinates {(14.564, 0) (14.564, 100)}; \addplot[blue,sharp plot,update limits=false,line width=0.25mm, dashed] coordinates {(9.425, 0) (9.425, 100)}; \node[text=blue] at (102.5,320) {\small $\mu=9.4$}; \node[text=red] at (155,320) {\small $\mu=14.6$}; \addlegendentry{\small Without} \addlegendentry{\small With} \end{axis} \end{tikzpicture} \caption{Example League Outcome Probability Distribution for Southampton FC in 2018/19.} \label{fig:hist-new} \end{figure} As we can see from the example shown in Figure \ref{fig:hist-new}, we can use the fluent objectives to help teams boost their probabilities of winning games that matter, and thus boost their expected finishing position, increasing the mean of the expected finishing distribution by up to 35.6\%. We see similar improvements to this across our test set of teams. In the next section, we will further discuss these results, the real-world implications and some further findings. \section{Discussion} One interesting finding from further experiments is when we simulate the season with all teams using the model discussed in this paper to select their tactics. When we run this simulation, we find that the results cancels itself out and the final standings are very similar to what we see when we run the simulation without the new fluent objective and prior game weights. We see that there is a boost of under 1 position on average per team when every team uses the model in the same season. This shows that teams can gain a boost in their performance over the season but only if they utilise the game theoretic approaches while all others are not. Another observation we see in our results is when we compare the increase in the positional distribution using the model between the stronger top-half teams and the teams who are in the lower half of the league and aiming to stay in the division. When using the model for the latter, we observe a substantial boost of up to 35.6\% in long-term performance. This may be due to the algorithm helping teams using the new model gain positive results in the closer games at the bottom of the table when playing teams of similar ability and thus preventing them gaining points by taking all 3 for yourself. In turn, higher up the league teams often win the games they are expected to against weaker teams so the performance boost is lower. It is also worth noting that across the season there are also a number of other variables that can affect team decision-making both tactically and off the pitch. As teams re-assess their objectives in the season, there are decisions off the pitch that can help boost their performing as well as the tactical decision optimisation that helps on it. One example is a change in managers/coaches, this is often a measure taken for an underperforming team and can help boost performance. If a team is doing well and wants to push higher up the table or is struggling and needs new players, then during January teams are able to invest money into new players to improve their team and again improve. These types of decisions could be added into the model to help decision makers at clubs subjectively decide when to invest more money or make changes. \section{Conclusions and Future Work} This paper presents a novel model for the long-term tactical decisions that are made in football and helps teams to optimise their decisions by adding more long-term context. We introduce the concept of a \emph{fluent objective} that allows us to re-evaluate team performance and base decisions based on a wider environment. We find that we can build models that are able to predict the final outcome of the table on a regular basis, and then using a MAP estimation to effectively set the fluent objective each week. We also learn from other games that happen in the overall environment and find that this can boost the performance of pay-off models in our multi-step games. Overall, we find that our model can be used for football teams who are looking to improve their overall expected league position (on average improves teams by 2.90 positions) and, show that the concept of a fluent objective can help to optimise long-term performance in a competitive league setting. Due to the success we show when using fluent objectives for an application to football in this paper, in future work we intend to test our approach in other domains. For example, they could be used in security games and UAV swarms as the objective also often change over a given time frame. This testing will help to further verify how the modelling of objectives can aid long-term performance. We also aim to further improve our $P$ weights with applications of the observational learning and reinforcement learning as presented in \cite{borsa2019observational}. Finally, the reinforcement learning techniques presented in \cite{silver2016mastering,matthews2012competing} could be used to further optimise team performance. \begin{acks} We would like to thank the reviewers for their comments. This research is supported by the AXA Research Fund and the EPSRC NPIF doctoral training grant number EP/S515590/1. \end{acks} \clearpage \bibliographystyle{ACM-Reference-Format} \balance \section{Introduction} There are many examples in the real-world of agents or teams of agents aiming to optimise their performance over long periods of time. These often involve a series of multi-step games that feed into one another as well as other factors in the wider environment. Examples of this includes security games where agents aim to constantly protect facilities against attackers that are able to change their tactics and decisions \cite{paruchuri2008playing,shieh2012protect,kiekintveld2009computing}, as well as in the stock market where agents aim to continually make optimal decisions to make profits in fluid real-world environments \cite{lux1999scaling,bak1997price,kagan1995risk}. In this paper, we focus on the long term optimisation of decision-making in team sports. Specifically in games of Association Football (soccer).\footnote{Referred to as just ``football'' throughout this paper.} Although the models could be applied in a number of domains, football presents us with an interesting challenge where a team of human agents compete against other teams of agents across long periods and the success of teams is not only judged in individual games but how they perform over a season in a league format (supported with many years of real-world datasets). This means that there are a set of teams whom each season play every other team twice, both home and away. Teams are awarded points based on winning, losing or drawing and at the end of the season teams are awarded prize money and other incentives based on their points gained in comparison to all other teams in a league rankings/standings.\footnote{https://www.express.co.uk/sport/football/1300924/Premier-League-prize-money-2020-how-much-Liverpool-earn.} Past work in this area has focused on optimising performance in individual games \cite{beal2020optimising} or for extracting contribution of individual players \cite{beal2020learning,decroos2020vaep,fernandez2019decomposing}. However, to date, there is no formal model for optimising team performance and tactical decision-making over a longer period of time. Against this background, we propose a formal model for optimising the long-term performance of football teams and how they can extract knowledge from other games in the league environment. We introduce the novel notion of a \emph{fluent objective} which is a sequence of ``objective variables", each one corresponding to a particular point in the agent's planning horizon (i.e., a game in the game season). We should also clarify that these variables can take the form of a broader goal (e.g., win the league or do not get relegated). We use Markov chain Monte Carlo simulations to help look ahead into the future and allow us to set realistic achievable objectives which add more context to our tactical decision-making in individual games. We also take inspiration from observational learning \cite{borsa2019observational,bandura2008observational,jang1999ensemble} to help teams extract information from other games that happen in the environment and past games they have played themselves. This is used to identify tactical decisions that boost the chances of gaining positive results against given oppositions. As the season progresses, teams learn more as more games unfold --- we encapsulate this into our modelling. Thus, this paper advances the state of the art in the following ways: \begin{enumerate} \item We propose a mathematical model for optimising the long-term performance of human teams and apply this to the game of football. \item Using real-world data from 760 real-world football games from the past two seasons of the English Premier League (EPL), we can set the fluent objective based on accurate league simulations and further improve individual game payoffs by using knowledge from prior games. In particular, we show that we can increase teams finishing position on average by up to 2.9 ranks (out of 20). \item By using a fluent objective and prior game knowledge we are able to show an increased probability of improved long-term performance in real-world football teams (by up to 35.6\%). \end{enumerate} Our results show that by looking ahead and thinking about long-term goals, teams can add more context to the tactical decisions that are made for individual games and thus are more likely to achieve the long-term objectives that they have set. The rest of this paper is structured as follows, in Section 2 we provide a background and in Section 3 we discuss how we model long term performance. In Section 4 and 5 we discuss how we calculate the fluent objective and learn from prior games respectively. We run simulation experiments on our models in Section 6 and discuss these in Section 7. Finally, Section 8 concludes. \section{Background} In this section, we review related literature showing other examples of modelling real-world problems. We also give an overview of why long-term football tactics are important, what is involved and discuss how this is approached for individual games in \cite{beal2020optimising}. \subsection{Related Work} Here, we explore the related work to how we can model long-term flowing games such as a sports league as well and giving some background into sports tactics literature. \subsubsection{Modelling the Real-World} As far as we are aware, the notion and modelling of \emph{fluent objectives} in this paper, which allows us to optimise long-term performance, is entirely novel. However, it was inspired by work presented in situations and fluents in first-order logic and situation calculus \cite{lin2008situation}. We see this approach being used to create a model for environmental context in \cite{ranganathan2003infrastructure}. The authors model enables context awareness to help build context-aware applications. Similarly in our model, we aim to gain context of the other teams in the environments to help make decisions based on the future league standings. There are also agents reacting to situations in their environment in \cite{sim2003agents}, where agents react to the ever-changing variables in the stock market. In our work, we also aim to learn from prior games and other games that happen in the environment to gain a better understanding into what tactics work against given opponents. This is closely related to the work presented in \cite{borsa2019observational}, where the authors explore the notion of ``observation learning" which is is a type of learning that occurs as a function of observing, retaining and imitating the behaviour of another agent. This is applicable to football as if we observe another team perform well against another opponent then we may want to imitate their tactics to help us to win. Other examples of this type of work are shown in \cite{piot2013learning,russell1998learning,silver2016mastering}. \subsubsection{Sports Tactics} In the sports domain, there are examples of work focused on team tactics and decision-making in football and other team sports \cite{beal2019artificial}. In terms of long-term decision-making though the key example of agents being used to optimise this in sport is shown in \cite{matthews2012competing} which presents a successful model for competing in fantasy football games.\footnote{https://fantasy.premierleague.com/help/rules.} Here, the authors use machine learning to predict the performance of individual players and then use deep-reinforcement learning to optimise decisions on a week-by-week basis and look ahead to maximise their chances of success. By doing so, they rank in the top 1\% of human players. In our work, we can take inspiration from this in the real-world and help human coaches and managers make decisions on human footballers. We also see examples of tactical papers for sport in \cite{jordan2009optimizing} exploring different risk strategies for play-calling in American Football. As well as some key football papers to help improve human performance and identify high-performing players and pairs of players are shown in \cite{fernandez2019decomposing,decroos2020vaep,beal2020learning}. To provide more intuitions around long-term decision-making, in the next subsection we give a background to football tactics and their importance to the game as well as the league structure. \subsection{Long-Term Football Tactics}\label{subsec:tactics-background} In football, individual games are incredibly important, but what is often overlooked tactically is the impact that each game has over a longer period of time and on the overall league standings. The final league standings is the final position of all teams in a league based on the points they have gained over an $N$ game season. In a standard football league (e.g., English Premier League or German Bundesliga), across a season each team plays each other twice (once home and once away) a win is worth 3 points, a draw 1 point and a loss no points. There are huge intrinsic and financial gains to be made by finishing higher up the table and there are certain milestones that teams aim for to boost their success such as qualification for European competitions.\footnote{http://eightyfivepoints.blogspot.com/2018/03/show-me-money-how-much-is-each-premier.html.} The season is often broken down into given ``game-weeks" where all teams play a game within the week. We can therefore breakdown the season into these game-weeks as incremental steps in a game. In each week our team plays a game and a number of other games also take place. We therefore, want to maximise our own performance in our game and learn from other games for the future when we play those teams (see Figure \ref{fig:flowchart}). Therefore, in this paper we aim to model teams tactical decisions based on the overall league environment and use \emph{fluent objectives} to add context to our decisions and prior games knowledge to imitate other successful teams. In the next section, we discuss the model that this paper builds on for optimising tactical decision-making in individual games. \subsection{Modelling the Game of Football}\label{subsec:extend} The modelling presented in this paper extends on the formal model for football that is presented in \cite{beal2020optimising} for optimising the tactics in an individual game. In \cite{beal2020optimising} the authors use a multi-step game to represent the pre-match tactical decisions that are made using a Bayesian game (representing the unknowns of opposition decisions), this then feeds into the in-match decisions made which is modelled as a stochastic game (representing the score-line states in a game). Using these models teams are able to optimise their tactics by up to 16.1\%. In this paper, we extend that model by adding context of the wider environment of the league. By using our fluent objective and prior game weightings we can further optimise these tactics to no only improve the chances of a positive result in the individual game but improve the long-term performance of the team in the league standings. \section{Modelling Long Term Team Performance} \begin{figure*} \centering \includegraphics[scale=0.6]{Images/football-flow.pdf} \caption{Sequence of Multi-Games Across a Season} \label{fig:flowchart} \end{figure*} In this section, we discuss how we model the long-term performance of football teams over a season and identify how we can use fluent objectives and learn from games to optimise long-term performance of a team. At the start of a given season or competition, a team will have some aim of how well they want to do and what they want to achieve. In a knockout style cup competition such as the FIFA World Cup or English FA Cup, every team is aiming to win every game as this is the only way to win overall; there are no prizes for second place. Across a full season, however, there are a number of objectives that a team can have that will help maximise their financial gains and reputation of the team. For example, as discussed in Section \ref{subsec:tactics-background}, in the English Premier League there is always only one winner but there are also benefits to finishing in the top 4, top 7 and avoiding finishing in the bottom 3. We therefore, model an entire season in football that could be applied to help optimise teams' long-term performance in any league across the world and at any level. \subsection{Sequence of Multi-Games Across a Season} In Figure \ref{fig:flowchart} we show the structure of our model for an entire season in football. This style of model could also be applied in security games or for emergency response where we aim to optimise the performance of teams of agents in evolving environments with ever-changing objectives \cite{ramchurn2016disaster,shieh2012protect}. We build on the multi-step (Bayesian into stochastic) games for optimising single game tactics to help teams achieve their objectives in an $N$ game season. There is a sequence of steps that we highlight and show how each one feeds into the next. We also show how a teams' pre-season objective can be fed into the first game which in-turn can use this to aid the tactical decision-making process as well as the parameters we learn while playing each game (e.g., certain tactics that work well against certain teams). Both the pre-match Bayesian game and the in-match stochastic game can use the objective to help set the risk parameters and select the tactics that will best help the team in the overall environment of the league. This objective then changes as the season progresses and teams aim for different levels of achievement, therefore making this a \emph{fluent objective}; e.g., a team may have had high hopes at the start of the season of winning the league, but if they have a poor start they may have to update their objective to ensure they finish in the top 4. As we show in Figure \ref{fig:flowchart}, the pre-season objective is set as $O_0$, this then changes each game-week as the environment around the team develops, changing to $O_1$ after game-week 1, $O_2$ after game-week 2 and so on until the final in-season objective the week before the final game of the season $N-1$. The final fluent objective, $O_N$, corresponds to the overall end of season outcome ($S_O$), which we can compare to the fluent objective at each game-week to assess the team performance across the season. As discussed in Section \ref{subsec:model-obj}, the $O_x$ and $S_O$ variables might not have distinct values (i.e., maybe $O_0 = O_1$ and so on). We also consider how we can learn from the games that are played as the season progresses. As we play each game we learn something new, both about what works for our own team and what works against a given opposition. We therefore learn parameters from each game that we can carry forward through each game-week and similarly to the fluent objective we update each week. For example, we may find that when our team uses a given formation against a certain style of opponent we see better results. As we show in Figure \ref{fig:flowchart}, this is encapsulated by a \emph{prior knowledge parameter} $P$, which is updated after each game we play where $P_1$ is after game-week 1, $P_2$ after game-week 2 and so on until the penultimate game-week of the season $O_{N-1}$. We explain the precise form of the $P$ parameter in Section \ref{subsec:prior} below. Finally, we must consider the other games that are happening each week in the league environment, $\mathcal{G}_N$ is the set of other games in game-week $N$ and $\mathcal{G} = \{G_1, G_2, ..., G_z\} $ where $z$ is the number of other games played in that week. Within each game-week, all other teams also play one another, so that at the end of the season, each team has played every other team twice (once at home and once away). For example, in the EPL there are 20 teams in the league, each team plays the other 19 teams' twice which is 38 games. In the EPL there are a total of 380 games, and so there are 342 that do not involve the team that we are focused on for our optimisation. These games are observable so we can learn from each one, which in turn affects our fluent objective $O$ and what we learn after each game-week $P$. As discussed in Section \ref{subsec:tactics-background}, the outcomes of the other games affect the league table with teams gaining 3 points for a win and 1 point for a draw. We therefore must consider the other teams' performances when setting $O$. We can also observe other games tactically to learn what styles and formations work best against given teams, this is how we can learn $P$ from prior games. In the following subsections, we go into more detail regarding how we model the fluent objective $O$ and how we can learn from prior games $P$. \subsection{Fluent Objectives}\label{subsec:model-obj} At the start of each season, a team will have some objective for what they are looking to achieve in the next season. These goals are decided based on several factors such as previous season performance and money invested into the team. The goals are usually set by the owners/directors of the team and are based on their subjective opinions of how their team should perform and where they should place in the league against the other teams. The opinions of what the team should achieve then changes over the season which can drive key decisions such as a change in coach/manager for an under-performing team or investing more money into an over-performing team so they achieve a European place which comes with huge financial gains. In other settings, these type of objectives could be the defence of a given target or the rescue of a person. Our model for the fluent objective can objectively evaluate how we expect a team to perform over a season and allow teams to change their tactical decision-making based on this. There two different objectives that can be set: a more granular objective of the expected league position and an objective of what could be achieved in terms of broader incentives in the league (e.g., avoiding relegation or qualifying for European competitions). In this paper, we focused on the latter and can define the set of possible objectives as $\mathcal{O} = \{o_1, o_2, ..., o_k\}$ where $k$ is the number of different objectives. An example of the set of objectives --- more accurately, the set of values that an $O_x$ objective variable can take --- in the EPL would be: \begin{itemize} \item \textbf{Winning the League ($o_1$):} Awarded to the team who finishes top of the league. \item \textbf{Qualifying for the Champions League ($o_2$):} Awarded to the top 4 teams, so in this case the objective relates to teams finishing 2nd-4th.\footnote{https://www.premierleague.com/european-qualification-explained.} \item \textbf{Qualifying for the Europa League ($o_3$):} Another European competition usually awarded to teams who finish between 5th-7th. \item \textbf{Top Half Finish ($o_4$):} The financial benefit of finishing higher in the league are huge and therefore teams often aim to finish in the top half of the table (higher than 10th).\footnote{https://www.goal.com/en-gb/news/how-much-money-do-premier-league-2019-20-winners-get/19jbauady17cw1ieojo40yextz.} \item \textbf{Avoiding Relegation ($o_5$):} The bottom 3 (18th-20th) teams in the EPL are relegated into the English Football League (EFL) Championship which is the second division of the English football leagues. \end{itemize} To set the objective we can simulate how we expect the season to unfold and create a distribution $\mathcal{D}$ that allows us to use a Maximum a Posteriori (MAP) estimation \cite{gauvain1994maximum} for the probability of the team finishing in each position. This then allows us to calculate a set of probabilities for of a team achieving each objective $\mathcal{P} = \{p(o_1), p(o_2), ..., p(o_k)\}$. We then set the $O_o$ (for a pre-season objective) as the most likely objective that can be achieved by a team that season. This process can then be re-run after each game-week is completed to give the fluent objective $O_1$ to $O_{N-1}$. Our simulation of the league will include the real-results which will get more accurate as the season progresses and we learn more about each team. This will then mean we have a fluent objective that will change as the season progresses. At the end of the season, we can compare $O_0$ to $O_{N-1}$ to the final outcome $S_O$ that the team achieves. \subsection{Learning From Prior Games}\label{subsec:prior} As well as the fluent objective, we can also improve the tactical decision-making in our Bayesian and stochastic games by adding prior knowledge $P$ that we learn after each game we play and observe. In more general terms we aim to observe and learn from other successful agents and our own actions. This could also be applicable in swarms of UAVs or imitating other agents trading in the financial markets settings. We can learn a set of weights $\mathcal{W}$ that relate to how effective given style/formation pairs (actions that are made in the multi-step games) that we select in our games are against given oppositions style/formation pairs. These weights are initially set to 1 and are then increased if found to be effective and decrease if found to be ineffective. These can be updated after each game-week and also updated from the other games that we observe. Our $P$ value is defined in Equation \ref{eq:pval}. \begin{equation}\label{eq:pval} P = \left( \begin{array}{ccccc} w_{11} & w_{12} & w_{13} & \hdots & w_{1j} \\ w_{21} & w_{22} & w_{23} & \hdots & w_{2j} \\ \vdots & \vdots & \vdots & \hdots & \vdots \\ w_{i1} & w_{i2} & w_{i3} & \hdots & w_{ij} \end{array} \right) \end{equation}\\ Where $w \in \mathcal{W}$ and $i$/$j$ is the number of possible style/formation pairs. The columns represent the style/formation pair selected by our team and the rows represent the style/formation selected by the opposition (e.g., $w_{ij}$ is how effective our style/formation pair $i$ is against an opposition using style/formation pair $j$). In the following sections, we give more details into how we calculate our fluent objective $O$ and how we can learn the weights that make up $P$. We explore how these are used in the individual football match multi-step game (discussed in Section \ref{subsec:extend}) to further optimise the tactical decision-making process. \section{Calculating the Fluent Objective} In this section, we discuss how we simulate seasons, calculate the fluent objective, and how this can be used to optimise game tactics. \subsection{Simulating Season Outcomes} When we simulate the season outcomes and calculate the distributions of where we expect the team to finish we are interested in predicting all remaining games in the season for both our team and all other teams in the league. To do this we first look at the single-game prediction which is discussed in the next subsection. \subsubsection{Single-Game Prediction} To predict the outcomes of single games in the league we use the model that is defined in \cite{beal2020optimising} which is used for calculating the single-game payoffs. The model uses the team's tactical style, potential formation and team strength to give probabilities of a team winning the game. The set of features used are: home team style, away team style, home team formation, away team formation and then team strengths are calculated by using the outputs from the model described in \cite{Dixon_Coles}. The target class is the final result of the game: home team win, away team win or a draw. Using these features, we train a multi-class classification deep neural network. The neural network is trained using stochastic gradient descent using a categorical cross-entropy loss function (Equation \ref{eq:ccelf}) and a soft-max activation function. \begin{equation}\label{eq:ccelf} -\frac{1}{N}\sum^N_{i=1}\log p_{\textit{model}} [y_i \in O_{y_i}] \end{equation} where, $N$ is the number of games that we are using to train the model and $p_{\textit{model}} [y_i \in O_{y_i}]$ is the probability that $y_i$ is in the class $O$. This model takes the given teams, possible playing styles and possible formations to estimate the probability of winning, drawing or losing the game. Using these probabilities we can simulate the outcome of the entire season, this is discussed in the next sub-section. \subsubsection{Season Simulation} To simulate the remaining games of the season, we use the real-world fixture list to ensure that the ordering of the games is correct. We then find the probability of a home win, away win and draw in each game and use a Markov chain Monte Carlo simulation \cite{vrugt2008accelerating} to simulate all remaining games and total up the points that each team will gain (3 points for a win, 1 for a draw and 0 for a loss). This works well as it emulates the randomness that we see in real-world football games. We repeat this process 100,000 times for each simulation which allows us to derive a distribution for the probability that a team will finish in each place in the league in the final standings. An example of this distribution is shown in Figure \ref{fig:hist}. \begin{figure}[h!] \centering \begin{tikzpicture} \begin{axis}[ ymin=0, ymax=25, xmin=1, xmax=20, area style, xlabel=Final League Position, ylabel=Probability (\%), width=\columnwidth-40, height=\columnwidth-150, y label style={at={(axis description cs:0.15,.5)},anchor=south}, ] \addplot+[ybar interval,mark=no] plot coordinates { (1, 0) (2, 0) (3, 0) (4, 0) (5, 1) (6, 1) (7, 2) (8, 3) (9, 6) (10, 8) (11, 11) (12, 14) (13, 17) (14, 21) (15, 20) (16, 13) (17, 7) (18, 5) (19, 3) (20, 2) }; \end{axis} \end{tikzpicture} \caption{Example League Outcome Probability Distribution.} \label{fig:hist} \end{figure} \subsection{Setting the Fluent Objective} Once we have calculated the distributions of possible place outcomes form the MCMC simulation, we use a Maximum a Posteriori (MAP) estimation \cite{gauvain1994maximum} to set the fluent objective. To do this, we can use the posterior distribution to find interval estimates of the final position for the team in the league. We use the position intervals for the objectives discussed in Section \ref{subsec:model-obj} and can find the $o_k \in \mathcal{O}$ that maximises the posterior PDF. This then sets the objective $O_n$ that is used in game-week $n$ and is updated after each game-week. \subsection{Optimising Tactics using the Fluent Objective} Once we have set the fluent objective we can now use this when optimising the team tactics in the multi-step game for optimising individual game tactics in that game-week. In the pre-match Bayesian game outlined in \cite{beal2020optimising}, Beal et al. present 3 options that can be used depending on the overall environment. Here we present modified, novel notions of these options, which now employ the fluent objective. \begin{itemize} \item \textbf{Best Response:} Used to maximise the chances of winning a game. This option is selected if a team is currently not on track to achieve their objective for the season and must win games to be able to achieve their goals. \item \textbf{Spiteful:} Used to minimise the chances of the opposition winning the game (and therefore improve your chances of drawing/winning). This option is selected if a team is well ahead of their objective and that by preventing losing the game they are more likely to stay on track for their objective across the season. \item \textbf{Expectimax:} This is a mixture of the two above and factors both into account (mathematically defined in \cite{beal2020optimising} where refered to as ``minmax").\footnote{We rename since the approach does not align with the usual meaning of the term ``minimax'' or ``minmax'' in Game Theory.} This is selected if a team is on track for their objective and is aiming to stay that way. \end{itemize} In terms of the in-match stochastic game that is also defined in \cite{beal2020optimising} there are two options that can be selected when making in-match decisions. \begin{itemize} \item \textbf{Aggressive Approach:} This is set if a team is losing/drawing a game and wants to win. It will maximise the chance of a team moving to a more positive state. Therefore, if we know that the objective is to win and gain three points we will select this approach. \item \textbf{Reserved Approach:} This is set if a team is winning/drawing and is happy with their current state. It is used to maximise the chances of staying in the current state. Therefore this is used if winning or if a point is a good result in the overall environment in relation to the objective. \end{itemize} In the next section, we move on to assess how we can learn from prior games and other games in the environment and how this can be added to our optimising decisions model. \section{Learning from Previous Games} In this section, we discuss how we can learn from completed prior games that we play and that other teams in the league play. This allows us to find formation/style combinations that work best against a given formation/style combination that an opposition team may use. To do this we learn a matrix of weights $P$ that corresponds to estimated successes of the formation/style combinations. To estimate each of the weights $w \in P$ we factor in both the games that we have played as well as the games that we have observed. Each weight $w$ corresponds to how effective a given formation/style combination is against a given opposition formation/style. These are computed using Equation \ref{eq:weight} where we look at the games won when using the formation/style ($x$) against the given opposition formation/style ($y$), both in games we have played (first fraction) and in games we have observed (second fraction). \begin{equation}\label{eq:weight} w_{xy} = \Bigg(\frac{games won}{games played}+\frac{observed games won}{observed games}\Bigg)\div{2} \end{equation}\\ These weights in $P$ are updated after each game-week so should become more accurate across the season. In game-week 1 all weights can either be set to 1 or be carried over from the previous season. In the next season, we outline how $P$ is used to optimise the pre-game tactics in the Bayesian Game and in-match decisions in the stochastic game. \subsection{Optimising Tactics using Prior Games} Once we have computed the weights that we use in $P$, these can be used when making our pre-match decisions in our Bayesian game. In the optimisation model, a payoff table is computed for each combination of opposition actions to give the probability of the match outcomes based on their selected action of styles $S$ and formations $f$, where $h$ is home win, $d$ is a draw and $a$ is an away win. The payoff for the team is the weighted sum of win and draw probabilities that we store in payoff table made up from the different decision that we can make. \iffalse \begin{table}[h!] \begin{tabular}{cccc} & $S_1$ & $\hdots$ & $S_x$ \\ \cline{2-4} \multicolumn{1}{l|}{$f_1$} & \multicolumn{1}{l|}{$p(h,d,a|S_1,f_1)$} & \multicolumn{1}{l|}{$\hdots$} & \multicolumn{1}{l|}{$p(h,d,a|S_x,f_1)$} \\ \cline{2-4} \multicolumn{1}{l|}{$f_2$} & \multicolumn{1}{l|}{$p(h,d,a|S_1,f_2)$} & \multicolumn{1}{l|}{$\hdots$} & \multicolumn{1}{l|}{$p(h,d,a|S_x,f_2)$} \\ \cline{2-4} \multicolumn{1}{l|}{$f_3$} & \multicolumn{1}{l|}{$p(h,d,a|S_1,f_3)$} & \multicolumn{1}{l|}{$\hdots$} & \multicolumn{1}{l|}{$p(h,d,a|S_x,f_3)$} \\ \cline{2-4} \multicolumn{1}{l|}{$\vdots$} & \multicolumn{1}{l|}{$\vdots$} & \multicolumn{1}{l|}{$\hdots$} & \multicolumn{1}{l|}{$\vdots$} \\ \cline{2-4} \multicolumn{1}{l|}{$f_y$} & \multicolumn{1}{l|}{$p(h,d,a|S_1,f_y)$} & \multicolumn{1}{l|}{$\hdots$} & \multicolumn{1}{l|}{$p(h,d,a|S_x,f_y)$} \\ \cline{2-4} \end{tabular} \caption{\small An example payoff table for a team who can have a tactical style of $S_1$ to $S_x$ and a given formation $f_1$ to $f_y$.} \label{tab:bayes_nash} \end{table} \vspace*{-\baselineskip} \fi We can then apply the computed weights in $P$ to the payoff table to weigh each payoff depending on how successful these have been in prior games and in observed games. Therefore, we can optimise the tactical decision based on the weighted payoffs in these tables using either the best approach, spiteful or expectimax approaches which are decided based on our fluent objective. This means that if a formation/style combination has never worked in games we have played or observed the payoff will be weighted by 0 and not be selected. The same approach can be applied when changing the formation and style in the in-match stochastic game and each change made can be weighted by the corresponding element in $P$. In the next section, we perform a number of experiments on our models and assess the performance over a whole given season as well as how the inclusion of $O$ and $P$ each game-week can be used to help teams improve their performance and meet their objectives. \section{Empirical Evaluation} To evaluate our models we use a dataset collected from two seasons (2017/18 and 2018/19) from the English Premier League (EPL).\footnote{All data provided by StatsBomb - www.statsbomb.com.} The dataset breaks down each of the games from the tournament into an event-by-event analysis where each event gives different metrics including event type (e.g., pass, shot, tackle), the pitch coordinates of the event and the event outcome. This type of dataset is industry-leading in football and used by top professional teams. Thus, this is a rich real-world dataset that allows us to rigorously assess the value of our model. \subsection{Experiment 1: Learning the Fluent Objective} Here, we test our fluent objective model in each game-week. Firstly, we evaluate the individual game prediction model that is used to feed the probabilities of outcomes into our season simulation. Secondly, we evaluate our season simulation prediction model using a Markov-chain Monte-Carlo (MCMC) simulation with respect to its accuracy as the season progresses. In Experiment 2, we test our MAP estimator for setting fluent objectives at each game-week. To predict the outcome probabilities of individual games we a the deep learning neural network model that calculates pay-offs in the Bayesian game.\footnote{We use a fully-connected feed-forward NN with 3 layers \& a ReLU activation function.} Over the past two EPL seasons the accuracy of the model is 72.99\% with a precision of 69.48\%, recall of 59.5\% and F1 Score of 59.82 \%. This model is used to calculate the probability distribution used in our MCMC model for the entire season. We then run a number of experiments of our MCMC simulation of a season. We predict all remaining games 100,000 times and find the most likely league standings after 38 game-weeks. We can compare this to the final league ranks and compare the differences. In Figure \ref{fig:weeks}, we show an average of all clubs' absolute difference in their actual finishing position and their predicted finishing position. This is run after each game-week so we have more information about the games that have already been completed. Week 0 is the prediction before any games have been played and week 37 is the final prediction after 37 out of 38 games have been played. \begin{figure}[h!] \centering \begin{tikzpicture}[thick,scale=1, every node/.style={scale=0.8}] \begin{axis} [ xlabel=Gameweek, ymin=0,ymax=1.5, xmin=0, xmax=37, width=\columnwidth-10, height=\columnwidth-90, legend pos=north west, smooth, y label style={at={(axis description cs:0.075,.5)},anchor=south}, ylabel= \# Differences] \addplot[color=red,line width=0.25mm, solid] coordinates{ (0,0.7) (1,0.7) (2,0.9) (3,1.1) (4,1.0) (5,0.7) (6,0.7) (7,0.8) (8,0.6) (9,0.9) (10,0.8) (11,1.0) (12,0.9) (13,1.0) (14,1.1) (15,0.9) (16,0.9) (17,0.8) (18,0.8) (19,0.8) (20,0.8) (21,1.0) (22,1.0) (23,1.1) (24,0.9) (25,0.8) (26,0.9) (27,1.0) (28,0.7) (29,0.9) (30,1.0) (31,0.9) (32,0.8) (33,0.5) (34,0.8) (35,0.5) (36,0.2) (37,0.2) }; \addplot[color=blue,line width=0.1mm, dashed] coordinates{ (0,0.7) (1,0.7) (2,0.7666666666666666) (3,0.85) (4,0.8800000000000001) (5,0.8800000000000001) (6,0.8800000000000001) (7,0.86) (8,0.76) (9,0.74) (10,0.76) (11,0.82) (12,0.8400000000000001) (13,0.9199999999999999) (14,0.9600000000000002) (15,0.9800000000000001) (16,0.96) (17,0.9400000000000001) (18,0.9) (19,0.8400000000000001) (20,0.82) (21,0.8400000000000001) (22,0.8800000000000001) (23,0.9400000000000001) (24,0.96) (25,0.96) (26,0.9400000000000001) (27,0.9399999999999998) (28,0.86) (29,0.8600000000000001) (30,0.9) (31,0.9) (32,0.86) (33,0.82) (34,0.8) (35,0.7) (36,0.56) (37,0.44000000000000006) }; \addlegendentry{\small Ave Difference} \addlegendentry{\small Moving Average} \end{axis} \end{tikzpicture} \caption{2018/19 EPL Actual League Standings vs MCMC Predictions} \label{fig:weeks} \end{figure} As shown in Figure \ref{fig:weeks}, we can see how in the first half of the season the league standings remain fairly unpredictable due to the number of different possible combinations that we are attempting to predict --- there are a total of $\num{2.43e+18}$ different combinations of team order that the league could finish in.\footnote{The vast number of possible combination is why we use position differences rather than the overall accuracy of the entire standings after each game-week.} We do see however that as the season unfolds and we have a better idea of team performance the simulation accuracy improves. This is also to be expected as we are simulating fewer games later into the season and we have more evidence from those having taken place in the real world. This shows that we have a suitable method to extract a distribution of where we expect a team to finish and can therefore derive the fluent objective using a MAP estimation to get our objective. This is shown in the next experiment. \subsection{Experiment 2: Setting the Fluent Objective} To test our MAP estimation, after each game-week simulation we set the fluent objective for all 20 EPL teams. We then assess if their objective was met at that game-week and show the percentage of teams that were successful in meeting their objectives. This is shown in Figure \ref{fig:weeks2} where week 0 is the prediction before any games and week 37 is the final prediction. \begin{figure}[h!] \centering \begin{tikzpicture}[thick,scale=1, every node/.style={scale=0.8}] \begin{axis} [ xlabel=Gameweek, smooth, ymin=40,ymax=100, xmin=0, xmax=37, width=\columnwidth-10, height=\columnwidth-90, legend pos=north west, y label style={at={(axis description cs:0.075,.5)},anchor=south}, ylabel= Accuracy \%] \addplot[color=red,line width=0.25mm, solid] coordinates{ (0,65.0) (1,65.0) (2,65.0) (3,55.00000000000001) (4,55.00000000000001) (5,65.0) (6,65.0) (7,65.0) (8,75.0) (9,65.0) (10,65.0) (11,65.0) (12,65.0) (13,65.0) (14,55.00000000000001) (15,65.0) (16,65.0) (17,65.0) (18,65.0) (19,65.0) (20,75.0) (21,65.0) (22,65.0) (23,75.0) (24,65.0) (25,75.0) (26,75.0) (27,65.0) (28,75.0) (29,85.0) (30,75.0) (31,75.0) (32,75.0) (33,85.0) (34,75.0) (35,75.0) (36,85.0) (37,85.0) }; \addplot[color=blue,line width=0.1mm, dashed] coordinates{ (0,65.0) (1,65.0) (2,65.0) (3,62.5) (4,61.0) (5,61.0) (6,61.0) (7,61.0) (8,65.0) (9,67.0) (10,67.0) (11,67.0) (12,67.0) (13,65.0) (14,63.0) (15,63.0) (16,63.0) (17,63.0) (18,63.0) (19,65.0) (20,67.0) (21,67.0) (22,67.0) (23,69.0) (24,69.0) (25,69.0) (26,71.0) (27,71.0) (28,71.0) (29,75.0) (30,75.0) (31,75.0) (32,77.0) (33,79.0) (34,77.0) (35,77.0) (36,79.0) (37,81.0) }; \addlegendentry{\small \% Accuracy} \addlegendentry{\small Moving Average} \end{axis} \end{tikzpicture} \caption{Accuracy of Setting the Fluent Objective (2018/19 EPL Season).} \label{fig:weeks2} \end{figure} \vspace*{-\baselineskip} As we can see in Figure \ref{fig:weeks2} the fluent objective accuracy rises as the season progresses and from week 15 onwards we see the accuracy of the fluent objective setting rise more clearly. This shows that we can set realistic to aim for as the season progresses in relation to the actual league outcomes and what was achieved by the teams. One thing to note in this experiment is that not every team in the league can meet their objective as there may be more teams aiming for something than can achieve it (e.g., 3 teams aiming to win the league). Also, 3 teams must always be relegated which the minimum objective is to avoid, meaning that even in the best case only 85\% of teams will achieve their objective. We find that in weeks 36 and 37, we reach the maximum 85\% of teams meeting their objectives. \subsection{Experiment 3: Learning from Observing Games} To test the impact of the addition of the weights $w$ that we estimate in $P$, we evaluate how the weights are able to boost our ability to predict the outcomes of games based on the tactical decisions and therefore improve our payoff model. To evaluate our $P$ weights, we compare the accuracy of the predictions of the model presented in \cite{Dixon_Coles} both with and without $P$ (this model makes up part of the feature set that is used for calculating the payoffs). We then assess the differences in terms of the models' ability to be able to accurately predict the outcome of the game running the tests over 1046 games. In both cases, the prediction is the highest probability that is given to one of the results (home win, away win and draw). The results from this experiment are shown in Figure \ref{fig:payoff}.\footnote{The precision, recall and F1 score are computed as a weighted average of the ability to predict each outcome using SciKit Learns' multi-class support.} \pgfplotstableread[row sep=\\,col sep=&]{ interval & diff \\ Accuracy & 60.038 \\ Precision & 56.32 \\ Recall & 60.03 \\ F1 Score & 57.11 \\ }\mydata \pgfplotstableread[row sep=\\,col sep=&]{ interval & diff \\ Accuracy & 61.759 \\ Precision & 57.82 \\ Recall & 61.759 \\ F1 Score & 58.388 \\ }\newdata \begin{figure}[h!] \centering \begin{tikzpicture} \centering \begin{axis}[ ybar, bar width=0.3cm, symbolic x coords={Accuracy,Precision,Recall,F1 Score}, xtick=data, ylabel={\small Percentage (\%)}, width=\columnwidth-30, height=\columnwidth-120, ymin=50,ymax=70, y label style={at={(axis description cs:0.15,.5)},anchor=south}, ] \addplot[pattern=north east lines, pattern color=blue, every node near coord/.style={inner ysep=5pt}, error bars/.cd, y dir=both, y explicit] table[x=interval,y=diff]{\mydata}; \addplot[pattern=horizontal lines, pattern color=red, every node near coord/.style={inner ysep=5pt}, error bars/.cd, y dir=both, y explicit] table[x=interval,y=diff]{\newdata}; \addlegendentry{\small Without $P$} \addlegendentry{\small With $P$} \end{axis} \end{tikzpicture} \caption{\small Payoff Model Performance Comparison.} \label{fig:payoff} \end{figure} As we can see in Figure \ref{fig:payoff} by using the weights in $P$ we are able to boost in the accuracy of the model, and therefore the accuracy of our payoffs, achieving a boost of 1.76\%. We also see that there is an increase in the precision, recall and F1 of our model by 1.50\%, 1.72\% and 1.27\% respectively. Even though this represents a fairly small increase to the results of the model in \cite{Dixon_Coles}, it shows that by learning from what tactics have worked (both for your team and others), we can boost our ability to calculate the tactical decision pay-off and therefore our ability to optimise decisions made. Over a large scale of time such as a 38 game-week season, a 1.76\% boost in performance could be the difference between finishing a place higher in the league which can have huge financial gain and help to achieve our set fluent objective. \subsection{Experiment 4: Optimising Team Long-Term Performance} Our final experiment assesses how we incorporate the fluent objective $O$ and weights in $P$ into the tactical decision-making optimisation model presented in \cite{beal2020optimising} and evaluate how this improves team performance to help them meet their objective. To test this we simulate an entire season week by week and apply our model to a single team in the simulation. After each game-week we simulate the remaining games and recalculate $O$ and $P$ as outlined in Figure \ref{fig:flowchart}. We then compare our results using the new model across a simulated season against a simulation where we do not use the $O$ and $P$. We show the results from this when running separate simulations for a set of different teams\footnote{We use the bottom 8 teams in the 2018/19 EPL season to show we can improve their performance.} (the team we use is the only team using the new model in each simulation) in Figure \ref{fig:boost}. We show the average difference in the mean-expected finishing position from the distribution of each team that we run our season simulation for, both using the new model and without. \begin{figure}[h!] \centering \begin{tikzpicture}[thick,scale=1, every node/.style={scale=0.75}] \begin{axis}[ xbar, xmin=0,xmax=4, xlabel={Average Difference in Final Position}, bar width=0.25cm, symbolic y coords= {With $P$ and $O$}, Without $P$ and $O$}, ytick=data, width=\columnwidth-45, height=\columnwidth-155, enlarge y limits={abs=0.5cm}, legend style={at={(0.675,0.05)},anchor=south west} ] \addplot[pattern=vertical lines, pattern color=red] coordinates { (3.735375,{With $P$ and $O$}) (0.83175,Without $P$ and $O$)}; \end{axis} \end{tikzpicture} \caption{\small Payoffs of Real-World vs. Optimised Decisions} \label{fig:boost} \end{figure} This shows how our model can improve the probability of teams' finishing positions and see that on average there is a 2.90 position improvement when using $O$ and $P$ compared to without for our test set of teams. This is achieved as by using $O$ and $P$ teams can add more context to their decisions, also by selecting the optimal tactics each week in the simulation using the model in \cite{beal2020optimising} we would also expect to see a boost to the performance. Below, we highlight an example of the distribution improvement of the simulation when aiming to optimise the performance of Southampton FC (only team using the optimisation model in the simulation). Figure \ref{fig:hist-new} shows the distribution with $O$ and $P$ applied and not applied. \begin{figure}[h!] \centering \begin{tikzpicture} \begin{axis}[ ymin=0, ymax=35, xmin=1, xmax=20, xlabel=Final League Position, ylabel=Probability (\%), width=\columnwidth-40, height=\columnwidth-120, y label style={at={(axis description cs:0.15,.5)},anchor=south}, smooth, legend pos=north west, ] \addplot[color=red,line width=0.25mm] coordinates{ (1,0.0) (2,0.0) (3,0.0) (4,0.1) (5,0.1) (6,0.5) (7,0.7) (8,1.0) (9,1.9) (10,3.2) (11,5.4) (12,7.0) (13,8.8) (14,14.7) (15,17.1) (16,15.6) (17,13.6) (18,6.4) (19,3.6) (20,0.3) }\closedcycle; \addplot[color=blue,line width=0.25mm] coordinates{ (1,0.0) (2,0.0) (3,1.2) (4,1.7) (5,5.4) (6,8.5) (7,12.5) (8,13.2) (9,13.5) (10,11.5) (11,8.7) (12,6.3) (13,5.2) (14,4.7) (15,3.9) (16,2.3) (17,0.7) (18,0.7) (19,0.0) (20,0.0) }\closedcycle; \addplot[red,sharp plot,update limits=false,line width=0.25mm, dashed] coordinates {(14.564, 0) (14.564, 100)}; \addplot[blue,sharp plot,update limits=false,line width=0.25mm, dashed] coordinates {(9.425, 0) (9.425, 100)}; \node[text=blue] at (102.5,320) {\small $\mu=9.4$}; \node[text=red] at (155,320) {\small $\mu=14.6$}; \addlegendentry{\small Without} \addlegendentry{\small With} \end{axis} \end{tikzpicture} \caption{Example League Outcome Probability Distribution for Southampton FC in 2018/19.} \label{fig:hist-new} \end{figure} As we can see from the example shown in Figure \ref{fig:hist-new}, we can use the fluent objectives to help teams boost their probabilities of winning games that matter, and thus boost their expected finishing position, increasing the mean of the expected finishing distribution by up to 35.6\%. We see similar improvements to this across our test set of teams. In the next section, we will further discuss these results, the real-world implications and some further findings. \section{Discussion} One interesting finding from further experiments is when we simulate the season with all teams using the model discussed in this paper to select their tactics. When we run this simulation, we find that the results cancels itself out and the final standings are very similar to what we see when we run the simulation without the new fluent objective and prior game weights. We see that there is a boost of under 1 position on average per team when every team uses the model in the same season. This shows that teams can gain a boost in their performance over the season but only if they utilise the game theoretic approaches while all others are not. Another observation we see in our results is when we compare the increase in the positional distribution using the model between the stronger top-half teams and the teams who are in the lower half of the league and aiming to stay in the division. When using the model for the latter, we observe a substantial boost of up to 35.6\% in long-term performance. This may be due to the algorithm helping teams using the new model gain positive results in the closer games at the bottom of the table when playing teams of similar ability and thus preventing them gaining points by taking all 3 for yourself. In turn, higher up the league teams often win the games they are expected to against weaker teams so the performance boost is lower. It is also worth noting that across the season there are also a number of other variables that can affect team decision-making both tactically and off the pitch. As teams re-assess their objectives in the season, there are decisions off the pitch that can help boost their performing as well as the tactical decision optimisation that helps on it. One example is a change in managers/coaches, this is often a measure taken for an underperforming team and can help boost performance. If a team is doing well and wants to push higher up the table or is struggling and needs new players, then during January teams are able to invest money into new players to improve their team and again improve. These types of decisions could be added into the model to help decision makers at clubs subjectively decide when to invest more money or make changes. \section{Conclusions and Future Work} This paper presents a novel model for the long-term tactical decisions that are made in football and helps teams to optimise their decisions by adding more long-term context. We introduce the concept of a \emph{fluent objective} that allows us to re-evaluate team performance and base decisions based on a wider environment. We find that we can build models that are able to predict the final outcome of the table on a regular basis, and then using a MAP estimation to effectively set the fluent objective each week. We also learn from other games that happen in the overall environment and find that this can boost the performance of pay-off models in our multi-step games. Overall, we find that our model can be used for football teams who are looking to improve their overall expected league position (on average improves teams by 2.90 positions) and, show that the concept of a fluent objective can help to optimise long-term performance in a competitive league setting. Due to the success we show when using fluent objectives for an application to football in this paper, in future work we intend to test our approach in other domains. For example, they could be used in security games and UAV swarms as the objective also often change over a given time frame. This testing will help to further verify how the modelling of objectives can aid long-term performance. We also aim to further improve our $P$ weights with applications of the observational learning and reinforcement learning as presented in \cite{borsa2019observational}. Finally, the reinforcement learning techniques presented in \cite{silver2016mastering,matthews2012competing} could be used to further optimise team performance. \begin{acks} We would like to thank the reviewers for their comments. This research is supported by the AXA Research Fund and the EPSRC NPIF doctoral training grant number EP/S515590/1. \end{acks} \clearpage \bibliographystyle{ACM-Reference-Format} \balance
{ "attr-fineweb-edu": 1.651367, "attr-cc_en_topic": 0, "domain": "arxiv" }
BkiUeUs4eIOjRwMTjqoM
\section{Synthetic Argument Data} \label{app:aaac} A synthetically generated {\small AAAC} record, which nicely illustrates the underdetermination of argument reconstruction, with two implicit premises, one distracting statement and a simple (one-step) argument (formatted as presented to the model): \begin{scriptsize}\ttfamily \noindent\textit{source:} It is not the case that Tracy is not an admirer of Fullerton and Tracy has seen La Habra. Plus, if someone loves Chico, then they haven't visited Monterey, owing to the fact that loving Laguna Beach is sufficient for not having visited Monterey. \noindent\textit{reasons:} loving Laguna Beach is sufficient for not having visited Monterey (ref: (2)) \noindent\textit{conjectures:} if someone loves Chico, then they haven't visited Monterey (ref: (4)) \noindent\textit{argdown:}\newline (1) If someone is an admirer of Chico, then they are an admirer of Laguna Beach or a visitor of Stockton.\newline (2) If someone admires Laguna Beach, then they haven't visited Monterey.\newline (3) If someone has visited Stockton, then they haven't visited Monterey.\newline --\newline with generalized dilemma (neg variant) from (1) (2) (3)\newline --\newline (4) If someone admires Chico, then they haven't visited Monterey. \noindent\textit{premises:} If someone is an admirer of Chico, then they are an admirer of Laguna Beach or a visitor of Stockton. (ref: (1)) | If someone admires Laguna Beach, then they haven't visited Monterey. (ref: (2)) | If someone has visited Stockton, then they haven't visited Monterey. (ref: (3)) \noindent\textit{conclusion:} If someone admires Chico, then they haven't visited Monterey. (ref: (4)) \noindent\textit{premises\_form:} (x): Fx -> (G x v H x) (ref: (1)) | (x): G x -> not I x (ref: (2)) | (x): H x -> not I x (ref: (3)) \noindent\textit{conclusion\_form:} (x): F x -> not I x (ref: (4)) \noindent\textit{keys:} F: admirer of Chico | G: admirer of Laguna Beach | H: visitor of Stockton | I: visitor of Monterey \end{scriptsize} \section{Training Set-up} \label{app:training_setup} By interpreting a generative mode as a sequence-to-sequence task, we may translate a multi-angular DeepA2 dataset (e.g., {\small AAAC01}) into a multi-task sequence-to-sequence format, on which a sequence-to-sequence model can be trained. For each record in the multi-angular DeepA2 dataset, we randomly sample 14 modes in accordance with the weights provided in Table~\ref{table:all_generative_modes} and add, for each mode, a corresponding sequence-to-sequence record to the training data. This results, for {\small AAAC01}, in a sequence-to-sequence training dataset with $14\times 16.000$ records. \begin{table}[htbp] \centering \begin{tabularx}{\linewidth}{@{}p{0.20\linewidth}@{}Y@{}Y|p{0.17\linewidth}@{}Y@{}Y|p{0.23\linewidth}@{}Y@{}Y@{}} \toprule mode & w\textsubscript 1 & w\textsubscript{2} & mode & w\textsubscript 1 & w\textsubscript{2} & mode & w\textsubscript 1 & w\textsubscript{2} \\ \midrule \colorbox{colbrew1}{\scriptsize$\mathbf{S\!\leadsto\!{A}}$} & 1. & 1. & \colorbox{colbrew2}{\scriptsize$\mathbf{S\!\leadsto\!{R}}$} & 1. & 1. & \colorbox{colbrew6}{\scriptsize$\mathbf{P\!\leadsto\!{F}}$} & .7 & -- \\ \colorbox{colbrew1}{\scriptsize$\mathbf{S\,R\!\leadsto\!{A}}$} & 1. & 1. & \colorbox{colbrew2}{\scriptsize$\mathbf{S\,J\!\leadsto\!{R}}$} & 1. & 1. & \colorbox{colbrew6}{\scriptsize$\mathbf{P\,C\,O\!\leadsto\!{F}}$} & .7 & -- \\ \colorbox{colbrew1}{\scriptsize$\mathbf{S\,J\!\leadsto\!{A}}$} & 1. & 1. & \colorbox{colbrew2}{\scriptsize$\mathbf{S\,A\!\leadsto\!{R}}$} & 1. & 1. & \colorbox{colbrew7}{\scriptsize$\mathbf{C\!\leadsto\!{O}}$} & .7 & -- \\ \colorbox{colbrew1}{\scriptsize$\mathbf{S\,R\,J\!\leadsto\!{A}}$} & 1. & 1. & \colorbox{colbrew3}{\scriptsize$\mathbf{S\!\leadsto\!{J}}$} & 1. & 1. & \colorbox{colbrew7}{\scriptsize$\mathbf{C\,P\,F\!\leadsto\!{O}}$} & .7 & -- \\ \colorbox{colbrew1}{\scriptsize$\mathbf{R\,J\!\leadsto\!{A}}$} & 1. & 1. & \colorbox{colbrew3}{\scriptsize$\mathbf{S\,R\!\leadsto\!{J}}$} & 1. & 1. & \colorbox{colbrew8}{\scriptsize$\mathbf{P\,F\!\leadsto\!{K}}$} & .7 & -- \\ \colorbox{colbrew1}{\scriptsize$\mathbf{P\,C\!\leadsto\!{A}}$} & 1. & 1. & \colorbox{colbrew3}{\scriptsize$\mathbf{S\,A\!\leadsto\!{J}}$} & 1. & 1. & \colorbox{colbrew8}{\scriptsize$\mathbf{C\,O\!\leadsto\!{K}}$} & .7 & -- \\ \colorbox{colbrew5}{\scriptsize$\mathbf{A\!\leadsto\!{P}}$} & .2 & .2 & \colorbox{colbrew4}{\scriptsize$\mathbf{A\!\leadsto\!{C}}$} & .2 & .2 & \colorbox{colbrew8}{\scriptsize$\mathbf{P\,F\,C\,O\!\leadsto\!{K}}$} & .7 & -- \\ \colorbox{colbrew5}{\scriptsize$\mathbf{F\,K\!\leadsto\!{P}}$} & .7 & -- & \colorbox{colbrew4}{\scriptsize$\mathbf{O\,K\!\leadsto\!{C}}$} & .7 & -- & & & \\ \bottomrule \end{tabularx} \caption{21 generative modes with corresponding weights in {\small AAAC} (w\textsubscript 1) and \emph{EntailmentBank} (w\textsubscript 2) training data.} \label{table:all_generative_modes} \end{table} Our models (base model T5-large with 770M parameters, and pretrained ArgumentAnalyst) are trained with batch-size 2 and learning rate 0.00001. For {\small AAAC01}, eval loss starts to increase at epoch 8; with \emph{EntailmentBank} data, eval loss increases from epoch 2 onwards. \section{Iterative Prediction with Generative Chains} \label{app:gen_chains} Generative chains are implemented with a dynamic dictionary (9 keys, corresp.\ to the dimensions of DeepA2 data), which is initialized with the source text, provides input for the generative modes, and is updated after each generative step with the mode's generated output. Output is generated with beam search decoding and beam width 2. \begin{table}[htb] \centering \begin{small} \begin{tabularx}{\linewidth}{@{}l@{\hspace{5pt}}p{0.75\linewidth}r@{\hspace{3pt}}r@{}} \toprule \# & {mode sequence} & l. & s. \\ \midrule \textbf{1} & \colorbox{colbrew1}{\scriptsize$\mathbf{S\!\leadsto\!{A}}$}\ \colorbox{colbrew2}{\scriptsize$\mathbf{S\!\leadsto\!{R}}$}\ \colorbox{colbrew3}{\scriptsize$\mathbf{S\!\leadsto\!{J}}$} & 3 & 0 \smallskip\\ 2 & \colorbox{colbrew3}{\scriptsize$\mathbf{S\!\leadsto\!{J}}$}\ \colorbox{colbrew2}{\scriptsize$\mathbf{S\!\leadsto\!{R}}$}\ \colorbox{colbrew1}{\scriptsize$\mathbf{S\,J\!\leadsto\!{A}}$} & 3 & 1 \smallskip\\ 3 & \colorbox{colbrew3}{\scriptsize$\mathbf{S\!\leadsto\!{J}}$}\ \colorbox{colbrew2}{\scriptsize$\mathbf{S\!\leadsto\!{R}}$}\ \colorbox{colbrew1}{\scriptsize$\mathbf{S\,R\!\leadsto\!{A}}$} & 3 & 1 \smallskip\\ 4 & \colorbox{colbrew3}{\scriptsize$\mathbf{S\!\leadsto\!{J}}$}\ \colorbox{colbrew2}{\scriptsize$\mathbf{S\!\leadsto\!{R}}$}\ \colorbox{colbrew1}{\scriptsize$\mathbf{R\,J\!\leadsto\!{A}}$} & 3 & 2 \smallskip\\ 5 & \colorbox{colbrew3}{\scriptsize$\mathbf{S\!\leadsto\!{J}}$}\ \colorbox{colbrew2}{\scriptsize$\mathbf{S\,J\!\leadsto\!{R}}$}\ \colorbox{colbrew1}{\scriptsize$\mathbf{R\,J\!\leadsto\!{A}}$} & 3 & 3 \smallskip\\ 6 & \colorbox{colbrew3}{\scriptsize$\mathbf{S\!\leadsto\!{J}}$}\ \colorbox{colbrew2}{\scriptsize$\mathbf{S\,J\!\leadsto\!{R}}$}\ \colorbox{colbrew1}{\scriptsize$\mathbf{S\,R\,J\!\leadsto\!{A}}$} & 3 & 3 \smallskip\\ 7 & \colorbox{colbrew2}{\scriptsize$\mathbf{S\!\leadsto\!{R}}$}\ \colorbox{colbrew3}{\scriptsize$\mathbf{S\,R\!\leadsto\!{J}}$}\ \colorbox{colbrew1}{\scriptsize$\mathbf{R\,J\!\leadsto\!{A}}$} & 3 & 3 \smallskip\\ 8 & \colorbox{colbrew2}{\scriptsize$\mathbf{S\!\leadsto\!{R}}$}\ \colorbox{colbrew3}{\scriptsize$\mathbf{S\,R\!\leadsto\!{J}}$}\ \colorbox{colbrew1}{\scriptsize$\mathbf{S\,R\,J\!\leadsto\!{A}}$} & 3 & 3 \smallskip\\ \textbf{9} & \colorbox{colbrew1}{\scriptsize$\mathbf{S\!\leadsto\!{A}}$}\ \colorbox{colbrew2}{\scriptsize$\mathbf{S\,A\!\leadsto\!{R}}$}\ \colorbox{colbrew3}{\scriptsize$\mathbf{S\,A\!\leadsto\!{J}}$}\ \colorbox{colbrew1}{\scriptsize$\mathbf{R\,J\!\leadsto\!{A}}$} & 4 & 4 \smallskip\\ 10 & \colorbox{colbrew1}{\scriptsize$\mathbf{S\!\leadsto\!{A}}$}\ \colorbox{colbrew2}{\scriptsize$\mathbf{S\,A\!\leadsto\!{R}}$}\ \colorbox{colbrew3}{\scriptsize$\mathbf{S\,A\!\leadsto\!{J}}$}\ \colorbox{colbrew1}{\scriptsize$\mathbf{S\,R\,J\!\leadsto\!{A}}$} & 4 & 4 \smallskip\\ 11 & \parbox[t]{\linewidth}{ \raggedright \colorbox{colbrew1}{\scriptsize$\mathbf{S\!\leadsto\!{A}}$}\ \colorbox{colbrew2}{\scriptsize$\mathbf{S\,A\!\leadsto\!{R}}$}\ \colorbox{colbrew3}{\scriptsize$\mathbf{S\,A\!\leadsto\!{J}}$}\ \colorbox{colbrew1}{\scriptsize$\mathbf{S\,R\,J\!\leadsto\!{A}}$}\ \colorbox{colbrew2}{\scriptsize$\mathbf{S\,A\!\leadsto\!{R}}$}\ \colorbox{colbrew3}{\scriptsize$\mathbf{S\,A\!\leadsto\!{J}}$}\ \colorbox{colbrew1}{\scriptsize$\mathbf{S\,R\,J\!\leadsto\!{A}}$} } & 7 & 8 \smallskip\\ 12 &\parbox[t]{\linewidth}{ \raggedright \colorbox{colbrew1}{\scriptsize$\mathbf{S\!\leadsto\!{A}}$}\ \colorbox{colbrew5}{\scriptsize$\mathbf{A\!\leadsto\!{P}}$}\ \colorbox{colbrew4}{\scriptsize$\mathbf{A\!\leadsto\!{C}}$}\ \colorbox{colbrew6}{\scriptsize$\mathbf{P\!\leadsto\!{F}}$}\ \colorbox{colbrew8}{\scriptsize$\mathbf{P\,F\!\leadsto\!{K}}$}\ \colorbox{colbrew5}{\scriptsize$\mathbf{F\,K\!\leadsto\!{P}}$}\ \colorbox{colbrew1}{\scriptsize$\mathbf{P\,C\!\leadsto\!{A}}$}\ \colorbox{colbrew2}{\scriptsize$\mathbf{S\,A\!\leadsto\!{R}}$}\ \colorbox{colbrew3}{\scriptsize$\mathbf{S\,A\!\leadsto\!{J}}$}} & 9 & 11 \smallskip\\ \textbf{13} &\parbox[t]{\linewidth}{ \raggedright \colorbox{colbrew1}{\scriptsize$\mathbf{S\!\leadsto\!{A}}$}\ \colorbox{colbrew5}{\scriptsize$\mathbf{A\!\leadsto\!{P}}$}\ \colorbox{colbrew4}{\scriptsize$\mathbf{A\!\leadsto\!{C}}$}\ \colorbox{colbrew7}{\scriptsize$\mathbf{C\!\leadsto\!{O}}$}\ \colorbox{colbrew8}{\scriptsize$\mathbf{C\,O\!\leadsto\!{K}}$}\ \colorbox{colbrew4}{\scriptsize$\mathbf{O\,K\!\leadsto\!{C}}$}\ \colorbox{colbrew1}{\scriptsize$\mathbf{P\,C\!\leadsto\!{A}}$}\ \colorbox{colbrew2}{\scriptsize$\mathbf{S\,A\!\leadsto\!{R}}$}\ \colorbox{colbrew3}{\scriptsize$\mathbf{S\,A\!\leadsto\!{J}}$}} & 9 & 11 \smallskip\\ 14 &\parbox[t]{\linewidth}{ \raggedright \colorbox{colbrew1}{\scriptsize$\mathbf{S\!\leadsto\!{A}}$}\ \colorbox{colbrew5}{\scriptsize$\mathbf{A\!\leadsto\!{P}}$}\ \colorbox{colbrew4}{\scriptsize$\mathbf{A\!\leadsto\!{C}}$}\ \colorbox{colbrew7}{\scriptsize$\mathbf{C\!\leadsto\!{O}}$}\ \colorbox{colbrew8}{\scriptsize$\mathbf{C\,O\!\leadsto\!{K}}$}\ \colorbox{colbrew4}{\scriptsize$\mathbf{O\,K\!\leadsto\!{C}}$}\ \colorbox{colbrew1}{\scriptsize$\mathbf{P\,C\!\leadsto\!{A}}$}\ \colorbox{colbrew5}{\scriptsize$\mathbf{A\!\leadsto\!{P}}$}\ \colorbox{colbrew4}{\scriptsize$\mathbf{A\!\leadsto\!{C}}$}\ \colorbox{colbrew6}{\scriptsize$\mathbf{P\!\leadsto\!{F}}$}\ \colorbox{colbrew8}{\scriptsize$\mathbf{P\,F\!\leadsto\!{K}}$}\ \colorbox{colbrew5}{\scriptsize$\mathbf{F\,K\!\leadsto\!{P}}$}\ \colorbox{colbrew1}{\scriptsize$\mathbf{P\,C\!\leadsto\!{A}}$}\ \colorbox{colbrew2}{\scriptsize$\mathbf{S\,A\!\leadsto\!{R}}$}\ \colorbox{colbrew3}{\scriptsize$\mathbf{S\,A\!\leadsto\!{J}}$} }\vspace{1pt} & 15 & 20 \smallskip\\ 15 &\parbox[t]{\linewidth}{ \raggedright \colorbox{colbrew1}{\scriptsize$\mathbf{S\!\leadsto\!{A}}$}\ \colorbox{colbrew5}{\scriptsize$\mathbf{A\!\leadsto\!{P}}$}\ \colorbox{colbrew4}{\scriptsize$\mathbf{A\!\leadsto\!{C}}$}\ \colorbox{colbrew6}{\scriptsize$\mathbf{P\!\leadsto\!{F}}$}\ \colorbox{colbrew7}{\scriptsize$\mathbf{C\,P\,F\!\leadsto\!{O}}$}\ \colorbox{colbrew8}{\scriptsize$\mathbf{P\,F\,C\,O\!\leadsto\!{K}}$}\ \colorbox{colbrew5}{\scriptsize$\mathbf{F\,K\!\leadsto\!{P}}$}\ \colorbox{colbrew4}{\scriptsize$\mathbf{O\,K\!\leadsto\!{C}}$}\ \colorbox{colbrew1}{\scriptsize$\mathbf{P\,C\!\leadsto\!{A}}$}\ \colorbox{colbrew2}{\scriptsize$\mathbf{S\,A\!\leadsto\!{R}}$}\ \colorbox{colbrew3}{\scriptsize$\mathbf{S\,A\!\leadsto\!{J}}$} } & 11 & 18 \smallskip\\ 16 & \parbox[t]{\linewidth}{ \raggedright \colorbox{colbrew1}{\scriptsize$\mathbf{S\!\leadsto\!{A}}$}\ \colorbox{colbrew5}{\scriptsize$\mathbf{A\!\leadsto\!{P}}$}\ \colorbox{colbrew4}{\scriptsize$\mathbf{A\!\leadsto\!{C}}$}\ \colorbox{colbrew6}{\scriptsize$\mathbf{P\!\leadsto\!{F}}$}\ \colorbox{colbrew7}{\scriptsize$\mathbf{C\,P\,F\!\leadsto\!{O}}$}\ \colorbox{colbrew6}{\scriptsize$\mathbf{P\,C\,O\!\leadsto\!{F}}$}\ \colorbox{colbrew8}{\scriptsize$\mathbf{P\,F\,C\,O\!\leadsto\!{K}}$}\ \colorbox{colbrew5}{\scriptsize$\mathbf{F\,K\!\leadsto\!{P}}$}\ \colorbox{colbrew4}{\scriptsize$\mathbf{O\,K\!\leadsto\!{C}}$}\ \colorbox{colbrew1}{\scriptsize$\mathbf{P\,C\!\leadsto\!{A}}$}\ \colorbox{colbrew2}{\scriptsize$\mathbf{S\,A\!\leadsto\!{R}}$}\ \colorbox{colbrew3}{\scriptsize$\mathbf{S\,A\!\leadsto\!{J}}$} } & 12 & 21 \\ \bottomrule \end{tabularx} \end{small} \caption{16 generative chains (without final formalization sub-sequences) evaluated in this study. The illustrative chains highlighted in the main paper are \#1 (straight), \#9 (hermeneutic cycle), and \#13 (logical streamlining).} \label{table:all_generative_chains_app} \end{table} Table~\ref{table:all_generative_chains_app} displays all generative chains we resort to in this study, all of which are used in the \textit{first experiment}. The \textit{second experiment} makes use of chains 1--11. The \textit{third experiment} deploys chains 1--13. \section*{Acknowledgments} We're indebted to Christian Voigt for his critical and constructive feedback throughout the DeepA2 project. % \section{Introduction} Argumentative text analysis is an interpretation method for clarifying arguments \citep{Fisher:2004cq}. Being studied in argumentation theory, logic, or epistemology, it is widely taught and applied as a key critical thinking skill in, e.g., law \citep{Alexy:1989rh}, the humanities \citep{Bruce:2011iy}, social sciences \citep{Fairclough2012}, policy advice \citep{HanssonHirschHadornRaUBook2016}, or public debate \citep{Beck_Neupane_Carroll_2019}. This paper presents a computational approach for \emph{deep argument analysis}, i.e., for \textbf{reconstructing natural-language arguments} from a given text, as in the following example \citep[adapted from][]{sep-stem-cells}: \noindent \begin{tabular}{@{}c@{}c@{}c@{}} \small{\textbf{source text}}&$\leadsto$&\small{\textbf{reconstructed argument}}\\ \begin{minipage}{.48\linewidth} \fontsize{9}{10}\selectfont It is unethical to destroy human embryos. The most basic argument supporting this claim just stresses that it is wrong to intentionally kill innocent human beings. \end{minipage}&& \begin{minipage}{.47\linewidth} \fontsize{9}{10}\selectfont (P1) It is impermissible to kill innocent human beings. (P2) The human embryo is an innocent human being. (C) \textsc{Thus}: It is impermissible to kill the human embryo. \end{minipage} \end{tabular}\medskip The literature on argument reconstruction \citep[cf.][]{Feldman1998,Scholz2000,Lau:2011st,BowllKemp2014,Brun2014-BRURAF,BrunBetzRaU2016} characterizes deep argument analysis as: \begin{itemize} \setlength{\itemsep}{0mm}\setlength{\parskip}{0mm} \item a complex task involving a variety of \textbf{sub-tasks}, such as identifying reasons and conclusions in a text, formalizing sentences, checking validity of an inference, logical streamlining, or explicating implicit premises. \item a non-conservative, \textbf{creative task} that goes beyond mere text annotation and essentially generates a new, more transparent text. \item an \textbf{iterative process} through which reconstructions are built and revised step-by-step, and the solution space is gradually explored. \item a hermeneutical task, guided by the \textbf{principle of charity}, which urges one to come up with an interpretation (reconstruction) as strong and plausible as possible. \item assuming a \textbf{normative background theory} about what constitutes a strong and plausible argument in the first place. \item being affected by \textbf{severe underdetermination}, both in terms of the process and the final outcome; in particular, there typically exist rival, yet equally legitimate reconstructions of one and the same text. \end{itemize} Given these special characteristics, \emph{deep argument analysis} poses many challenges for machine models of natural language understanding. In this paper, we introduce a novel modular modeling approach for analysing complex argumentation that builds on recent pre-trained text2text transformers \cite{raffel2020exploring}. Our approach -- DeepA2 (illustrated in Figure~\ref{fig:basic_design}) -- works by systematically decomposing a complex reconstruction problem to smaller text2text sub-tasks (see Section~\ref{sec:framework}), which allows for emulating the types of interpretation strategies and heuristics studied in argument theory. Referring to the different components of a comprehensive argumentative analysis, we may also define tailor-made metrics for assessing argument reconstructions. To demonstrate the benefits of our approach, we construct a new argumentation dataset ({\small AAAC}) that exhibits several complex \emph{interpretive dimensions}, show how to map other existing datasets into our framework (Section~\ref{sec:datasets}), and train and evaluate our main model, referred to as \textbf{ArgumentAnalyst}, within DeepA2 (Section~\ref{sec:experiments}). \begin{figure*} \begin{center} \input{figs/basic_design_tacl} \end{center} \caption{Example text-to-text tasks for deep argument analysis, defined by DeepA2.} \label{fig:basic_design} \end{figure*} Our empirical results show: 1. ArgumentAnalyst generates -- out-of-domain -- semantically meaningful argument reconstructions, 70\% of which are logically valid. By pooling alternative reconstructions, virtually every source text in the synthetic dataset can be reconstructed as a valid argument. 2. Modular generation chains which emulate iterative reconstruction strategies are highly successful: they yield, in particular, a more coherent interpretation of an argumentative text, exploit the text more thoroughly, and generally outperform one-step generation as soon as problems become difficult. 3. ArgumentAnalyst outperforms \emph{EntailmentWriter} \citep{dalvi2021explaining} on difficult \emph{EntailmentBank} problems with respect to telling apart relevant premises from distractors. 4. ArgumentAnalyst generates reliable higher-order evidence \citep{christensen2010higher} which can be used for diagnosing logical fallacies -- despite the fact that ArgumentAnalyst is maximally charitable and is trained to reconstruct any input whatsoever as a logically valid argument, even if the input argument, taken at face value, \emph{is} painstakingly fallacious. In concluding this paper, we sum-up and interpret these findings as general vindication of DeepA2's modular, multi-angular design (Section~\ref{sec:conclusion}). \section{Related Work} Taking \textbf{transformers as soft reasoners}, recent work, pioneered by \citet{Clark2020_TransSoftReas}, has shown that pre-trained language models (PTLMs) possess basic deductive and abductive reasoning capabilities on diverse domains \citep{banerjee2020self,betz2020critical,Bostrom2021FlexibleOF}, but are equally prone to fallacies and biases \citep{kassner2020negated,talmor2020olmpics}. Besides drawing the correct conclusion, transformers are able to generate correct reasoning chains that justify an answer, which in turn further increases answer accuracy \citep{saha2020prover,tafjord2020proofwriter,gontier2020measuring,Saha2021multiPRoverGM,dalvi2021explaining}. \textbf{Neural semantic parsing} uses sequence models to \emph{formalize} natural language sentences \citep{Kamath2019ASO}. \citet{Shin2021ConstrainedLM} show that PTLMs are zero-shot parsers, and that intermediate steps which rephrase and streamline the original input before parsing it to a formal language improve accuracy. \textbf{Argument mining} is an active research field that studies computational methods for retrieving argumentative components from a text corpus \citep{Wachsmuth2017BuildingAA,Moens:2018zt,Potthast2019ArgumentSA,LawrenceReed2020}. Recently, work in this field has started to use PTLMs: \citet{EinDor2020CorpusWA} and \citet{Gretz2020ALD} succeed in retrieving relevant pro- or con-arguments for a given topic from a large corpus with a fine-tuned BERT model \citep{Devlin2019BERTPO}. Using BERT, \citet{BarHaim2020FromAT} map argumentative texts to key points that succinctly summarize the argument's gist. \citet{Akiki2020ExploringAR} explore abstractive argument retrieval by means of text generation with GPT2 \citep{Radford2019}. Similarly, \citet{Syed2021GeneratingIC} deploy BART \citep{lewis2019bart} to generate conclusions of argumentative texts on a challenging corpus compiled from Reddit and various online debate corpora. \citet{Rodrigues2020ReproductionAR}, revisiting the argument comprehension task \citep{HabernalEtAl2014,Habernal2018TheAR}, demonstrate that identifying implicit premises -- and deep argument analysis \emph{a fortiori} -- remains a hard, unsolved task. Recently, \citet{Chakrabarty2021ImplicitPG} have shown that augmenting training data with discourse-aware commonsense knowledge improves the plausibility of automatically identified implicit premises. Such a knowledge-driven perspective is orthogonal to, and may eventually complement the logical approach adopted in this paper. \section{Framework} \label{sec:framework} \subsection{Problem Definition} \label{subsec:problem} Deep argument analysis of a given text seeks to answer the following \textbf{central question}: Can we make sense of the text as a presentation of a rational argument? And if so, what exactly is the argument; and how precisely is it related to the text? In carrying out a deep argument analysis, one explicates, rephrases and rebuilds -- even repairs -- the text's argument in one's own words. That is why deep argument analysis is also referred to as \emph{rational reconstruction} \citep[cf.][]{sep-carnap-suppD}. The reconstructed argument forms, together with details about its logical properties and about its relation to the source text, a \emph{comprehensive argumentative analysis} of a text. The latter can be seen as an interpretative hypothesis that is abductively inferred from a source text by means of an inference to the best explanation. Here is another example that illustrates how far a reconstruction may deviate from the original text that presents the argument \citep[adapted from][]{BrunBetzRaU2016}: \noindent \begin{tabular}{@{}c@{}c@{}c@{}} \small{\textbf{source text}}&$\leadsto$&\small{\textbf{reconstructed argument}}\\ \begin{minipage}{.48\linewidth} \fontsize{9}{10}\selectfont So, the researcher's central dilemma exists in an especially acute form in psychology: either the animal is not like us, in which case there is no reason for performing the experiment; or else the animal is like us, in which case we ought not to perform on the animal an experiment that would be considered outrageous if performed on one of us. \end{minipage}&& \begin{minipage}{.47\linewidth} \fontsize{9}{10}\selectfont (P1) If the animal is not like us, it is wrong to perform the experiment. (P2) If the animal is like us, it is wrong to perform the experiment. (C) \textsc{Thus} (with \emph{classical di\-lemma}): It is wrong to perform the experiment. \end{minipage} \end{tabular}\medskip A compelling argumentative analysis yields (i) a rational argument that is (ii) closely related to the source text. Deep argument analysis is, accordingly, guided by a \textbf{dual goal} \citep[cf.][]{BrunBetzRaU2016}. An argument reconstruction should both be \begin{itemize} \setlength{\itemsep}{0mm}\setlength{\parskip}{0mm} \item[(i)] \textbf{systematically correct}, i.e., the reconstructed argument itself is, e.g., transparent, deductively valid, non-circular, or doesn't contain irrelevant premises; and \item[(ii)] \textbf{exegetically adequate}, i.e., the reconstructed argument accounts for the original text, because, e.g., its premises merely reformulate parts of the text, or because its overall inferential structure can be traced within the source text. \end{itemize} The fact that there typically exists -- regarding a specific text -- a trade-off between these two goals is one major reason for the underdetermination of deep argument analysis and the plurality of legitimate reconstructions of a given text \citep[cf.][]{BrunBetzRaU2016}. Against this background, we may finally define the problem of \begin{description} \item[Deep artificial argument analysis:] Describe, analyse and implement an effective computational system for deep argument analysis! \end{description} \subsection{Multi-angular Data} \label{subsec:multi-angle} The DeepA2 framework is built upon a \emph{multi-angular} data structure \citep{TafjordClark2021GPQA} whose dimensions represent the essential components of a comprehensive argumentative analysis (see Section~\ref{subsec:problem}). Structured argumentative data is rendered as plain text \citep[cf.][]{Voigt2014}. The different data dimensions, which are related as shown in Figure~\ref{fig:angles01}, are (with an illustrating example): \begin{small} \begin{description} \setlength{\itemsep}{0mm}\setlength{\parskip}{0mm} \item[argument source text (\small S)] \ \newline It is unethical to destroy human embryos. The basic argument supporting this claim just stresses that it is wrong to intentionally kill innocent human beings. \item[verbatim reason statements in source text (\small R)]\ \newline it is wrong to intentionally kill innocent human beings (ref: (1)) \item[verbatim conjectures in the source text (\small J)]\ \newline It is unethical to destroy human embryos (ref: (3)) \item[argument reconstruction (\small A)] {\ \newline (1) It is impermissible to kill innocent human beings.\newline (2) The human embryo is an innocent human being.\newline -- with hypothetical syllogism from (1) (2) --\newline (3) It is impermissible to kill the human embryo.} \item[premises of the reconstructed argument (\small P)]\ \newline It is impermissible to kill innocent human beings $|$ The human embryo is an innocent human being \item[final conclusion of reconstr.\ argument (\small C)]\ \newline It is impermissible to kill the human embryo \item[formalizations of premises (\small F)]\ \newline (x): F x $\rightarrow$ G x $|$ (x): H x $\rightarrow$ F x \item[formalization of conclusion (\small O)]\ \newline (x): H x $\rightarrow$ G x \item[keys for the formalizations' constants (\small K)]\ \newline F: innocent human being $|$ G: must not be killed $|$ H: human embryo \end{description} \end{small} Each record in a DeepA2 dataset contains a source text plus a legitimate comprehensive argumentative analysis, which is, given underdetermination, not necessarily the only compelling reconstruction of the text; moreover, a dataset \emph{may} contain different records with one and the same source text analysed in several ways. So, for example, an alternative, equally legitimate argument reconstruction of the above source text (\textbf{\small S}) may read: \begin{small} \begin{description} \setlength{\itemsep}{0mm}\setlength{\parskip}{0mm} \item[argument reconstruction (\small A)] {\ \newline (1) If it is wrong to kill innocent human beings, then it is wrong to kill a human embryo.\newline (2) It is wrong to kill innocent human beings.\newline -- with modus ponens from (1) (2) --\newline (3) It is wrong to kill a human embryo.} \end{description} \end{small} Beyond this structural and functional characterization, DeepA2 is agnostic about the nature and origin of the argumentative data. Synthetically generated, automatically retrieved, manually created datasets as well as translations of other databases are all compatible with the framework and can be used side by side. \begin{figure}[tbp] \centering \input{figs/tikz_angles01} \vspace{-25pt} \caption{Relationships between dimensions of the multi-angular argumentative data.} \label{fig:angles01} \end{figure} \subsection{Generative Modes and Chains} \label{subsec:generative_modes} Given DeepA2's multi-dimensional data structure described in the previous section, a \textbf{generative mode} maps data from some input dimensions to a target dimension. For example, the mode \colorbox{colbrew1}{\scriptsize$\mathbf{S\!\leadsto\!{A}}$}\ takes a source text (\textbf{\small S}) as input and outputs an argument reconstruction (\textbf{\small A}), the mode \colorbox{colbrew1}{\scriptsize$\mathbf{R\,J\!\leadsto\!{A}}$}\ reconstructs the argument (\textbf{\small A}) given the verbatim reasons (\textbf{\small R}) and conjectures (\textbf{\small J}). All in all, we define and investigate 21 different generative modes (see Appendix~\ref{app:training_setup}). Every mode represents a task on which a text-to-text model can be trained. By taking some mode's output as another mode's input, modes can be concatenated into \textbf{generative chains}. For example, the output of modes \colorbox{colbrew2}{\scriptsize$\mathbf{S\!\leadsto\!{R}}$}\ and \colorbox{colbrew3}{\scriptsize$\mathbf{S\!\leadsto\!{J}}$}\ (reasons and conjectures from source) can be fed into mode \colorbox{colbrew1}{\scriptsize$\mathbf{R\,J\!\leadsto\!{A}}$}\ to reconstruct an argument. Such generative chains allow us to emulate different strategies (heuristics) for analysing a given argumentative text (see Appendix~\ref{app:gen_chains} for technical details). Three generative chains which model distinct interpretative strategies, taking a source text (\textbf{\small S}) as sole input, are: \begin{description} \setlength{\itemsep}{0mm}\setlength{\parskip}{0mm} \item[straight]\ \newline \colorbox{colbrew1}{\scriptsize$\mathbf{S\!\leadsto\!{A}}$}\ \colorbox{colbrew2}{\scriptsize$\mathbf{S\!\leadsto\!{R}}$}\ \colorbox{colbrew3}{\scriptsize$\mathbf{S\!\leadsto\!{J}}$} \raggedright \item[hermeneutic cycle]\ \newline \colorbox{colbrew1}{\scriptsize$\mathbf{S\!\leadsto\!{A}}$}\ \colorbox{colbrew2}{\scriptsize$\mathbf{S\,A\!\leadsto\!{R}}$}\ \colorbox{colbrew3}{\scriptsize$\mathbf{S\,A\!\leadsto\!{J}}$}\ \colorbox{colbrew1}{\scriptsize$\mathbf{R\,J\!\leadsto\!{A}}$} \raggedright \item[logical streamlining]\ \newline \colorbox{colbrew1}{\scriptsize$\mathbf{S\!\leadsto\!{A}}$}\ \colorbox{colbrew5}{\scriptsize$\mathbf{A\!\leadsto\!{P}}$}\ \colorbox{colbrew4}{\scriptsize$\mathbf{A\!\leadsto\!{C}}$}\ \colorbox{colbrew7}{\scriptsize$\mathbf{C\!\leadsto\!{O}}$}\ \colorbox{colbrew8}{\scriptsize$\mathbf{C\,O\!\leadsto\!{K}}$}\ \colorbox{colbrew4}{\scriptsize$\mathbf{O\,K\!\leadsto\!{C}}$}\ \colorbox{colbrew1}{\scriptsize$\mathbf{P\,C\!\leadsto\!{A}}$}\ \colorbox{colbrew2}{\scriptsize$\mathbf{S\,A\!\leadsto\!{R}}$}\ \colorbox{colbrew3}{\scriptsize$\mathbf{S\,A\!\leadsto\!{J}}$} \raggedright \end{description} While the chain \emph{straight}, where no output ever serves as input to another mode, represents a simple baseline, \emph{hermeneutic cycle} and \emph{logical streamlining} mimic prominent, equally-named methods in argument analysis \citep[cf.][]{BowllKemp2014,BrunBetzRaU2016}. One goes through a hermeneutic cycle, generally speaking, if one revisits a text in view of its previous interpretation, as, for example, in steps \colorbox{colbrew2}{\scriptsize$\mathbf{S\,A\!\leadsto\!{R}}$}\ \colorbox{colbrew3}{\scriptsize$\mathbf{S\,A\!\leadsto\!{J}}$}, where the source text (\textbf{\small S}) is re-interpreted (identifying reason statements and conjectures) given the previously reconstructed argument (\textbf{\small A}), so as to subsequently re-reconstruct the argument itself (step \colorbox{colbrew1}{\scriptsize$\mathbf{R\,J\!\leadsto\!{A}}$}). To logically streamline a reconstruction means to rephrase its conclusion or premises in order to make their logico-semantic structure more transparent. Such semantic clarification can be emulated by (i) formalizing a statement (e.g., \colorbox{colbrew4}{\scriptsize$\mathbf{A\!\leadsto\!{C}}$}\ \colorbox{colbrew7}{\scriptsize$\mathbf{C\!\leadsto\!{O}}$}\ \colorbox{colbrew8}{\scriptsize$\mathbf{C\,O\!\leadsto\!{K}}$}) and (ii) using the keys (\textbf{\small K}) to retrieve the original statement from the generated logical formulas (such as in \colorbox{colbrew4}{\scriptsize$\mathbf{O\,K\!\leadsto\!{C}}$}), from which the argument can be re-built (step \colorbox{colbrew1}{\scriptsize$\mathbf{P\,C\!\leadsto\!{A}}$}). For evaluation, we append to each generative chain the following sub-chain that formalizes the reconstructed argument: \begin{description} \item[formalization]\ \newline \colorbox{colbrew5}{\scriptsize$\mathbf{A\!\leadsto\!{P}}$}\ \colorbox{colbrew4}{\scriptsize$\mathbf{A\!\leadsto\!{C}}$}\ \colorbox{colbrew6}{\scriptsize$\mathbf{P\!\leadsto\!{F}}$}\ \colorbox{colbrew7}{\scriptsize$\mathbf{C\,P\,F\!\leadsto\!{O}}$}\ \colorbox{colbrew8}{\scriptsize$\mathbf{P\,F\,C\,O\!\leadsto\!{K}}$} \raggedright\vspace{1mm} \end{description} A generative chain can be construed as hypergraph on the dimensions of DeepA2's multi-angular datasets, with each of its modes representing a directed hyper-edge. Summing up the number of input dimensions (except \textbf{\small S}) over all modes yields a simple graph centrality measure, which gauges a chain's sophistication. Thus, \emph{straight}, \emph{hermeneutic cycle} and \emph{logical streamlining} display a sophistication of 0, 4, and 11, respectively. \subsection{Metrics} \label{subsec:metrics} As discussed in Section~\ref{subsec:problem}, an argument reconstruction should both be sound and make sense of the text to-be-interpreted. In line with the dual goal of argument analysis, we propose metrics both for the systematic correctness and for the exegetic adequacy of a given analysis. The following metrics measure the degree to which a given generated argument is \emph{systematically correct} \begin{description} \setlength{\itemsep}{0mm}\setlength{\parskip}{0mm} \item[\small SYS-PP] 1 if the argument is not a \emph{petitio principii} (i.e., if no premise is identical with its final conclusion), 0 otherwise; \item[\small SYS-RP] 1 if the argument has no \emph{redundant premises} (i.e., if no premise occurs more than once), 0 otherwise; \item[\small SYS-RC] 1 if the argument has no \emph{redundant conclusions} (i.e., if no conclusion -- intermediary or final -- occurs more than once), 0 otherwise; \item[\small SYS-US] 1 if all statements in the argument other than the final conclusion are explicitly \emph{used in an inference}, 0 otherwise; \item[\small SYS-SCH] ratio of sub-arguments which correctly instantiate the explicitly stated \emph{inference scheme} (e.g., hypothetical syllogism); \item[\small SYS-VAL] 1 if the argument is \emph{globally valid} (i.e., if the final conclusion deductively follows from the premises), 0 otherwise; \end{description} All six systematic metrics can be computed automatically ({\small SYS-SCH} tries to parse the argument based on the inference schemes and templates used to construct the synthetic dataset in the first place; {\small SYS-VAL} passes the model-generated formalizations of premises and conclusion to a symbolic theorem prover \citep{de2008z3}; and the remaining metrics check for string identity). Whereas systematic metrics apply primarily to the generated argument (\textbf{\small A}), a reconstruction's interpretative adequacy will also depend on how reasons (\textbf{\small R}) and conjectures (\textbf{\small J}) coherently link the argument's components to the original text. As a first set of \emph{exegetic metrics}, we thus propose \begin{description} \setlength{\itemsep}{0mm}\setlength{\parskip}{0mm} \item[\small EXE-MEQ] 1 if the reasons and conjectures are \emph{mutually exclusive verbatim quotes} from the source text, 0 otherwise; \item[\small EXE-RSS] semantic similiarity \citep[BLEURT, see][]{sellam2020bleurt} of each reason statement and its counterpart premise in the reconstructed argument (if such exists, -1 otherwise); \item[\small EXE-JSS] semantic similiarity (see {\small EXE-RSS}) of each conjecture statement and its counterpart in the reconstructed argument (if such exists, -1 otherwise). \end{description} Each source text presents (more or less faithfully) an underlying target argument, which in turn marks some of the text's statements as `target' reasons, others as `target' conjectures. The following two metrics assess the degree to which a comprehensive argumentative analysis correctly predicts (\textbf{\small R}, \textbf{\small J}) those target reasons and conjectures. \begin{description} \setlength{\itemsep}{0mm}\setlength{\parskip}{0mm} \item[\small EXE-PPR] predictive performance (F1-score) for identifying (target) reason statements in the source text; \item[\small EXE-PPJ] predictive performance (F1-score) for identifying (target) conjecture statements in the source text. \end{description} An argument's final conclusion may be implicit or explicit in a given text. The ability to fully exploit a text can be measured by verifying whether the reconstructed argument's final conclusion is implicit (= prediction) if and only if the target argument's one is. \begin{description} \setlength{\itemsep}{0mm}\setlength{\parskip}{0mm} \item[\small EXE-TE] text exploitation, as measured by ability (F1-score) to reconstruct arguments with explicit final conclusions (prediction) if and only if the target final conclusions are explicit. \end{description} \subsection{Models} \label{subsec:models} Any text-to-text language model is compatible with the proposed DeepA2 framework. We refer to models used within the framework as \textbf{ArgumentAnalyst}. In this study, we train and evaluate the transformer model T5 \citep{raffel2020exploring} with 770M parameters as implemented by \cite{wolf-etal-2020-transformers}. \subsection{Limitations} In the DeepA2 framework, arguments are reconstructed from relatively short and isolated texts, disregarding both the broader context of the argument and domain-specific background knowledge. This limits the framework, as presented here, in important ways: Implicit premises that are explicated in an argument reconstruction can neither be checked for plausibility nor for agreement with the author's broader convictions. In addition, the framework cannot assess an argument's dialectic function in a wider debate. It seems worthwhile to explore according extensions of the framework in future research. \section{Datasets} \label{sec:datasets} For the experiments reported below, we synthetically create two artificial argument analysis corpora that comply with the DeepA2 framework (see also Appendix~\ref{app:aaac}): \textbf{\small AAAC01} and \textbf{\small AAAC02}. In addition, we translate the synthetic \emph{RuleTaker} \citep{Clark2020_TransSoftReas} and the manually compiled \emph{EntailmentBank} \citep{dalvi2021explaining} datasets into our framework. In argument analysis, one proceeds \emph{from} a source text \emph{to} its reconstruction. Creating the synthetic corpora, we reverse-engineer this process: \emph{Step 1.} We sample, first of all, a possibly complex argument (\textbf{\small A}) from a set of valid inference schemes. In doing so, we use a multi-step templating strategy \citep[inspired by][]{betz2020critical} to translate symbolic forms into natural language schemes (which were generated by local domain experts) and to substitute natural language terms for placeholders. Premises (\textbf{\small P}), conclusion (\textbf{\small C}) and their formalization (\textbf{\small F, O, K}) are side-products of such a construction of an argument. \emph{Step 2.} Given the fully explicit argument (\textbf{\small A}), we compose a text (\textbf{\small S}) that presents the argument in a more or less transparent and faithful way. Such text creation involves: rendering the argument tree as a linear story, leaving out premises or conclusions (implicit premises and conclusions), inserting irrelevant material (distractors), using templates that obfuscate the logical form of a sentence, limiting the use of premise and conclusion indicators (such as ``therefore''), applying rule-based and automatic paraphrasing. In composing the argumentative text (\textbf{\small S}), we may record its reasons (\textbf{\small R}) and conjectures (\textbf{\small J}). Given the synthetic and controlled nature of our dataset, which involved eliciting rule templates from a group of local domain experts, all data is assumed to be correct by \emph{construction}. As an additional check of correctness on the logic of our examples, we ran a symbolic theorem prover \citep{de2008z3} over the argument formalizations to verify their validity. To ensure the fluency of the underlying language templates, all templates were hand verified by the authors. Our two datasets {\small AAAC01} and {\small AAAC02} differ in the following ways: \begin{enumerate} \setlength{\itemsep}{0mm}\setlength{\parskip}{0mm} \item predicates and names are sampled from different, disjunct domains (texts are about, e.g., allergies and family relations versus, e.g., badminton and cooking) to test a model's robustness to lexical diversity \citep{RozenShwartzEtAl2019}; \item similarly, {\small AAAC01} applies automatic paraphrasing \cite{Vamsi2021} to the final source text whereas {\small AAAC02} doesn't; \item {\small AAAC02} allows for imprecise renditions of logical formulas, while {\small AAAC01} sticks to plain formulations to test robustness to variations in description of rules. \end{enumerate} Each dataset contains diverse texts and arguments. Broadly speaking, data records may differ in terms of properties of the argument (step 1 above) and properties of the argument's presentation (step 2). Along these two dimensions, we define five homogeneous subsets of the data: \begin{description} \setlength{\itemsep}{0mm}\setlength{\parskip}{0mm} \item[simple inference:] arguments with a single inference step that neither involves negation nor compositional predicates; \item[complex inference:] arguments with four inference steps that heavily rely on syntactically intricate schemes (e.g., transposition, or de Morgan); \item[plain presentation:] all premises and conclusions are explicit in the source text which, in addition, contains no distractors; \item[mutilated presentation:] at least two premises and one conclusion are implicit, while the text contains two distractors and explicitly states the final conclusion; \item[C\&M:] the argument's inference is complex, plus the text contains at least two distractors. \end{description} The \emph{RuleTaker} and \emph{EntailmentBank} datasets contain multi-hop inference trees (\textbf{\small A}). To import these into the DeepA2 framework, we create source texts (\textbf{\small S}) for the given arguments by means of simple templates (such as ``\{\emph{theory}\} All this entails: \{\emph{hypothesis}\}'') and record reasons (\textbf{\small R}) and conjectures (\textbf{\small J}) on the fly. Unlike {\small AAAC} and \emph{EntailmentBank}, \emph{RuleTaker} \citep[as updated in][]{tafjord2020proofwriter} contains an equal share of arguments for which (i) the conclusion follows from the premises, (ii) the conclusion contradicts the premises, (iii) the conclusion is independent of the premises. \section{Experiments and Results} \label{sec:experiments} \paragraph{As first and main experiment} we train our base model (see Section~\ref{subsec:models}) on the {\small AAAC01} corpus, and evaluate the resulting ArgumentAnalyst model out-of-domain on {\small AAAC02}. ArgumentAnalyst undergoes multi-task training on 21 generative modes, which are interpreted as sequence-to-sequence tasks (the training set-up is further described in Appendix~\ref{app:training_setup}). The evaluation of ArgumentAnalyst on {\small AAAC02} proceeds in two steps: (1.) prediction: produces output in accordance with 16 different generative chains (Appendix~\ref{app:gen_chains}); (2.) metrics application: assesses the quality of the generated output by means of the systematic and exegetic metrics of the DeepA2 framework (see Section~\ref{subsec:metrics}). \begin{table*}[htbp] \begin{small} \begin{tabularx}{\linewidth}{l *{12}{Y}} \toprule {} & \multicolumn{6}{c}{\emph{systematic metrics} (\textbf{\small SYS-*})} & \multicolumn{6}{c}{\emph{exegetic metrics} (\textbf{\small EXE-*})} \\ \cmidrule(r){2-7} \cmidrule(r){8-13} chain&\textbf{\small PP}&\textbf{\small RP} & \textbf{\small RC} & \textbf{\small US} & \textbf{\small SCH} & \textbf{\small VAL} & \textbf{\small MEQ} & \textbf{\small RSS} & \textbf{\small JSS} & \textbf{\small PPR} & \textbf{\small PPJ} & \textbf{\small TE} \\ \midrule straight & .95 & .97 & .96 & .96 & .33 & .73 & .80 & -.08 & -.10 & .93 & .93 & .63 \\ herm.\ cy. & .95 & .98 & .95 & .93 & .31 & .72 & .82 & .16 & .12 & .93 & .92 & .71 \\ logic.\ str. & .95 & .97 & .96 & .95 & .32 & .72 & .82 & .11 & .00 & .93 & .92 & .69 \\ pooling & 1.0 & 1.0 & 1.0 & 1.0 & .73 & 1.0 & 1.0 & .26 & .29 & .96 & .96 & .97 \\ \textit{oracle} & \textit{1.0} & \textit{1.0} & \textit{1.0} & \textit{1.0} & \textit{1.0} & \textit{1.0} & \textit{1.0} & \textit{.30} & \textit{.37} & \textit{1.0} & \textit{1.0} & \textit{1.0} \\ \bottomrule \end{tabularx} \end{small} \caption{Performance of ArgumentAnalyst on the {\small AAAC02} data as measured by systematic and exegetic metrics. Rows display results for three illustrative generative chains (\emph{straight}, \emph{hermeneutic cycle}, \emph{logical streamlining}), for the item-wise best performing generative chain out of all 16 chains (\emph{pooling}), and for oracle performance (\emph{oracle}), which one obtains by applying the metrics to the target data itself.} \label{table:main_results} \end{table*} Table~\ref{table:main_results} reports the ability of ArgumentAnalyst to generate systematically correct and exegetically adequate argument reconstructions. We obtain similar global results with the three chains \emph{straight}, \emph{hermeneutic cycle}, and \emph{logical streamlining}, whose generated reconstructions mainly differ in terms of internal coherence ({\small EXE-RSS}, {\small EXE-JSS}) and text exploitation ({\small EXE-TE}). However, the different generative chains complement each other, as shown by \emph{pooling}, which does not only outperform individual chains, but nearly attains oracle performance. \begin{table}[htbp] \begin{small} \begin{tabularx}{\linewidth}{l *{5}{Y}} \toprule {} & \multicolumn{2}{c}{\emph{ArgAn}\textsubscript{EB}} & \multicolumn{2}{c}{\emph{ArgAn}\textsubscript{AAAC,EB}} & \emph{EntWr}\\ \cmidrule(l){2-3} \cmidrule(l){4-5} steps & straight & herm.\ cycle & straight & herm.\ cycle & {} \\ \midrule 1 & .863 & .866 & .816 & .871 & .951 \\ 2 & .798 & .815 & .813 & .826 & .886 \\ 3 & .812 & .815 & .826 & .806 & .858 \\ 4 & .757 & .791 & .820 & .822 & .838 \\ $\geq$ 5 & .795 & .811 & .786 & .773 & .742 \\ any & .819 & .830 & .816 & .834 & .879 \\ \bottomrule\end{tabularx} \caption{Predictive performance of ArgumentAnalyst ({\emph{ArgAn}\textsubscript{EB}}, \emph{ArgAn}\textsubscript{AAAC,EB}) and EntailmentWriter (\emph{EntWr}) for identifying reason statements in an input text (metric {\small SYS-PPR}) on the \emph{EntailmentBank task2} dataset.} \label{table:ent_bank} \end{small} \end{table} Moreover, ArgumentAnalyst produces much better reconstructions of simple inferences and plain presentations -- compared to complex inferences and mutilated presentations, i.e., difficult problems (cf.\ Table~\ref{table:main_subsets} in App.~\ref{app:add_results}). In addition, within one and the same subset, substantial differences show up between the three generative chains. Globally speaking, \emph{hermeneutic cycle} outperforms the other two chains for difficult problems. \smallskip \noindent \emph{Is {ArgumentAnalyst} capable of reliable self-evaluation?} We have \textbf{validated the logic metric} ({\small SYS-VAL}), which passes on a self-generated formalization of the reconstructed argument to a theorem prover, in three ways: First of all, ArgumentAnalyst correctly recognizes \emph{target} arguments as valid (with accuracy 92.7\%), which has been verified by running the formalization subchain on target data. Secondly, virtually every generated argument with all-correct scheme instantiations (i.e., {\small SYS-SCH} $=1$) is also -- and correctly -- recognized as logically valid. Thirdly, a manual analysis (\textbf{human-in-the-loop}) of 100 generated arguments with incorrect scheme instantiation (i.e., {\small SYS-SCH} $<1$) reveals a high rate of false negatives: roughly one half of all inferences that are not automatically identified as an instantiation of the given scheme actually do correctly instantiate it. The accordingly \emph{adjusted} global ratio of correct scheme instantiations (Table~\ref{table:main_results}) equals roughly 0.65 (rather than 0.31--0.33), which is consistent with the ratio of logically valid arguments being 0.72--0.73. \smallskip \noindent \emph{Do reconstructed arguments exhibit basic semantic flaws?} Regarding the full dataset, ArgumentAnalyst produces nearly \textbf{flawless argument reconstructions}, committing basic errors (petitio, redundancy, unused statements) only very rarely (Table~\ref{table:main_results}). And even for very difficult problems, two thirds of all generated arguments display no basic flaw whatsoever (Table~\ref{table:main_subsets}, {\small SYS-PP \& SYS-RP \& SYS-RC \& SYS-US}). \smallskip \noindent \emph{Are reconstructed arguments logically valid?} Roughly 70\% of all arguments generated by one of the three chains are logically valid (Table~\ref{table:main_results}). More importantly, though, for virtually every source text in the dataset, there is at least one chain (out of 16) which reconstructs the text as a valid argument (\emph{pooling}). Given that logical validity can be automatically assessed, the \emph{pooled} system may thus \textbf{guarantee to yield a valid reconstruction}. Concerning different problem types (Table~\ref{table:main_subsets}), \emph{hermeneutic cycle} clearly outperforms the other chains as soon as the problem gets difficult. Additional analysis shows that ArgumentAnalyst can also \textbf{cope with underdetermination}, as 68\% of all generated arguments whose final conclusion differs ($\textrm{BLEU} \leq .8$) from the target argument's one -- i.e., arguments that are not reconstructed as expected given the target data -- are still logically valid. \smallskip \noindent \emph{Are the generated interpretations internally coherent?} The generative chain \emph{hermeneutic cycle} yields comprehensive argument reconstructions where premises (\textbf{\small P}) and conclusions (\textbf{\small C}) fit much better to detected reasons (\textbf{\small R}) and conjectures (\textbf{\small J}) than \emph{straight} or \emph{logical streamlining} ({\small EXE-RSS, EXE-JSS}). This holds globally (Table~\ref{table:main_results}), as well as for easy, and for difficult problems (Table~\ref{table:main_subsets}). Note that the \emph{oracle} baseline for metrics {\small EXE-RSS, EXE-JSS} is well below 1, which reflects the fact that source texts may present arguments in highly mutilated ways; it is nearly attained by \emph{pooling} the 16 different generative chains (Table~\ref{table:main_results}). \smallskip \noindent \emph{Can ArgumentAnalyst detect reasons and conjectures, and fully exploit the text?} The evaluation demonstrates that reason/conjecture detection on {\small AAAC02} is a relatively easy task ({\small EXE-PPR, EXE-PPJ}). In contrast, fully exploiting a text (i.e., generating an argument with implicit final conclusion if and only if the underlying target argument has an implicit final conclusion, {\small EXE-TE}) is seemingly more challenging (Table~\ref{table:main_results}). Again, \emph{hermeneutic cycle} achieves best text exploitation, performing, however, clearly below \emph{oracle} baseline -- which may simply reflect the degree of underdetermination in the {\small AAAC02} corpus. \paragraph{In a second experiment} we train two models on the imported \emph{EntailmentBank} (\emph{task1} and \emph{task2}) dataset (see Section~\ref{sec:datasets}), namely: (1.) our base model (T5), which yields Argument\-Analyst\textsubscript{EB}; (2.) the ArgumentAnalyst model pretrained on {\small AAAC02} \citep[resulting in an intermediary pre-training set-up similar to][]{phang2018sentence,Geva2020InjectingNR}, which yields ArgumentAnalyst\textsubscript{AAAC,EB}. Since the \emph{EntailmentBank} data doesn't contain formalizations, we can only train on 14 modes, which are interpreted as sequence-to-sequence tasks (see Appendix~\ref{app:training_setup}). We evaluate the models on \emph{task2} of \emph{EntailmentBank} only, which contains problems with a relatively large number of distractors, and proceed in two steps as before: prediction (with 11 different generative chains) and metrics application. \citet{dalvi2021explaining} report the ability of \emph{EntailmentWriter} (a fine-tuned T5-11b model) to correctly distinguish relevant premises of an argument from distractors in terms of a F1-score, which corresponds to our metric {\small EXE-PPR}. That's why the sole focus in this second experiment is on {\small EXE-PPR}. Table~\ref{table:ent_bank} describes the ability of ArgumentAnalyst models to correctly tell apart relevant premises from mere distractors in the \emph{EntailmentBank task2} dataset for two generative chains (\emph{straight}, which directly outputs reason statements, and \emph{hermeneutic cycle}, which tries to reconstruct the argument first and uses both source text and argument to identify reasons), and compares this with the performance of \emph{EntailmentWriter} \citep[scores from][]{dalvi2021explaining}. The results, shown separately for arguments with a specific number of inference steps, let us draw three conclusions: First, \emph{ArgumentAnalyst} outperforms \emph{EntailmentWriter} on difficult problems with more than 4 inference steps / sub-arguments. Second, using the sophisticated chain \emph{hermeneutic cycle} improves predictive performance compared to the simple \emph{straight} chain. Third, the chain \emph{hermeneutic cycle} (unlike \emph{straight}) generally benefits from intermediary pre-training on {\small AAAC} -- caveat: not so for arguments with more than 4 steps. This latter observation might be due to the fact that the {\small AAAC02} corpus, by construction, doesn't contain arguments with more than 4 steps, so that pre-training biases the model towards shorter arguments. \paragraph{In a third experiment} we explore the following hypothesis: \begin{description} \item[Informative higher-order evidence.] The degree to which ArgumentAnalyst struggles in reconstructing a given argument (presented in the source text) as logically valid is a reliable indicator for whether the original argument is fallacious or not. \end{description} To test this hypothesis, we apply ArgumentAnalyst (trained on {\small AAAC02}, see above) to the \emph{RuleTaker} data as imported into the DeepA2 framework (see Section~\ref{sec:datasets}): ArgumentAnalyst produces -- by means of 13 generative chains -- comprehensive reconstructions, to which the systematic and exegetic metrics are applied. \emph{RuleTaker} contains an equal share of arguments whose conclusions follow from (label=valid), contradict (label=contradiction), or are independent of (label=neutral) the corresponding premises. Now, informative higher-order evidence would allow us to correctly predict these labels. And this is exactly what we observe: First, if reconstructions of one and the same source text which are independently generated with different chains agree (disagree), then the original argument tends to be valid (invalid). Second, by training simple classifiers on our argumentative metrics and further properties of the reconstructions, we robustly achieve a predictive accuracy 10\% above the random baseline. While this is far below the SOTA results of tailor-made RuleTaker \citep{Clark2020_TransSoftReas} and ProofWriter \citep{tafjord2020proofwriter} models on this data, our findings nonetheless confirm the above hypothesis. \section{Conclusion} \label{sec:conclusion} In this paper, we have presented and implemented a multi-angular, modular framework for deep argument analysis (DeepA2). It allows for defining a large variety of generative modes by combining different dimensions of the data. These modes, in turn, can be concatenated into complex generative chains. ArgumentAnalyst -- a text-to-text model set up and trained within the DeepA2 framework -- yields plausible reconstructions of argumentative texts. Our empirical findings vindicate the overall framework and highlight the following \textbf{advantages of a multi-angular, modular design} in general: First of all, modular chains may emulate established, well-proven, typically piece-meal, scholarly techniques for text analysis (heuristics), which hence may provide \textbf{normative, methodological guidance} in setting up NLP systems. Secondly, by defining and implementing different modular chains, and investigating the plurality of generated solutions, one can systematically \textbf{explore the system's uncertainty as well as the tasks's underdetermination}. Thirdly, monitoring the system during modular computation yields diagnostically useful information (e.g., intermediary results) which not only describes the model's performance on the given problem, but which additionally allows us -- as \textbf{higher-order evidence} -- to characterize (e.g., classify) the original problem in the first place. Fourthly, breaking down a complex task into sub-tasks with intermediary results that can be further processed and re-combined helps to \textbf{overcome input size limitations} of neural language models. Fifthly, modular generation with meaningful modes allows users to follow the system, comprehend generated solutions, verify sub-steps and detect errors -- the NLP system becomes a \textbf{transparent, explainable AI} \citep{Miller2019ExplanationIA}. Finally, modular NLP systems as described by DeepA2 may be connected to a user-interface which promises \textbf{fine-grained interactive control} of modular generations and seamless cognitive cooperation of AI and human experts in analysing texts. \section{Synthetic Argument Data} \label{app:aaac} The {\small AAAC} datasets used in this study are publicly available via Huggingface's Hub -- {\small \url{https://huggingface.co/datasets/debatelab/aaac}} -- where the construction of the datasets is documented meticulously. A synthetically generated {\small AAAC} record, which nicely illustrates the underdetermination of argument reconstruction, with two implicit premises, one distracting statement and a simple (one-step) argument (formatted as presented to the model): \begin{footnotesize}\ttfamily \noindent\textit{source:} It is not the case that Tracy is not an admirer of Fullerton and Tracy has seen La Habra. Plus, if someone loves Chico, then they haven't visited Monterey, owing to the fact that loving Laguna Beach is sufficient for not having visited Monterey. \noindent\textit{reasons:} loving Laguna Beach is sufficient for not having visited Monterey (ref: (2)) \noindent\textit{conjectures:} if someone loves Chico, then they haven't visited Monterey (ref: (4)) \noindent\textit{argdown:}\newline (1) If someone is an admirer of Chico, then they are an admirer of Laguna Beach or a visitor of Stockton.\newline (2) If someone admires Laguna Beach, then they haven't visited Monterey.\newline (3) If someone has visited Stockton, then they haven't visited Monterey.\newline --\newline with generalized dilemma (neg variant) from (1) (2) (3)\newline --\newline (4) If someone admires Chico, then they haven't visited Monterey. \noindent\textit{premises:} If someone is an admirer of Chico, then they are an admirer of Laguna Beach or a visitor of Stockton. (ref: (1)) | If someone admires Laguna Beach, then they haven't visited Monterey. (ref: (2)) | If someone has visited Stockton, then they haven't visited Monterey. (ref: (3)) \noindent\textit{conclusion:} If someone admires Chico, then they haven't visited Monterey. (ref: (4)) \noindent\textit{premises\_form:} (x): Fx -> (G x v H x) (ref: (1)) | (x): G x -> not I x (ref: (2)) | (x): H x -> not I x (ref: (3)) \noindent\textit{conclusion\_form:} (x): F x -> not I x (ref: (4)) \noindent\textit{keys:} F: admirer of Chico | G: admirer of Laguna Beach | H: visitor of Stockton | I: visitor of Monterey \end{footnotesize} \section{Training Set-up} \label{app:training_setup} By interpreting a generative mode as a sequence-to-sequence task, we may translate a multi-angular DeepA2 dataset (e.g., {\small AAAC01}) into a multi-task sequence-to-sequence format, on which a sequence-to-sequence model can be trained. For each record in the multi-angular DeepA2 dataset, we randomly sample 14 modes in accordance with the weights provided in Table~\ref{table:all_generative_modes} and add, for each mode, a corresponding sequence-to-sequence record to the training data. This results, for {\small AAAC01}, in a sequence-to-sequence training dataset with $14\times 16.000$ records. \begin{table}[tb] \centering \begin{small} \begin{tabularx}{\linewidth}{@{}p{0.20\linewidth}@{}Y@{}Y|p{0.17\linewidth}@{}Y@{}Y|p{0.23\linewidth}@{}Y@{}Y@{}} \toprule mode & w\textsubscript 1 & w\textsubscript{2} & mode & w\textsubscript 1 & w\textsubscript{2} & mode & w\textsubscript 1 & w\textsubscript{2} \\ \midrule \colorbox{colbrew1}{\scriptsize$\mathbf{S\!\leadsto\!{A}}$} & 1. & 1. & \colorbox{colbrew2}{\scriptsize$\mathbf{S\!\leadsto\!{R}}$} & 1. & 1. & \colorbox{colbrew6}{\scriptsize$\mathbf{P\!\leadsto\!{F}}$} & .7 & -- \\ \colorbox{colbrew1}{\scriptsize$\mathbf{S\,R\!\leadsto\!{A}}$} & 1. & 1. & \colorbox{colbrew2}{\scriptsize$\mathbf{S\,J\!\leadsto\!{R}}$} & 1. & 1. & \colorbox{colbrew6}{\scriptsize$\mathbf{P\,C\,O\!\leadsto\!{F}}$} & .7 & -- \\ \colorbox{colbrew1}{\scriptsize$\mathbf{S\,J\!\leadsto\!{A}}$} & 1. & 1. & \colorbox{colbrew2}{\scriptsize$\mathbf{S\,A\!\leadsto\!{R}}$} & 1. & 1. & \colorbox{colbrew7}{\scriptsize$\mathbf{C\!\leadsto\!{O}}$} & .7 & -- \\ \colorbox{colbrew1}{\scriptsize$\mathbf{S\,R\,J\!\leadsto\!{A}}$} & 1. & 1. & \colorbox{colbrew3}{\scriptsize$\mathbf{S\!\leadsto\!{J}}$} & 1. & 1. & \colorbox{colbrew7}{\scriptsize$\mathbf{C\,P\,F\!\leadsto\!{O}}$} & .7 & -- \\ \colorbox{colbrew1}{\scriptsize$\mathbf{R\,J\!\leadsto\!{A}}$} & 1. & 1. & \colorbox{colbrew3}{\scriptsize$\mathbf{S\,R\!\leadsto\!{J}}$} & 1. & 1. & \colorbox{colbrew8}{\scriptsize$\mathbf{P\,F\!\leadsto\!{K}}$} & .7 & -- \\ \colorbox{colbrew1}{\scriptsize$\mathbf{P\,C\!\leadsto\!{A}}$} & 1. & 1. & \colorbox{colbrew3}{\scriptsize$\mathbf{S\,A\!\leadsto\!{J}}$} & 1. & 1. & \colorbox{colbrew8}{\scriptsize$\mathbf{C\,O\!\leadsto\!{K}}$} & .7 & -- \\ \colorbox{colbrew5}{\scriptsize$\mathbf{A\!\leadsto\!{P}}$} & .2 & .2 & \colorbox{colbrew4}{\scriptsize$\mathbf{A\!\leadsto\!{C}}$} & .2 & .2 & \colorbox{colbrew8}{\scriptsize$\mathbf{P\,F\,C\,O\!\leadsto\!{K}}$} & .7 & -- \\ \colorbox{colbrew5}{\scriptsize$\mathbf{F\,K\!\leadsto\!{P}}$} & .7 & -- & \colorbox{colbrew4}{\scriptsize$\mathbf{O\,K\!\leadsto\!{C}}$} & .7 & -- & & & \\ \bottomrule \end{tabularx} \end{small} \caption{21 generative modes with corresponding weights in {\small AAAC} (w\textsubscript 1) and \emph{EntailmentBank} (w\textsubscript 2) training data.} \label{table:all_generative_modes} \end{table} Our models (base model T5-large with 770M parameters, and pretrained ArgumentAnalyst) are trained with batch-size 2 and learning rate 0.00001. For {\small AAAC01}, eval loss starts to increase at epoch 8; with \emph{EntailmentBank} data, eval loss increases from epoch 2 onwards. \section{Iterative Prediction with Generative Chains} \label{app:gen_chains} Generative chains are implemented with a dynamic dictionary (9 keys, corresp.\ to the dimensions of DeepA2 data), which is initialized with the source text, provides input for the generative modes, and is updated after each generative step with the mode's generated output. Output is generated with beam search decoding and beam width 2. \begin{table}[tb] \centering \begin{small} \begin{tabularx}{\linewidth}{@{}lp{0.68\linewidth}c@{\hspace{3pt}}c@{}} \toprule \# & {mode sequence} & len. & soph. \\ \midrule \textbf{1} & \colorbox{colbrew1}{\scriptsize$\mathbf{S\!\leadsto\!{A}}$}\ \colorbox{colbrew2}{\scriptsize$\mathbf{S\!\leadsto\!{R}}$}\ \colorbox{colbrew3}{\scriptsize$\mathbf{S\!\leadsto\!{J}}$} & 3 & 0 \smallskip\\ 2 & \colorbox{colbrew3}{\scriptsize$\mathbf{S\!\leadsto\!{J}}$}\ \colorbox{colbrew2}{\scriptsize$\mathbf{S\!\leadsto\!{R}}$}\ \colorbox{colbrew1}{\scriptsize$\mathbf{S\,J\!\leadsto\!{A}}$} & 3 & 1 \smallskip\\ 3 & \colorbox{colbrew3}{\scriptsize$\mathbf{S\!\leadsto\!{J}}$}\ \colorbox{colbrew2}{\scriptsize$\mathbf{S\!\leadsto\!{R}}$}\ \colorbox{colbrew1}{\scriptsize$\mathbf{S\,R\!\leadsto\!{A}}$} & 3 & 1 \smallskip\\ 4 & \colorbox{colbrew3}{\scriptsize$\mathbf{S\!\leadsto\!{J}}$}\ \colorbox{colbrew2}{\scriptsize$\mathbf{S\!\leadsto\!{R}}$}\ \colorbox{colbrew1}{\scriptsize$\mathbf{R\,J\!\leadsto\!{A}}$} & 3 & 2 \smallskip\\ 5 & \colorbox{colbrew3}{\scriptsize$\mathbf{S\!\leadsto\!{J}}$}\ \colorbox{colbrew2}{\scriptsize$\mathbf{S\,J\!\leadsto\!{R}}$}\ \colorbox{colbrew1}{\scriptsize$\mathbf{R\,J\!\leadsto\!{A}}$} & 3 & 3 \smallskip\\ 6 & \colorbox{colbrew3}{\scriptsize$\mathbf{S\!\leadsto\!{J}}$}\ \colorbox{colbrew2}{\scriptsize$\mathbf{S\,J\!\leadsto\!{R}}$}\ \colorbox{colbrew1}{\scriptsize$\mathbf{S\,R\,J\!\leadsto\!{A}}$} & 3 & 3 \smallskip\\ 7 & \colorbox{colbrew2}{\scriptsize$\mathbf{S\!\leadsto\!{R}}$}\ \colorbox{colbrew3}{\scriptsize$\mathbf{S\,R\!\leadsto\!{J}}$}\ \colorbox{colbrew1}{\scriptsize$\mathbf{R\,J\!\leadsto\!{A}}$} & 3 & 3 \smallskip\\ 8 & \colorbox{colbrew2}{\scriptsize$\mathbf{S\!\leadsto\!{R}}$}\ \colorbox{colbrew3}{\scriptsize$\mathbf{S\,R\!\leadsto\!{J}}$}\ \colorbox{colbrew1}{\scriptsize$\mathbf{S\,R\,J\!\leadsto\!{A}}$} & 3 & 3 \smallskip\\ \textbf{9} & \colorbox{colbrew1}{\scriptsize$\mathbf{S\!\leadsto\!{A}}$}\ \colorbox{colbrew2}{\scriptsize$\mathbf{S\,A\!\leadsto\!{R}}$}\ \colorbox{colbrew3}{\scriptsize$\mathbf{S\,A\!\leadsto\!{J}}$}\ \colorbox{colbrew1}{\scriptsize$\mathbf{R\,J\!\leadsto\!{A}}$} & 4 & 4 \smallskip\\ 10 & \colorbox{colbrew1}{\scriptsize$\mathbf{S\!\leadsto\!{A}}$}\ \colorbox{colbrew2}{\scriptsize$\mathbf{S\,A\!\leadsto\!{R}}$}\ \colorbox{colbrew3}{\scriptsize$\mathbf{S\,A\!\leadsto\!{J}}$}\ \colorbox{colbrew1}{\scriptsize$\mathbf{S\,R\,J\!\leadsto\!{A}}$} & 4 & 4 \smallskip\\ 11 & \parbox[t]{\linewidth}{ \raggedright \colorbox{colbrew1}{\scriptsize$\mathbf{S\!\leadsto\!{A}}$}\ \colorbox{colbrew2}{\scriptsize$\mathbf{S\,A\!\leadsto\!{R}}$}\ \colorbox{colbrew3}{\scriptsize$\mathbf{S\,A\!\leadsto\!{J}}$}\ \colorbox{colbrew1}{\scriptsize$\mathbf{S\,R\,J\!\leadsto\!{A}}$}\ \colorbox{colbrew2}{\scriptsize$\mathbf{S\,A\!\leadsto\!{R}}$}\ \colorbox{colbrew3}{\scriptsize$\mathbf{S\,A\!\leadsto\!{J}}$}\ \colorbox{colbrew1}{\scriptsize$\mathbf{S\,R\,J\!\leadsto\!{A}}$} } & 7 & 8 \smallskip\\ 12 &\parbox[t]{\linewidth}{ \raggedright \colorbox{colbrew1}{\scriptsize$\mathbf{S\!\leadsto\!{A}}$}\ \colorbox{colbrew5}{\scriptsize$\mathbf{A\!\leadsto\!{P}}$}\ \colorbox{colbrew4}{\scriptsize$\mathbf{A\!\leadsto\!{C}}$}\ \colorbox{colbrew6}{\scriptsize$\mathbf{P\!\leadsto\!{F}}$}\ \colorbox{colbrew8}{\scriptsize$\mathbf{P\,F\!\leadsto\!{K}}$}\ \colorbox{colbrew5}{\scriptsize$\mathbf{F\,K\!\leadsto\!{P}}$}\ \colorbox{colbrew1}{\scriptsize$\mathbf{P\,C\!\leadsto\!{A}}$}\ \colorbox{colbrew2}{\scriptsize$\mathbf{S\,A\!\leadsto\!{R}}$}\ \colorbox{colbrew3}{\scriptsize$\mathbf{S\,A\!\leadsto\!{J}}$}} & 9 & 11 \smallskip\\ \textbf{13} &\parbox[t]{\linewidth}{ \raggedright \colorbox{colbrew1}{\scriptsize$\mathbf{S\!\leadsto\!{A}}$}\ \colorbox{colbrew5}{\scriptsize$\mathbf{A\!\leadsto\!{P}}$}\ \colorbox{colbrew4}{\scriptsize$\mathbf{A\!\leadsto\!{C}}$}\ \colorbox{colbrew7}{\scriptsize$\mathbf{C\!\leadsto\!{O}}$}\ \colorbox{colbrew8}{\scriptsize$\mathbf{C\,O\!\leadsto\!{K}}$}\ \colorbox{colbrew4}{\scriptsize$\mathbf{O\,K\!\leadsto\!{C}}$}\ \colorbox{colbrew1}{\scriptsize$\mathbf{P\,C\!\leadsto\!{A}}$}\ \colorbox{colbrew2}{\scriptsize$\mathbf{S\,A\!\leadsto\!{R}}$}\ \colorbox{colbrew3}{\scriptsize$\mathbf{S\,A\!\leadsto\!{J}}$}} & 9 & 11 \smallskip\\ 14 &\parbox[t]{\linewidth}{ \raggedright \colorbox{colbrew1}{\scriptsize$\mathbf{S\!\leadsto\!{A}}$}\ \colorbox{colbrew5}{\scriptsize$\mathbf{A\!\leadsto\!{P}}$}\ \colorbox{colbrew4}{\scriptsize$\mathbf{A\!\leadsto\!{C}}$}\ \colorbox{colbrew7}{\scriptsize$\mathbf{C\!\leadsto\!{O}}$}\ \colorbox{colbrew8}{\scriptsize$\mathbf{C\,O\!\leadsto\!{K}}$}\ \colorbox{colbrew4}{\scriptsize$\mathbf{O\,K\!\leadsto\!{C}}$}\ \colorbox{colbrew1}{\scriptsize$\mathbf{P\,C\!\leadsto\!{A}}$}\ \colorbox{colbrew5}{\scriptsize$\mathbf{A\!\leadsto\!{P}}$}\ \colorbox{colbrew4}{\scriptsize$\mathbf{A\!\leadsto\!{C}}$}\ \colorbox{colbrew6}{\scriptsize$\mathbf{P\!\leadsto\!{F}}$}\ \colorbox{colbrew8}{\scriptsize$\mathbf{P\,F\!\leadsto\!{K}}$}\ \colorbox{colbrew5}{\scriptsize$\mathbf{F\,K\!\leadsto\!{P}}$}\ \colorbox{colbrew1}{\scriptsize$\mathbf{P\,C\!\leadsto\!{A}}$}\ \colorbox{colbrew2}{\scriptsize$\mathbf{S\,A\!\leadsto\!{R}}$}\ \colorbox{colbrew3}{\scriptsize$\mathbf{S\,A\!\leadsto\!{J}}$} }\vspace{1pt} & 15 & 20 \smallskip\\ 15 &\parbox[t]{\linewidth}{ \raggedright \colorbox{colbrew1}{\scriptsize$\mathbf{S\!\leadsto\!{A}}$}\ \colorbox{colbrew5}{\scriptsize$\mathbf{A\!\leadsto\!{P}}$}\ \colorbox{colbrew4}{\scriptsize$\mathbf{A\!\leadsto\!{C}}$}\ \colorbox{colbrew6}{\scriptsize$\mathbf{P\!\leadsto\!{F}}$}\ \colorbox{colbrew7}{\scriptsize$\mathbf{C\,P\,F\!\leadsto\!{O}}$}\ \colorbox{colbrew8}{\scriptsize$\mathbf{P\,F\,C\,O\!\leadsto\!{K}}$}\ \colorbox{colbrew5}{\scriptsize$\mathbf{F\,K\!\leadsto\!{P}}$}\ \colorbox{colbrew4}{\scriptsize$\mathbf{O\,K\!\leadsto\!{C}}$}\ \colorbox{colbrew1}{\scriptsize$\mathbf{P\,C\!\leadsto\!{A}}$}\ \colorbox{colbrew2}{\scriptsize$\mathbf{S\,A\!\leadsto\!{R}}$}\ \colorbox{colbrew3}{\scriptsize$\mathbf{S\,A\!\leadsto\!{J}}$} } & 11 & 18 \smallskip\\ 16 & \parbox[t]{\linewidth}{ \raggedright \colorbox{colbrew1}{\scriptsize$\mathbf{S\!\leadsto\!{A}}$}\ \colorbox{colbrew5}{\scriptsize$\mathbf{A\!\leadsto\!{P}}$}\ \colorbox{colbrew4}{\scriptsize$\mathbf{A\!\leadsto\!{C}}$}\ \colorbox{colbrew6}{\scriptsize$\mathbf{P\!\leadsto\!{F}}$}\ \colorbox{colbrew7}{\scriptsize$\mathbf{C\,P\,F\!\leadsto\!{O}}$}\ \colorbox{colbrew6}{\scriptsize$\mathbf{P\,C\,O\!\leadsto\!{F}}$}\ \colorbox{colbrew8}{\scriptsize$\mathbf{P\,F\,C\,O\!\leadsto\!{K}}$}\ \colorbox{colbrew5}{\scriptsize$\mathbf{F\,K\!\leadsto\!{P}}$}\ \colorbox{colbrew4}{\scriptsize$\mathbf{O\,K\!\leadsto\!{C}}$}\ \colorbox{colbrew1}{\scriptsize$\mathbf{P\,C\!\leadsto\!{A}}$}\ \colorbox{colbrew2}{\scriptsize$\mathbf{S\,A\!\leadsto\!{R}}$}\ \colorbox{colbrew3}{\scriptsize$\mathbf{S\,A\!\leadsto\!{J}}$} } & 12 & 21 \\ \bottomrule \end{tabularx} \end{small} \caption{16 generative chains (without final formalization sub-sequences) evaluated in this study. The illustrative chains highlighted in the main paper are \#1 (straight), \#9 (hermeneutic cycle), and \#13 (logical streamlining).} \label{table:all_generative_chains_app} \end{table} Table~\ref{table:all_generative_chains_app} displays all generative chains we resort to in this study, all of which are used in the \textit{first experiment}. The \textit{second experiment} makes use of chains 1--11. The \textit{third experiment} deploys chains 1--13. \section{Additional Results} \label{app:add_results} Table~\ref{table:main_subsets} assesses ArgumentAnalyst's reconstructions on specific subsets of the {\small AAAC02} dataset (defined in Section~\ref{sec:datasets}) for three representative generative chains. \begin{table}[tb] \begin{small} \begin{tabularx}{\linewidth}{l *{5}{Y}} \toprule {} & \multicolumn{2}{c}{\emph{inference}} & \multicolumn{2}{c}{\emph{presentation}} \\ \cmidrule(l){2-3} \cmidrule(l){4-5} {} & {\small simple} & {\small compl.} & {\small plain} & {\small mutil.} & {\small C\&M} \\ chain & {\scriptsize N=1274} & {\scriptsize N=180} & {\scriptsize N=330} & {\scriptsize N=114} & {\scriptsize N=70} \\ \midrule \multicolumn{6}{c}{\textbf{\scriptsize SYS-PP \& SYS-RP \& SYS-RC \& SYS-US}} \\ straight & .95 & .72 & .98 & .61 & .69 \\ herm.\ c. & .94 & .68 & .96 & .67 & .61 \\ log.\ str. & .95 & .68 & .98 & .64 & .61 \\ \midrule \multicolumn{6}{c}{\textbf{\scriptsize SYS-VAL}} \\ straight & .84 & .48 & .88 & .40 & .34 \\ herm.\ c. & .83 & .56 & .84 & .49 & .50 \\ log.\ str. & .82 & .47 & .86 & .46 & .37 \\\midrule \multicolumn{6}{c}{\textbf{\scriptsize EXE-RSS}} \\ straight & .03 & -.25 & .05 & -.31 & -.30 \\ herm.\ c. & .20 & .08 & .15 & .08 & .11 \\ log.\ str. & .17 & -.01 & .13 & .01 & -.06 \\ \midrule \multicolumn{6}{c}{\textbf{\scriptsize EXE-JSS}} \\ straight & .06 & -.32 & .10 & -.37 & -.37 \\ herm.\ c. & .23 & -.06 & .21 & -.03 & -.21 \\ log.\ str. & .13 & -.26 & .07 & -.26 & -.40 \\ \bottomrule\end{tabularx} \caption{Performance of ArgumentAnalyst on specific subsets (columns) of the {\small AAAC02} data as measured by selected systematic and exegetic metrics (sub-tables). Rows display results for three illustrative generative chains (\emph{straight}, \emph{hermeneutic cycle}, \emph{logical streamlining}).} \label{table:main_subsets} \end{small} \end{table} Table~\ref{table:main_results_app} details the performance of ArgumentAnalyst on the entire {\small AAAC02} dataset as measured by tailor-made argumentative metrics. Table~\ref{table:main_results_app_oos} shows the corresponding performance on out-of -sample eval data {\small AAAC01}. \begin{table*} \begin{small} \begin{tabularx}{\linewidth}{l *{12}{Y}} \toprule {} & \multicolumn{6}{c}{\emph{systematic metrics} (\textbf{\scriptsize SYS-*})} & \multicolumn{6}{c}{\emph{exegetic metrics} (\textbf{\scriptsize EXE-*})} \\ \cmidrule(r){2-7} \cmidrule(r){8-13} chain & \textbf{\scriptsize PP} & \textbf{\scriptsize RP} & \textbf{\scriptsize RC} & \textbf{\scriptsize US} & \textbf{\scriptsize SCH} & \textbf{\scriptsize VAL} & \textbf{\scriptsize MEQ} & \textbf{\scriptsize RSS} & \textbf{\scriptsize JSS} & \textbf{\scriptsize PPR} & \textbf{\scriptsize PPJ} & \textbf{\scriptsize TE} \\ \midrule \#1 & 0.95 & 0.97 & 0.96 & 0.96 & 0.33 & 0.73 & 0.80 & -0.08 & -0.10 & 0.93 & 0.93 & 0.63 \\ \#2 & 0.95 & 0.97 & 0.94 & 0.94 & 0.33 & 0.71 & 0.80 & -0.09 & 0.04 & 0.93 & 0.93 & 0.67 \\ \#3 & 0.95 & 0.98 & 0.95 & 0.93 & 0.31 & 0.70 & 0.80 & 0.10 & -0.11 & 0.93 & 0.93 & 0.62 \\ \#4 & 0.94 & 0.97 & 0.94 & 0.92 & 0.30 & 0.70 & 0.80 & 0.12 & -0.00 & 0.93 & 0.93 & 0.66 \\ \#5 & 0.94 & 0.97 & 0.95 & 0.91 & 0.30 & 0.70 & 0.83 & 0.13 & 0.05 & 0.94 & 0.93 & 0.69 \\ \#6 & 0.94 & 0.97 & 0.95 & 0.93 & 0.31 & 0.70 & 0.83 & 0.10 & 0.03 & 0.94 & 0.93 & 0.67 \\ \#7 & 0.93 & 0.97 & 0.95 & 0.92 & 0.29 & 0.70 & 0.83 & 0.13 & 0.05 & 0.93 & 0.92 & 0.68 \\ \#8 & 0.94 & 0.97 & 0.95 & 0.93 & 0.30 & 0.69 & 0.83 & 0.10 & 0.02 & 0.93 & 0.92 & 0.67 \\ \#9 & 0.95 & 0.98 & 0.95 & 0.93 & 0.31 & 0.72 & 0.82 & 0.16 & 0.12 & 0.93 & 0.92 & 0.71 \\ \#10 & 0.96 & 0.98 & 0.96 & 0.94 & 0.32 & 0.71 & 0.82 & 0.14 & 0.09 & 0.93 & 0.92 & 0.69 \\ \#11 & 0.96 & 0.98 & 0.96 & 0.93 & 0.32 & 0.71 & 0.82 & 0.15 & 0.11 & 0.93 & 0.92 & 0.71 \\ \#12 & 0.93 & 0.95 & 0.94 & 0.94 & 0.32 & 0.71 & 0.81 & -0.17 & -0.08 & 0.93 & 0.92 & 0.68 \\ \#13 & 0.95 & 0.97 & 0.96 & 0.95 & 0.32 & 0.72 & 0.82 & 0.11 & -0.00 & 0.93 & 0.92 & 0.69 \\ \#14 & 0.93 & 0.95 & 0.94 & 0.94 & 0.32 & 0.70 & 0.81 & -0.18 & -0.14 & 0.93 & 0.92 & 0.66 \\ \#15 & 0.92 & 0.96 & 0.94 & 0.95 & 0.33 & 0.71 & 0.81 & -0.20 & -0.19 & 0.93 & 0.92 & 0.65 \\ \#16 & 0.92 & 0.96 & 0.94 & 0.94 & 0.33 & 0.72 & 0.81 & -0.20 & -0.19 & 0.93 & 0.92 & 0.65 \\ \bottomrule \end{tabularx} \end{small} \caption{Performance of ArgumentAnalyst for systematic and exegetic metrics on the entire OOD eval data ({\small AAAC02}). Rows display mean results for each of the 16 generative chains.} \label{table:main_results_app} \end{table*} \begin{table*} \begin{small} \begin{tabularx}{\linewidth}{l *{12}{Y}} \toprule {} & \multicolumn{6}{c}{\emph{systematic metrics} (\textbf{\scriptsize SYS-*})} & \multicolumn{6}{c}{\emph{exegetic metrics} (\textbf{\scriptsize EXE-*})} \\ \cmidrule(r){2-7} \cmidrule(r){8-13} chain & \textbf{\scriptsize PP} & \textbf{\scriptsize RP} & \textbf{\scriptsize RC} & \textbf{\scriptsize US} & \textbf{\scriptsize SCH} & \textbf{\scriptsize VAL} & \textbf{\scriptsize MEQ} & \textbf{\scriptsize RSS} & \textbf{\scriptsize JSS} & \textbf{\scriptsize PPR} & \textbf{\scriptsize PPJ} & \textbf{\scriptsize TE} \\ \midrule \#1 & 0.97 & 0.98 & 0.97 & 0.98 & 0.61 & 0.87 & 0.78 & 0.08 & 0.13 & 0.95 & 0.95 & 0.64 \\ \#2 & 0.97 & 0.98 & 0.96 & 0.97 & 0.60 & 0.87 & 0.78 & 0.09 & 0.24 & 0.95 & 0.95 & 0.68 \\ \#3 & 0.96 & 0.98 & 0.96 & 0.97 & 0.58 & 0.86 & 0.78 & 0.26 & 0.12 & 0.95 & 0.95 & 0.64 \\ \#4 & 0.95 & 0.98 & 0.95 & 0.96 & 0.57 & 0.85 & 0.78 & 0.26 & 0.20 & 0.95 & 0.95 & 0.67 \\ \#5 & 0.96 & 0.98 & 0.95 & 0.96 & 0.57 & 0.84 & 0.80 & 0.27 & 0.27 & 0.96 & 0.95 & 0.70 \\ \#6 & 0.97 & 0.98 & 0.96 & 0.96 & 0.58 & 0.84 & 0.80 & 0.26 & 0.24 & 0.96 & 0.95 & 0.69 \\ \#7 & 0.95 & 0.98 & 0.96 & 0.96 & 0.57 & 0.86 & 0.79 & 0.27 & 0.26 & 0.95 & 0.94 & 0.71 \\ \#8 & 0.96 & 0.98 & 0.96 & 0.96 & 0.57 & 0.85 & 0.79 & 0.26 & 0.25 & 0.95 & 0.94 & 0.70 \\ \#9 & 0.97 & 0.99 & 0.97 & 0.97 & 0.59 & 0.88 & 0.79 & 0.31 & 0.36 & 0.96 & 0.95 & 0.78 \\ \#10 & 0.97 & 0.99 & 0.97 & 0.97 & 0.60 & 0.87 & 0.79 & 0.30 & 0.34 & 0.96 & 0.95 & 0.77 \\ \#11 & 0.97 & 0.99 & 0.97 & 0.97 & 0.60 & 0.87 & 0.79 & 0.31 & 0.35 & 0.96 & 0.95 & 0.77 \\ \#12 & 0.95 & 0.97 & 0.95 & 0.96 & 0.54 & 0.84 & 0.79 & 0.17 & 0.25 & 0.96 & 0.94 & 0.75 \\ \#13 & 0.97 & 0.99 & 0.97 & 0.97 & 0.61 & 0.87 & 0.79 & 0.29 & 0.32 & 0.96 & 0.95 & 0.76 \\ \#14 & 0.95 & 0.97 & 0.95 & 0.96 & 0.54 & 0.84 & 0.79 & 0.16 & 0.24 & 0.96 & 0.94 & 0.74 \\ \#15 & 0.94 & 0.97 & 0.95 & 0.96 & 0.54 & 0.85 & 0.79 & 0.15 & 0.18 & 0.96 & 0.95 & 0.73 \\ \#16 & 0.94 & 0.97 & 0.95 & 0.95 & 0.54 & 0.85 & 0.79 & 0.15 & 0.19 & 0.96 & 0.95 & 0.73 \\ \bottomrule \end{tabularx} \end{small} \caption{Performance of ArgumentAnalyst for systematic and exegetic metrics on the entire OOS eval data ({\small AAAC01}). Rows display mean results for each of the 16 generative chains.} \label{table:main_results_app_oos} \end{table*} Distinguishing four mutually exclusive subsets of {\small AAAC02}, Tables~\ref{table_main_subsets1}--\ref{table_main_subsets4} detail the the quality of ArgumentAnalyst's reconstruction for easy and difficult problems. Tables~\ref{table_main_subsets_oos1}--\ref{table_main_subsets_oos4} present the corresponding out-of-sample performance on the equally partitioned {\small AAAC01} dataset (eval split). \begin{table} \begin{small} \begin{tabularx}{\linewidth}{l *{5}{Y}} \toprule {} & \multicolumn{2}{c}{\emph{inference}} & \multicolumn{2}{c}{\emph{presentation}} \\ \cmidrule(l){2-3} \cmidrule(l){4-5} chain & simple & complex & plain & mutilat. & C\&M \\ \midrule \multicolumn{6}{c}{\textbf{\scriptsize SYS-PP $\&$ SYS-RP $\&$ SYS-RC $\&$ SYS-US}} \\ \#1 & 0.95 & 0.72 & 0.98 & 0.61 & 0.69 \\ \#2 & 0.93 & 0.66 & 0.96 & 0.59 & 0.60 \\ \#3 & 0.92 & 0.69 & 0.96 & 0.68 & 0.73 \\ \#4 & 0.92 & 0.66 & 0.95 & 0.69 & 0.60 \\ \#5 & 0.92 & 0.68 & 0.95 & 0.59 & 0.61 \\ \#6 & 0.93 & 0.66 & 0.97 & 0.68 & 0.59 \\ \#7 & 0.92 & 0.67 & 0.96 & 0.62 & 0.64 \\ \#8 & 0.92 & 0.66 & 0.95 & 0.64 & 0.66 \\ \#9 & 0.94 & 0.68 & 0.96 & 0.67 & 0.61 \\ \#10 & 0.94 & 0.73 & 0.98 & 0.68 & 0.77 \\ \#11 & 0.94 & 0.69 & 0.98 & 0.66 & 0.73 \\ \#12 & 0.93 & 0.60 & 0.95 & 0.57 & 0.50 \\ \#13 & 0.95 & 0.68 & 0.98 & 0.64 & 0.61 \\ \#14 & 0.92 & 0.57 & 0.93 & 0.58 & 0.49 \\ \#15 & 0.92 & 0.66 & 0.95 & 0.59 & 0.56 \\ \#16 & 0.92 & 0.64 & 0.95 & 0.56 & 0.60 \\ \bottomrule\end{tabularx} \caption{Performance of ArgumentAnalyst for selected systematic metric (\textbf{\scriptsize SYS-PP $\&$ SYS-RP $\&$ SYS-RC $\&$ SYS-US}) on specific subsets (columns) of the OOD eval data.} \label{table_main_subsets1} \end{small} \end{table} \begin{table} \begin{small} \begin{tabularx}{\linewidth}{l *{5}{Y}} \toprule {} & \multicolumn{2}{c}{\emph{inference}} & \multicolumn{2}{c}{\emph{presentation}} \\ \cmidrule(l){2-3} \cmidrule(l){4-5} chain & simple & complex & plain & mutilat. & C\&M \\ \midrule \multicolumn{6}{c}{\textbf{\scriptsize SYS-VAL}} \\ \#1 & 0.84 & 0.48 & 0.88 & 0.40 & 0.34 \\ \#2 & 0.82 & 0.54 & 0.84 & 0.47 & 0.46 \\ \#3 & 0.82 & 0.44 & 0.87 & 0.39 & 0.36 \\ \#4 & 0.81 & 0.48 & 0.83 & 0.44 & 0.43 \\ \#5 & 0.82 & 0.44 & 0.85 & 0.45 & 0.37 \\ \#6 & 0.81 & 0.46 & 0.85 & 0.42 & 0.41 \\ \#7 & 0.83 & 0.44 & 0.82 & 0.46 & 0.49 \\ \#8 & 0.80 & 0.44 & 0.83 & 0.40 & 0.40 \\ \#9 & 0.83 & 0.56 & 0.84 & 0.49 & 0.50 \\ \#10 & 0.82 & 0.50 & 0.85 & 0.46 & 0.43 \\ \#11 & 0.82 & 0.48 & 0.84 & 0.46 & 0.41 \\ \#12 & 0.81 & 0.47 & 0.84 & 0.42 & 0.37 \\ \#13 & 0.82 & 0.47 & 0.86 & 0.46 & 0.37 \\ \#14 & 0.80 & 0.48 & 0.82 & 0.41 & 0.40 \\ \#15 & 0.82 & 0.45 & 0.84 & 0.50 & 0.33 \\ \#16 & 0.83 & 0.52 & 0.85 & 0.46 & 0.43 \\ \bottomrule\end{tabularx} \caption{Performance of ArgumentAnalyst for selected systematic metric (\textbf{\scriptsize SYS-VAL}) on specific subsets (columns) of the OOD eval data.} \label{table_main_subsets2} \end{small} \end{table} \begin{table} \begin{small} \begin{tabularx}{\linewidth}{l *{5}{Y}} \toprule {} & \multicolumn{2}{c}{\emph{inference}} & \multicolumn{2}{c}{\emph{presentation}} \\ \cmidrule(l){2-3} \cmidrule(l){4-5} chain & simple & complex & plain & mutilat. & C\&M \\ \midrule \multicolumn{6}{c}{\textbf{\scriptsize EXE-RSS}} \\ \#1 & 0.03 & -0.25 & 0.05 & -0.31 & -0.30 \\ \#2 & 0.02 & -0.27 & 0.07 & -0.33 & -0.31 \\ \#3 & 0.15 & -0.03 & 0.12 & -0.01 & -0.06 \\ \#4 & 0.16 & 0.01 & 0.12 & -0.01 & 0.04 \\ \#5 & 0.18 & 0.04 & 0.13 & 0.04 & 0.06 \\ \#6 & 0.17 & -0.04 & 0.12 & -0.02 & -0.09 \\ \#7 & 0.18 & 0.05 & 0.14 & 0.03 & 0.08 \\ \#8 & 0.16 & -0.02 & 0.12 & -0.02 & -0.07 \\ \#9 & 0.20 & 0.08 & 0.15 & 0.08 & 0.11 \\ \#10 & 0.19 & 0.04 & 0.15 & 0.05 & -0.01 \\ \#11 & 0.21 & 0.04 & 0.15 & 0.07 & -0.03 \\ \#12 & -0.14 & -0.20 & -0.12 & -0.23 & -0.25 \\ \#13 & 0.17 & -0.01 & 0.13 & 0.01 & -0.06 \\ \#14 & -0.17 & -0.22 & -0.16 & -0.23 & -0.26 \\ \#15 & -0.19 & -0.23 & -0.24 & -0.24 & -0.23 \\ \#16 & -0.19 & -0.23 & -0.24 & -0.25 & -0.24 \\ \bottomrule\end{tabularx} \caption{Performance of ArgumentAnalyst for selected exegetic metrics (\textbf{\scriptsize EXE-RSS}) on specific subsets (columns) of the OOD eval data.} \label{table_main_subsets3} \end{small} \end{table} \begin{table} \begin{small} \begin{tabularx}{\linewidth}{l *{5}{Y}} \toprule {} & \multicolumn{2}{c}{\emph{inference}} & \multicolumn{2}{c}{\emph{presentation}} \\ \cmidrule(l){2-3} \cmidrule(l){4-5} chain & simple & complex & plain & mutilat. & C\&M \\ \midrule \multicolumn{6}{c}{\textbf{\scriptsize EXE-JSS}} \\ \#1 & 0.06 & -0.32 & 0.10 & -0.37 & -0.37 \\ \#2 & 0.16 & -0.17 & 0.19 & -0.12 & -0.26 \\ \#3 & 0.02 & -0.32 & 0.03 & -0.42 & -0.33 \\ \#4 & 0.12 & -0.17 & 0.13 & -0.14 & -0.19 \\ \#5 & 0.15 & -0.11 & 0.15 & -0.08 & -0.18 \\ \#6 & 0.16 & -0.14 & 0.15 & -0.22 & -0.22 \\ \#7 & 0.16 & -0.11 & 0.16 & -0.10 & -0.18 \\ \#8 & 0.15 & -0.18 & 0.14 & -0.19 & -0.27 \\ \#9 & 0.23 & -0.06 & 0.21 & -0.03 & -0.21 \\ \#10 & 0.23 & -0.12 & 0.21 & -0.15 & -0.27 \\ \#11 & 0.25 & -0.13 & 0.20 & -0.11 & -0.27 \\ \#12 & 0.06 & -0.36 & 0.04 & -0.28 & -0.47 \\ \#13 & 0.13 & -0.26 & 0.07 & -0.26 & -0.40 \\ \#14 & -0.02 & -0.39 & -0.07 & -0.31 & -0.48 \\ \#15 & -0.08 & -0.41 & -0.16 & -0.36 & -0.49 \\ \#16 & -0.08 & -0.37 & -0.15 & -0.35 & -0.45 \\ \bottomrule\end{tabularx} \caption{Performance of ArgumentAnalyst for selected exegetic metric (\textbf{\scriptsize EXE-JSS}) on specific subsets (columns) of the OOD eval data.} \label{table_main_subsets4} \end{small} \end{table} \begin{table} \begin{small} \begin{tabularx}{\linewidth}{l *{5}{Y}} \toprule {} & \multicolumn{2}{c}{\emph{inference}} & \multicolumn{2}{c}{\emph{presentation}} \\ \cmidrule(l){2-3} \cmidrule(l){4-5} chain & simple & complex & plain & mutilat. & C\&M \\ \midrule \multicolumn{6}{c}{\textbf{\scriptsize SYS-PP $\&$ SYS-RP $\&$ SYS-RC $\&$ SYS-US}} \\ \#1 & 0.98 & 0.78 & 1.00 & 0.75 & 0.76 \\ \#2 & 0.97 & 0.77 & 0.99 & 0.70 & 0.73 \\ \#3 & 0.95 & 0.79 & 0.96 & 0.77 & 0.74 \\ \#4 & 0.95 & 0.76 & 0.96 & 0.69 & 0.73 \\ \#5 & 0.97 & 0.75 & 0.98 & 0.66 & 0.74 \\ \#6 & 0.96 & 0.77 & 0.98 & 0.73 & 0.78 \\ \#7 & 0.96 & 0.73 & 0.96 & 0.71 & 0.72 \\ \#8 & 0.97 & 0.75 & 0.97 & 0.73 & 0.74 \\ \#9 & 0.98 & 0.80 & 0.99 & 0.80 & 0.70 \\ \#10 & 0.98 & 0.78 & 0.99 & 0.80 & 0.73 \\ \#11 & 0.98 & 0.78 & 0.99 & 0.80 & 0.71 \\ \#12 & 0.97 & 0.71 & 0.97 & 0.70 & 0.67 \\ \#13 & 0.98 & 0.81 & 0.99 & 0.76 & 0.78 \\ \#14 & 0.96 & 0.73 & 0.96 & 0.70 & 0.69 \\ \#15 & 0.97 & 0.72 & 0.96 & 0.70 & 0.68 \\ \#16 & 0.97 & 0.72 & 0.96 & 0.68 & 0.68 \\ \bottomrule \end{tabularx} \caption{Performance of ArgumentAnalyst for selected systematic metric (\textbf{\scriptsize SYS-PP $\&$ SYS-RP $\&$ SYS-RC $\&$ SYS-US}) on specific subsets (columns) of the OOS eval data.} \label{table_main_subsets_oos1} \end{small} \end{table} \begin{table} \begin{small} \begin{tabularx}{\linewidth}{l *{5}{Y}} \toprule {} & \multicolumn{2}{c}{\emph{inference}} & \multicolumn{2}{c}{\emph{presentation}} \\ \cmidrule(l){2-3} \cmidrule(l){4-5} chain & simple & complex & plain & mutilat. & C\&M \\ \midrule \multicolumn{6}{c}{\textbf{\scriptsize SYS-VAL}} \\ \#1 & 0.97 & 0.68 & 0.96 & 0.74 & 0.74 \\ \#2 & 0.97 & 0.68 & 0.97 & 0.73 & 0.71 \\ \#3 & 0.94 & 0.70 & 0.94 & 0.72 & 0.71 \\ \#4 & 0.95 & 0.65 & 0.94 & 0.68 & 0.71 \\ \#5 & 0.96 & 0.59 & 0.95 & 0.65 & 0.62 \\ \#6 & 0.95 & 0.62 & 0.96 & 0.69 & 0.63 \\ \#7 & 0.94 & 0.66 & 0.94 & 0.66 & 0.71 \\ \#8 & 0.95 & 0.67 & 0.95 & 0.69 & 0.69 \\ \#9 & 0.97 & 0.65 & 0.97 & 0.72 & 0.69 \\ \#10 & 0.97 & 0.67 & 0.97 & 0.68 & 0.72 \\ \#11 & 0.97 & 0.70 & 0.97 & 0.68 & 0.74 \\ \#12 & 0.95 & 0.63 & 0.95 & 0.72 & 0.70 \\ \#13 & 0.97 & 0.68 & 0.95 & 0.73 & 0.73 \\ \#14 & 0.95 & 0.63 & 0.94 & 0.72 & 0.69 \\ \#15 & 0.95 & 0.65 & 0.94 & 0.75 & 0.71 \\ \#16 & 0.95 & 0.65 & 0.95 & 0.73 & 0.71 \\ \bottomrule \end{tabularx} \caption{Performance of ArgumentAnalyst for selected systematic metric (\textbf{\scriptsize SYS-VAL}) on specific subsets (columns) of the OOS eval data.} \label{table_main_subsets_oos2} \end{small} \end{table} \begin{table} \begin{small} \begin{tabularx}{\linewidth}{l *{5}{Y}} \toprule {} & \multicolumn{2}{c}{\emph{inference}} & \multicolumn{2}{c}{\emph{presentation}} \\ \cmidrule(l){2-3} \cmidrule(l){4-5} chain & simple & complex & plain & mutilat. & C\&M \\ \midrule \multicolumn{6}{c}{\textbf{\scriptsize EXE-RSS}} \\ \#1 & 0.19 & -0.16 & 0.11 & -0.07 & -0.18 \\ \#2 & 0.21 & -0.13 & 0.10 & -0.05 & -0.15 \\ \#3 & 0.30 & 0.11 & 0.17 & 0.22 & 0.06 \\ \#4 & 0.29 & 0.16 & 0.16 & 0.24 & 0.16 \\ \#5 & 0.32 & 0.18 & 0.19 & 0.23 & 0.18 \\ \#6 & 0.31 & 0.11 & 0.18 & 0.19 & 0.07 \\ \#7 & 0.30 & 0.15 & 0.17 & 0.25 & 0.16 \\ \#8 & 0.30 & 0.12 & 0.17 & 0.24 & 0.08 \\ \#9 & 0.33 & 0.23 & 0.19 & 0.30 & 0.23 \\ \#10 & 0.33 & 0.20 & 0.19 & 0.27 & 0.16 \\ \#11 & 0.33 & 0.21 & 0.19 & 0.28 & 0.16 \\ \#12 & 0.20 & 0.06 & 0.11 & 0.16 & 0.04 \\ \#13 & 0.33 & 0.12 & 0.19 & 0.26 & 0.07 \\ \#14 & 0.20 & 0.06 & 0.10 & 0.16 & 0.03 \\ \#15 & 0.18 & 0.04 & 0.07 & 0.14 & 0.00 \\ \#16 & 0.18 & 0.04 & 0.07 & 0.11 & 0.02 \\ \bottomrule \end{tabularx} \caption{Performance of ArgumentAnalyst for selected exegetic metrics (\textbf{\scriptsize EXE-RSS}) on specific subsets (columns) of the OOS eval data.} \label{table_main_subsets_oos3} \end{small} \end{table} \begin{table} \begin{small} \begin{tabularx}{\linewidth}{l *{5}{Y}} \toprule {} & \multicolumn{2}{c}{\emph{inference}} & \multicolumn{2}{c}{\emph{presentation}} \\ \cmidrule(l){2-3} \cmidrule(l){4-5} chain & simple & complex & plain & mutilat. & C\&M \\ \midrule \multicolumn{6}{c}{\textbf{\scriptsize EXE-JSS}} \\ \#1 & 0.35 & -0.14 & 0.36 & -0.09 & -0.13 \\ \#2 & 0.40 & 0.02 & 0.39 & 0.10 & 0.02 \\ \#3 & 0.30 & -0.15 & 0.29 & -0.08 & -0.15 \\ \#4 & 0.36 & 0.03 & 0.33 & 0.08 & -0.02 \\ \#5 & 0.41 & 0.15 & 0.39 & 0.17 & 0.11 \\ \#6 & 0.40 & 0.04 & 0.38 & 0.10 & -0.01 \\ \#7 & 0.39 & 0.12 & 0.37 & 0.15 & 0.06 \\ \#8 & 0.39 & 0.08 & 0.38 & 0.10 & -0.02 \\ \#9 & 0.47 & 0.16 & 0.42 & 0.31 & 0.13 \\ \#10 & 0.47 & 0.11 & 0.42 & 0.26 & 0.02 \\ \#11 & 0.47 & 0.11 & 0.42 & 0.26 & 0.02 \\ \#12 & 0.40 & -0.01 & 0.35 & 0.14 & -0.08 \\ \#13 & 0.45 & 0.03 & 0.36 & 0.21 & -0.01 \\ \#14 & 0.38 & -0.00 & 0.30 & 0.15 & -0.05 \\ \#15 & 0.30 & -0.04 & 0.22 & 0.07 & -0.07 \\ \#16 & 0.30 & -0.03 & 0.22 & 0.11 & -0.06 \\ \bottomrule \end{tabularx} \caption{Performance of ArgumentAnalyst for selected exegetic metric (\textbf{\scriptsize EXE-JSS}) on specific subsets (columns) of the OOS eval data.} \label{table_main_subsets_oos4} \end{small} \end{table} \section*{Acknowledgments} We're indebted to Christian Voigt for his critical and constructive feedback throughout the DeepA2 project. % \section{Synthetic Argument Data} \label{app:aaac} The {\small AAAC} datasets used in this study are publicly available via Huggingface's Hub -- {\small \url{https://huggingface.co/datasets/debatelab/aaac}} -- where the construction of the datasets is documented meticulously. A synthetically generated {\small AAAC} record, which nicely illustrates the underdetermination of argument reconstruction, with two implicit premises, one distracting statement and a simple (one-step) argument (formatted as presented to the model): \begin{footnotesize}\ttfamily \noindent\textit{source:} It is not the case that Tracy is not an admirer of Fullerton and Tracy has seen La Habra. Plus, if someone loves Chico, then they haven't visited Monterey, owing to the fact that loving Laguna Beach is sufficient for not having visited Monterey. \noindent\textit{reasons:} loving Laguna Beach is sufficient for not having visited Monterey (ref: (2)) \noindent\textit{conjectures:} if someone loves Chico, then they haven't visited Monterey (ref: (4)) \noindent\textit{argdown:}\newline (1) If someone is an admirer of Chico, then they are an admirer of Laguna Beach or a visitor of Stockton.\newline (2) If someone admires Laguna Beach, then they haven't visited Monterey.\newline (3) If someone has visited Stockton, then they haven't visited Monterey.\newline --\newline with generalized dilemma (neg variant) from (1) (2) (3)\newline --\newline (4) If someone admires Chico, then they haven't visited Monterey. \noindent\textit{premises:} If someone is an admirer of Chico, then they are an admirer of Laguna Beach or a visitor of Stockton. (ref: (1)) | If someone admires Laguna Beach, then they haven't visited Monterey. (ref: (2)) | If someone has visited Stockton, then they haven't visited Monterey. (ref: (3)) \noindent\textit{conclusion:} If someone admires Chico, then they haven't visited Monterey. (ref: (4)) \noindent\textit{premises\_form:} (x): Fx -> (G x v H x) (ref: (1)) | (x): G x -> not I x (ref: (2)) | (x): H x -> not I x (ref: (3)) \noindent\textit{conclusion\_form:} (x): F x -> not I x (ref: (4)) \noindent\textit{keys:} F: admirer of Chico | G: admirer of Laguna Beach | H: visitor of Stockton | I: visitor of Monterey \end{footnotesize} \section{Training Set-up} \label{app:training_setup} By interpreting a generative mode as a sequence-to-sequence task, we may translate a multi-angular DeepA2 dataset (e.g., {\small AAAC01}) into a multi-task sequence-to-sequence format, on which a sequence-to-sequence model can be trained. For each record in the multi-angular DeepA2 dataset, we randomly sample 14 modes in accordance with the weights provided in Table~\ref{table:all_generative_modes} and add, for each mode, a corresponding sequence-to-sequence record to the training data. This results, for {\small AAAC01}, in a sequence-to-sequence training dataset with $14\times 16.000$ records. \begin{table}[tb] \centering \begin{small} \begin{tabularx}{\linewidth}{@{}p{0.20\linewidth}@{}Y@{}Y|p{0.17\linewidth}@{}Y@{}Y|p{0.23\linewidth}@{}Y@{}Y@{}} \toprule mode & w\textsubscript 1 & w\textsubscript{2} & mode & w\textsubscript 1 & w\textsubscript{2} & mode & w\textsubscript 1 & w\textsubscript{2} \\ \midrule \colorbox{colbrew1}{\scriptsize$\mathbf{S\!\leadsto\!{A}}$} & 1. & 1. & \colorbox{colbrew2}{\scriptsize$\mathbf{S\!\leadsto\!{R}}$} & 1. & 1. & \colorbox{colbrew6}{\scriptsize$\mathbf{P\!\leadsto\!{F}}$} & .7 & -- \\ \colorbox{colbrew1}{\scriptsize$\mathbf{S\,R\!\leadsto\!{A}}$} & 1. & 1. & \colorbox{colbrew2}{\scriptsize$\mathbf{S\,J\!\leadsto\!{R}}$} & 1. & 1. & \colorbox{colbrew6}{\scriptsize$\mathbf{P\,C\,O\!\leadsto\!{F}}$} & .7 & -- \\ \colorbox{colbrew1}{\scriptsize$\mathbf{S\,J\!\leadsto\!{A}}$} & 1. & 1. & \colorbox{colbrew2}{\scriptsize$\mathbf{S\,A\!\leadsto\!{R}}$} & 1. & 1. & \colorbox{colbrew7}{\scriptsize$\mathbf{C\!\leadsto\!{O}}$} & .7 & -- \\ \colorbox{colbrew1}{\scriptsize$\mathbf{S\,R\,J\!\leadsto\!{A}}$} & 1. & 1. & \colorbox{colbrew3}{\scriptsize$\mathbf{S\!\leadsto\!{J}}$} & 1. & 1. & \colorbox{colbrew7}{\scriptsize$\mathbf{C\,P\,F\!\leadsto\!{O}}$} & .7 & -- \\ \colorbox{colbrew1}{\scriptsize$\mathbf{R\,J\!\leadsto\!{A}}$} & 1. & 1. & \colorbox{colbrew3}{\scriptsize$\mathbf{S\,R\!\leadsto\!{J}}$} & 1. & 1. & \colorbox{colbrew8}{\scriptsize$\mathbf{P\,F\!\leadsto\!{K}}$} & .7 & -- \\ \colorbox{colbrew1}{\scriptsize$\mathbf{P\,C\!\leadsto\!{A}}$} & 1. & 1. & \colorbox{colbrew3}{\scriptsize$\mathbf{S\,A\!\leadsto\!{J}}$} & 1. & 1. & \colorbox{colbrew8}{\scriptsize$\mathbf{C\,O\!\leadsto\!{K}}$} & .7 & -- \\ \colorbox{colbrew5}{\scriptsize$\mathbf{A\!\leadsto\!{P}}$} & .2 & .2 & \colorbox{colbrew4}{\scriptsize$\mathbf{A\!\leadsto\!{C}}$} & .2 & .2 & \colorbox{colbrew8}{\scriptsize$\mathbf{P\,F\,C\,O\!\leadsto\!{K}}$} & .7 & -- \\ \colorbox{colbrew5}{\scriptsize$\mathbf{F\,K\!\leadsto\!{P}}$} & .7 & -- & \colorbox{colbrew4}{\scriptsize$\mathbf{O\,K\!\leadsto\!{C}}$} & .7 & -- & & & \\ \bottomrule \end{tabularx} \end{small} \caption{21 generative modes with corresponding weights in {\small AAAC} (w\textsubscript 1) and \emph{EntailmentBank} (w\textsubscript 2) training data.} \label{table:all_generative_modes} \end{table} Our models (base model T5-large with 770M parameters, and pretrained ArgumentAnalyst) are trained with batch-size 2 and learning rate 0.00001. For {\small AAAC01}, eval loss starts to increase at epoch 8; with \emph{EntailmentBank} data, eval loss increases from epoch 2 onwards. \section{Iterative Prediction with Generative Chains} \label{app:gen_chains} Generative chains are implemented with a dynamic dictionary (9 keys, corresp.\ to the dimensions of DeepA2 data), which is initialized with the source text, provides input for the generative modes, and is updated after each generative step with the mode's generated output. Output is generated with beam search decoding and beam width 2. \begin{table}[tb] \centering \begin{small} \begin{tabularx}{\linewidth}{@{}lp{0.68\linewidth}c@{\hspace{3pt}}c@{}} \toprule \# & {mode sequence} & len. & soph. \\ \midrule \textbf{1} & \colorbox{colbrew1}{\scriptsize$\mathbf{S\!\leadsto\!{A}}$}\ \colorbox{colbrew2}{\scriptsize$\mathbf{S\!\leadsto\!{R}}$}\ \colorbox{colbrew3}{\scriptsize$\mathbf{S\!\leadsto\!{J}}$} & 3 & 0 \smallskip\\ 2 & \colorbox{colbrew3}{\scriptsize$\mathbf{S\!\leadsto\!{J}}$}\ \colorbox{colbrew2}{\scriptsize$\mathbf{S\!\leadsto\!{R}}$}\ \colorbox{colbrew1}{\scriptsize$\mathbf{S\,J\!\leadsto\!{A}}$} & 3 & 1 \smallskip\\ 3 & \colorbox{colbrew3}{\scriptsize$\mathbf{S\!\leadsto\!{J}}$}\ \colorbox{colbrew2}{\scriptsize$\mathbf{S\!\leadsto\!{R}}$}\ \colorbox{colbrew1}{\scriptsize$\mathbf{S\,R\!\leadsto\!{A}}$} & 3 & 1 \smallskip\\ 4 & \colorbox{colbrew3}{\scriptsize$\mathbf{S\!\leadsto\!{J}}$}\ \colorbox{colbrew2}{\scriptsize$\mathbf{S\!\leadsto\!{R}}$}\ \colorbox{colbrew1}{\scriptsize$\mathbf{R\,J\!\leadsto\!{A}}$} & 3 & 2 \smallskip\\ 5 & \colorbox{colbrew3}{\scriptsize$\mathbf{S\!\leadsto\!{J}}$}\ \colorbox{colbrew2}{\scriptsize$\mathbf{S\,J\!\leadsto\!{R}}$}\ \colorbox{colbrew1}{\scriptsize$\mathbf{R\,J\!\leadsto\!{A}}$} & 3 & 3 \smallskip\\ 6 & \colorbox{colbrew3}{\scriptsize$\mathbf{S\!\leadsto\!{J}}$}\ \colorbox{colbrew2}{\scriptsize$\mathbf{S\,J\!\leadsto\!{R}}$}\ \colorbox{colbrew1}{\scriptsize$\mathbf{S\,R\,J\!\leadsto\!{A}}$} & 3 & 3 \smallskip\\ 7 & \colorbox{colbrew2}{\scriptsize$\mathbf{S\!\leadsto\!{R}}$}\ \colorbox{colbrew3}{\scriptsize$\mathbf{S\,R\!\leadsto\!{J}}$}\ \colorbox{colbrew1}{\scriptsize$\mathbf{R\,J\!\leadsto\!{A}}$} & 3 & 3 \smallskip\\ 8 & \colorbox{colbrew2}{\scriptsize$\mathbf{S\!\leadsto\!{R}}$}\ \colorbox{colbrew3}{\scriptsize$\mathbf{S\,R\!\leadsto\!{J}}$}\ \colorbox{colbrew1}{\scriptsize$\mathbf{S\,R\,J\!\leadsto\!{A}}$} & 3 & 3 \smallskip\\ \textbf{9} & \colorbox{colbrew1}{\scriptsize$\mathbf{S\!\leadsto\!{A}}$}\ \colorbox{colbrew2}{\scriptsize$\mathbf{S\,A\!\leadsto\!{R}}$}\ \colorbox{colbrew3}{\scriptsize$\mathbf{S\,A\!\leadsto\!{J}}$}\ \colorbox{colbrew1}{\scriptsize$\mathbf{R\,J\!\leadsto\!{A}}$} & 4 & 4 \smallskip\\ 10 & \colorbox{colbrew1}{\scriptsize$\mathbf{S\!\leadsto\!{A}}$}\ \colorbox{colbrew2}{\scriptsize$\mathbf{S\,A\!\leadsto\!{R}}$}\ \colorbox{colbrew3}{\scriptsize$\mathbf{S\,A\!\leadsto\!{J}}$}\ \colorbox{colbrew1}{\scriptsize$\mathbf{S\,R\,J\!\leadsto\!{A}}$} & 4 & 4 \smallskip\\ 11 & \parbox[t]{\linewidth}{ \raggedright \colorbox{colbrew1}{\scriptsize$\mathbf{S\!\leadsto\!{A}}$}\ \colorbox{colbrew2}{\scriptsize$\mathbf{S\,A\!\leadsto\!{R}}$}\ \colorbox{colbrew3}{\scriptsize$\mathbf{S\,A\!\leadsto\!{J}}$}\ \colorbox{colbrew1}{\scriptsize$\mathbf{S\,R\,J\!\leadsto\!{A}}$}\ \colorbox{colbrew2}{\scriptsize$\mathbf{S\,A\!\leadsto\!{R}}$}\ \colorbox{colbrew3}{\scriptsize$\mathbf{S\,A\!\leadsto\!{J}}$}\ \colorbox{colbrew1}{\scriptsize$\mathbf{S\,R\,J\!\leadsto\!{A}}$} } & 7 & 8 \smallskip\\ 12 &\parbox[t]{\linewidth}{ \raggedright \colorbox{colbrew1}{\scriptsize$\mathbf{S\!\leadsto\!{A}}$}\ \colorbox{colbrew5}{\scriptsize$\mathbf{A\!\leadsto\!{P}}$}\ \colorbox{colbrew4}{\scriptsize$\mathbf{A\!\leadsto\!{C}}$}\ \colorbox{colbrew6}{\scriptsize$\mathbf{P\!\leadsto\!{F}}$}\ \colorbox{colbrew8}{\scriptsize$\mathbf{P\,F\!\leadsto\!{K}}$}\ \colorbox{colbrew5}{\scriptsize$\mathbf{F\,K\!\leadsto\!{P}}$}\ \colorbox{colbrew1}{\scriptsize$\mathbf{P\,C\!\leadsto\!{A}}$}\ \colorbox{colbrew2}{\scriptsize$\mathbf{S\,A\!\leadsto\!{R}}$}\ \colorbox{colbrew3}{\scriptsize$\mathbf{S\,A\!\leadsto\!{J}}$}} & 9 & 11 \smallskip\\ \textbf{13} &\parbox[t]{\linewidth}{ \raggedright \colorbox{colbrew1}{\scriptsize$\mathbf{S\!\leadsto\!{A}}$}\ \colorbox{colbrew5}{\scriptsize$\mathbf{A\!\leadsto\!{P}}$}\ \colorbox{colbrew4}{\scriptsize$\mathbf{A\!\leadsto\!{C}}$}\ \colorbox{colbrew7}{\scriptsize$\mathbf{C\!\leadsto\!{O}}$}\ \colorbox{colbrew8}{\scriptsize$\mathbf{C\,O\!\leadsto\!{K}}$}\ \colorbox{colbrew4}{\scriptsize$\mathbf{O\,K\!\leadsto\!{C}}$}\ \colorbox{colbrew1}{\scriptsize$\mathbf{P\,C\!\leadsto\!{A}}$}\ \colorbox{colbrew2}{\scriptsize$\mathbf{S\,A\!\leadsto\!{R}}$}\ \colorbox{colbrew3}{\scriptsize$\mathbf{S\,A\!\leadsto\!{J}}$}} & 9 & 11 \smallskip\\ 14 &\parbox[t]{\linewidth}{ \raggedright \colorbox{colbrew1}{\scriptsize$\mathbf{S\!\leadsto\!{A}}$}\ \colorbox{colbrew5}{\scriptsize$\mathbf{A\!\leadsto\!{P}}$}\ \colorbox{colbrew4}{\scriptsize$\mathbf{A\!\leadsto\!{C}}$}\ \colorbox{colbrew7}{\scriptsize$\mathbf{C\!\leadsto\!{O}}$}\ \colorbox{colbrew8}{\scriptsize$\mathbf{C\,O\!\leadsto\!{K}}$}\ \colorbox{colbrew4}{\scriptsize$\mathbf{O\,K\!\leadsto\!{C}}$}\ \colorbox{colbrew1}{\scriptsize$\mathbf{P\,C\!\leadsto\!{A}}$}\ \colorbox{colbrew5}{\scriptsize$\mathbf{A\!\leadsto\!{P}}$}\ \colorbox{colbrew4}{\scriptsize$\mathbf{A\!\leadsto\!{C}}$}\ \colorbox{colbrew6}{\scriptsize$\mathbf{P\!\leadsto\!{F}}$}\ \colorbox{colbrew8}{\scriptsize$\mathbf{P\,F\!\leadsto\!{K}}$}\ \colorbox{colbrew5}{\scriptsize$\mathbf{F\,K\!\leadsto\!{P}}$}\ \colorbox{colbrew1}{\scriptsize$\mathbf{P\,C\!\leadsto\!{A}}$}\ \colorbox{colbrew2}{\scriptsize$\mathbf{S\,A\!\leadsto\!{R}}$}\ \colorbox{colbrew3}{\scriptsize$\mathbf{S\,A\!\leadsto\!{J}}$} }\vspace{1pt} & 15 & 20 \smallskip\\ 15 &\parbox[t]{\linewidth}{ \raggedright \colorbox{colbrew1}{\scriptsize$\mathbf{S\!\leadsto\!{A}}$}\ \colorbox{colbrew5}{\scriptsize$\mathbf{A\!\leadsto\!{P}}$}\ \colorbox{colbrew4}{\scriptsize$\mathbf{A\!\leadsto\!{C}}$}\ \colorbox{colbrew6}{\scriptsize$\mathbf{P\!\leadsto\!{F}}$}\ \colorbox{colbrew7}{\scriptsize$\mathbf{C\,P\,F\!\leadsto\!{O}}$}\ \colorbox{colbrew8}{\scriptsize$\mathbf{P\,F\,C\,O\!\leadsto\!{K}}$}\ \colorbox{colbrew5}{\scriptsize$\mathbf{F\,K\!\leadsto\!{P}}$}\ \colorbox{colbrew4}{\scriptsize$\mathbf{O\,K\!\leadsto\!{C}}$}\ \colorbox{colbrew1}{\scriptsize$\mathbf{P\,C\!\leadsto\!{A}}$}\ \colorbox{colbrew2}{\scriptsize$\mathbf{S\,A\!\leadsto\!{R}}$}\ \colorbox{colbrew3}{\scriptsize$\mathbf{S\,A\!\leadsto\!{J}}$} } & 11 & 18 \smallskip\\ 16 & \parbox[t]{\linewidth}{ \raggedright \colorbox{colbrew1}{\scriptsize$\mathbf{S\!\leadsto\!{A}}$}\ \colorbox{colbrew5}{\scriptsize$\mathbf{A\!\leadsto\!{P}}$}\ \colorbox{colbrew4}{\scriptsize$\mathbf{A\!\leadsto\!{C}}$}\ \colorbox{colbrew6}{\scriptsize$\mathbf{P\!\leadsto\!{F}}$}\ \colorbox{colbrew7}{\scriptsize$\mathbf{C\,P\,F\!\leadsto\!{O}}$}\ \colorbox{colbrew6}{\scriptsize$\mathbf{P\,C\,O\!\leadsto\!{F}}$}\ \colorbox{colbrew8}{\scriptsize$\mathbf{P\,F\,C\,O\!\leadsto\!{K}}$}\ \colorbox{colbrew5}{\scriptsize$\mathbf{F\,K\!\leadsto\!{P}}$}\ \colorbox{colbrew4}{\scriptsize$\mathbf{O\,K\!\leadsto\!{C}}$}\ \colorbox{colbrew1}{\scriptsize$\mathbf{P\,C\!\leadsto\!{A}}$}\ \colorbox{colbrew2}{\scriptsize$\mathbf{S\,A\!\leadsto\!{R}}$}\ \colorbox{colbrew3}{\scriptsize$\mathbf{S\,A\!\leadsto\!{J}}$} } & 12 & 21 \\ \bottomrule \end{tabularx} \end{small} \caption{16 generative chains (without final formalization sub-sequences) evaluated in this study. The illustrative chains highlighted in the main paper are \#1 (straight), \#9 (hermeneutic cycle), and \#13 (logical streamlining).} \label{table:all_generative_chains_app} \end{table} Table~\ref{table:all_generative_chains_app} displays all generative chains we resort to in this study, all of which are used in the \textit{first experiment}. The \textit{second experiment} makes use of chains 1--11. The \textit{third experiment} deploys chains 1--13. \section{Additional Results} \label{app:add_results} Table~\ref{table:main_subsets} assesses ArgumentAnalyst's reconstructions on specific subsets of the {\small AAAC02} dataset (defined in Section~\ref{sec:datasets}) for three representative generative chains. \begin{table}[tb] \begin{small} \begin{tabularx}{\linewidth}{l *{5}{Y}} \toprule {} & \multicolumn{2}{c}{\emph{inference}} & \multicolumn{2}{c}{\emph{presentation}} \\ \cmidrule(l){2-3} \cmidrule(l){4-5} {} & {\small simple} & {\small compl.} & {\small plain} & {\small mutil.} & {\small C\&M} \\ chain & {\scriptsize N=1274} & {\scriptsize N=180} & {\scriptsize N=330} & {\scriptsize N=114} & {\scriptsize N=70} \\ \midrule \multicolumn{6}{c}{\textbf{\scriptsize SYS-PP \& SYS-RP \& SYS-RC \& SYS-US}} \\ straight & .95 & .72 & .98 & .61 & .69 \\ herm.\ c. & .94 & .68 & .96 & .67 & .61 \\ log.\ str. & .95 & .68 & .98 & .64 & .61 \\ \midrule \multicolumn{6}{c}{\textbf{\scriptsize SYS-VAL}} \\ straight & .84 & .48 & .88 & .40 & .34 \\ herm.\ c. & .83 & .56 & .84 & .49 & .50 \\ log.\ str. & .82 & .47 & .86 & .46 & .37 \\\midrule \multicolumn{6}{c}{\textbf{\scriptsize EXE-RSS}} \\ straight & .03 & -.25 & .05 & -.31 & -.30 \\ herm.\ c. & .20 & .08 & .15 & .08 & .11 \\ log.\ str. & .17 & -.01 & .13 & .01 & -.06 \\ \midrule \multicolumn{6}{c}{\textbf{\scriptsize EXE-JSS}} \\ straight & .06 & -.32 & .10 & -.37 & -.37 \\ herm.\ c. & .23 & -.06 & .21 & -.03 & -.21 \\ log.\ str. & .13 & -.26 & .07 & -.26 & -.40 \\ \bottomrule\end{tabularx} \caption{Performance of ArgumentAnalyst on specific subsets (columns) of the {\small AAAC02} data as measured by selected systematic and exegetic metrics (sub-tables). Rows display results for three illustrative generative chains (\emph{straight}, \emph{hermeneutic cycle}, \emph{logical streamlining}).} \label{table:main_subsets} \end{small} \end{table} Table~\ref{table:main_results_app} details the performance of ArgumentAnalyst on the entire {\small AAAC02} dataset as measured by tailor-made argumentative metrics. Table~\ref{table:main_results_app_oos} shows the corresponding performance on out-of -sample eval data {\small AAAC01}. \begin{table*} \begin{small} \begin{tabularx}{\linewidth}{l *{12}{Y}} \toprule {} & \multicolumn{6}{c}{\emph{systematic metrics} (\textbf{\scriptsize SYS-*})} & \multicolumn{6}{c}{\emph{exegetic metrics} (\textbf{\scriptsize EXE-*})} \\ \cmidrule(r){2-7} \cmidrule(r){8-13} chain & \textbf{\scriptsize PP} & \textbf{\scriptsize RP} & \textbf{\scriptsize RC} & \textbf{\scriptsize US} & \textbf{\scriptsize SCH} & \textbf{\scriptsize VAL} & \textbf{\scriptsize MEQ} & \textbf{\scriptsize RSS} & \textbf{\scriptsize JSS} & \textbf{\scriptsize PPR} & \textbf{\scriptsize PPJ} & \textbf{\scriptsize TE} \\ \midrule \#1 & 0.95 & 0.97 & 0.96 & 0.96 & 0.33 & 0.73 & 0.80 & -0.08 & -0.10 & 0.93 & 0.93 & 0.63 \\ \#2 & 0.95 & 0.97 & 0.94 & 0.94 & 0.33 & 0.71 & 0.80 & -0.09 & 0.04 & 0.93 & 0.93 & 0.67 \\ \#3 & 0.95 & 0.98 & 0.95 & 0.93 & 0.31 & 0.70 & 0.80 & 0.10 & -0.11 & 0.93 & 0.93 & 0.62 \\ \#4 & 0.94 & 0.97 & 0.94 & 0.92 & 0.30 & 0.70 & 0.80 & 0.12 & -0.00 & 0.93 & 0.93 & 0.66 \\ \#5 & 0.94 & 0.97 & 0.95 & 0.91 & 0.30 & 0.70 & 0.83 & 0.13 & 0.05 & 0.94 & 0.93 & 0.69 \\ \#6 & 0.94 & 0.97 & 0.95 & 0.93 & 0.31 & 0.70 & 0.83 & 0.10 & 0.03 & 0.94 & 0.93 & 0.67 \\ \#7 & 0.93 & 0.97 & 0.95 & 0.92 & 0.29 & 0.70 & 0.83 & 0.13 & 0.05 & 0.93 & 0.92 & 0.68 \\ \#8 & 0.94 & 0.97 & 0.95 & 0.93 & 0.30 & 0.69 & 0.83 & 0.10 & 0.02 & 0.93 & 0.92 & 0.67 \\ \#9 & 0.95 & 0.98 & 0.95 & 0.93 & 0.31 & 0.72 & 0.82 & 0.16 & 0.12 & 0.93 & 0.92 & 0.71 \\ \#10 & 0.96 & 0.98 & 0.96 & 0.94 & 0.32 & 0.71 & 0.82 & 0.14 & 0.09 & 0.93 & 0.92 & 0.69 \\ \#11 & 0.96 & 0.98 & 0.96 & 0.93 & 0.32 & 0.71 & 0.82 & 0.15 & 0.11 & 0.93 & 0.92 & 0.71 \\ \#12 & 0.93 & 0.95 & 0.94 & 0.94 & 0.32 & 0.71 & 0.81 & -0.17 & -0.08 & 0.93 & 0.92 & 0.68 \\ \#13 & 0.95 & 0.97 & 0.96 & 0.95 & 0.32 & 0.72 & 0.82 & 0.11 & -0.00 & 0.93 & 0.92 & 0.69 \\ \#14 & 0.93 & 0.95 & 0.94 & 0.94 & 0.32 & 0.70 & 0.81 & -0.18 & -0.14 & 0.93 & 0.92 & 0.66 \\ \#15 & 0.92 & 0.96 & 0.94 & 0.95 & 0.33 & 0.71 & 0.81 & -0.20 & -0.19 & 0.93 & 0.92 & 0.65 \\ \#16 & 0.92 & 0.96 & 0.94 & 0.94 & 0.33 & 0.72 & 0.81 & -0.20 & -0.19 & 0.93 & 0.92 & 0.65 \\ \bottomrule \end{tabularx} \end{small} \caption{Performance of ArgumentAnalyst for systematic and exegetic metrics on the entire OOD eval data ({\small AAAC02}). Rows display mean results for each of the 16 generative chains.} \label{table:main_results_app} \end{table*} \begin{table*} \begin{small} \begin{tabularx}{\linewidth}{l *{12}{Y}} \toprule {} & \multicolumn{6}{c}{\emph{systematic metrics} (\textbf{\scriptsize SYS-*})} & \multicolumn{6}{c}{\emph{exegetic metrics} (\textbf{\scriptsize EXE-*})} \\ \cmidrule(r){2-7} \cmidrule(r){8-13} chain & \textbf{\scriptsize PP} & \textbf{\scriptsize RP} & \textbf{\scriptsize RC} & \textbf{\scriptsize US} & \textbf{\scriptsize SCH} & \textbf{\scriptsize VAL} & \textbf{\scriptsize MEQ} & \textbf{\scriptsize RSS} & \textbf{\scriptsize JSS} & \textbf{\scriptsize PPR} & \textbf{\scriptsize PPJ} & \textbf{\scriptsize TE} \\ \midrule \#1 & 0.97 & 0.98 & 0.97 & 0.98 & 0.61 & 0.87 & 0.78 & 0.08 & 0.13 & 0.95 & 0.95 & 0.64 \\ \#2 & 0.97 & 0.98 & 0.96 & 0.97 & 0.60 & 0.87 & 0.78 & 0.09 & 0.24 & 0.95 & 0.95 & 0.68 \\ \#3 & 0.96 & 0.98 & 0.96 & 0.97 & 0.58 & 0.86 & 0.78 & 0.26 & 0.12 & 0.95 & 0.95 & 0.64 \\ \#4 & 0.95 & 0.98 & 0.95 & 0.96 & 0.57 & 0.85 & 0.78 & 0.26 & 0.20 & 0.95 & 0.95 & 0.67 \\ \#5 & 0.96 & 0.98 & 0.95 & 0.96 & 0.57 & 0.84 & 0.80 & 0.27 & 0.27 & 0.96 & 0.95 & 0.70 \\ \#6 & 0.97 & 0.98 & 0.96 & 0.96 & 0.58 & 0.84 & 0.80 & 0.26 & 0.24 & 0.96 & 0.95 & 0.69 \\ \#7 & 0.95 & 0.98 & 0.96 & 0.96 & 0.57 & 0.86 & 0.79 & 0.27 & 0.26 & 0.95 & 0.94 & 0.71 \\ \#8 & 0.96 & 0.98 & 0.96 & 0.96 & 0.57 & 0.85 & 0.79 & 0.26 & 0.25 & 0.95 & 0.94 & 0.70 \\ \#9 & 0.97 & 0.99 & 0.97 & 0.97 & 0.59 & 0.88 & 0.79 & 0.31 & 0.36 & 0.96 & 0.95 & 0.78 \\ \#10 & 0.97 & 0.99 & 0.97 & 0.97 & 0.60 & 0.87 & 0.79 & 0.30 & 0.34 & 0.96 & 0.95 & 0.77 \\ \#11 & 0.97 & 0.99 & 0.97 & 0.97 & 0.60 & 0.87 & 0.79 & 0.31 & 0.35 & 0.96 & 0.95 & 0.77 \\ \#12 & 0.95 & 0.97 & 0.95 & 0.96 & 0.54 & 0.84 & 0.79 & 0.17 & 0.25 & 0.96 & 0.94 & 0.75 \\ \#13 & 0.97 & 0.99 & 0.97 & 0.97 & 0.61 & 0.87 & 0.79 & 0.29 & 0.32 & 0.96 & 0.95 & 0.76 \\ \#14 & 0.95 & 0.97 & 0.95 & 0.96 & 0.54 & 0.84 & 0.79 & 0.16 & 0.24 & 0.96 & 0.94 & 0.74 \\ \#15 & 0.94 & 0.97 & 0.95 & 0.96 & 0.54 & 0.85 & 0.79 & 0.15 & 0.18 & 0.96 & 0.95 & 0.73 \\ \#16 & 0.94 & 0.97 & 0.95 & 0.95 & 0.54 & 0.85 & 0.79 & 0.15 & 0.19 & 0.96 & 0.95 & 0.73 \\ \bottomrule \end{tabularx} \end{small} \caption{Performance of ArgumentAnalyst for systematic and exegetic metrics on the entire OOS eval data ({\small AAAC01}). Rows display mean results for each of the 16 generative chains.} \label{table:main_results_app_oos} \end{table*} Distinguishing four mutually exclusive subsets of {\small AAAC02}, Tables~\ref{table_main_subsets1}--\ref{table_main_subsets4} detail the the quality of ArgumentAnalyst's reconstruction for easy and difficult problems. Tables~\ref{table_main_subsets_oos1}--\ref{table_main_subsets_oos4} present the corresponding out-of-sample performance on the equally partitioned {\small AAAC01} dataset (eval split). \begin{table} \begin{small} \begin{tabularx}{\linewidth}{l *{5}{Y}} \toprule {} & \multicolumn{2}{c}{\emph{inference}} & \multicolumn{2}{c}{\emph{presentation}} \\ \cmidrule(l){2-3} \cmidrule(l){4-5} chain & simple & complex & plain & mutilat. & C\&M \\ \midrule \multicolumn{6}{c}{\textbf{\scriptsize SYS-PP $\&$ SYS-RP $\&$ SYS-RC $\&$ SYS-US}} \\ \#1 & 0.95 & 0.72 & 0.98 & 0.61 & 0.69 \\ \#2 & 0.93 & 0.66 & 0.96 & 0.59 & 0.60 \\ \#3 & 0.92 & 0.69 & 0.96 & 0.68 & 0.73 \\ \#4 & 0.92 & 0.66 & 0.95 & 0.69 & 0.60 \\ \#5 & 0.92 & 0.68 & 0.95 & 0.59 & 0.61 \\ \#6 & 0.93 & 0.66 & 0.97 & 0.68 & 0.59 \\ \#7 & 0.92 & 0.67 & 0.96 & 0.62 & 0.64 \\ \#8 & 0.92 & 0.66 & 0.95 & 0.64 & 0.66 \\ \#9 & 0.94 & 0.68 & 0.96 & 0.67 & 0.61 \\ \#10 & 0.94 & 0.73 & 0.98 & 0.68 & 0.77 \\ \#11 & 0.94 & 0.69 & 0.98 & 0.66 & 0.73 \\ \#12 & 0.93 & 0.60 & 0.95 & 0.57 & 0.50 \\ \#13 & 0.95 & 0.68 & 0.98 & 0.64 & 0.61 \\ \#14 & 0.92 & 0.57 & 0.93 & 0.58 & 0.49 \\ \#15 & 0.92 & 0.66 & 0.95 & 0.59 & 0.56 \\ \#16 & 0.92 & 0.64 & 0.95 & 0.56 & 0.60 \\ \bottomrule\end{tabularx} \caption{Performance of ArgumentAnalyst for selected systematic metric (\textbf{\scriptsize SYS-PP $\&$ SYS-RP $\&$ SYS-RC $\&$ SYS-US}) on specific subsets (columns) of the OOD eval data.} \label{table_main_subsets1} \end{small} \end{table} \begin{table} \begin{small} \begin{tabularx}{\linewidth}{l *{5}{Y}} \toprule {} & \multicolumn{2}{c}{\emph{inference}} & \multicolumn{2}{c}{\emph{presentation}} \\ \cmidrule(l){2-3} \cmidrule(l){4-5} chain & simple & complex & plain & mutilat. & C\&M \\ \midrule \multicolumn{6}{c}{\textbf{\scriptsize SYS-VAL}} \\ \#1 & 0.84 & 0.48 & 0.88 & 0.40 & 0.34 \\ \#2 & 0.82 & 0.54 & 0.84 & 0.47 & 0.46 \\ \#3 & 0.82 & 0.44 & 0.87 & 0.39 & 0.36 \\ \#4 & 0.81 & 0.48 & 0.83 & 0.44 & 0.43 \\ \#5 & 0.82 & 0.44 & 0.85 & 0.45 & 0.37 \\ \#6 & 0.81 & 0.46 & 0.85 & 0.42 & 0.41 \\ \#7 & 0.83 & 0.44 & 0.82 & 0.46 & 0.49 \\ \#8 & 0.80 & 0.44 & 0.83 & 0.40 & 0.40 \\ \#9 & 0.83 & 0.56 & 0.84 & 0.49 & 0.50 \\ \#10 & 0.82 & 0.50 & 0.85 & 0.46 & 0.43 \\ \#11 & 0.82 & 0.48 & 0.84 & 0.46 & 0.41 \\ \#12 & 0.81 & 0.47 & 0.84 & 0.42 & 0.37 \\ \#13 & 0.82 & 0.47 & 0.86 & 0.46 & 0.37 \\ \#14 & 0.80 & 0.48 & 0.82 & 0.41 & 0.40 \\ \#15 & 0.82 & 0.45 & 0.84 & 0.50 & 0.33 \\ \#16 & 0.83 & 0.52 & 0.85 & 0.46 & 0.43 \\ \bottomrule\end{tabularx} \caption{Performance of ArgumentAnalyst for selected systematic metric (\textbf{\scriptsize SYS-VAL}) on specific subsets (columns) of the OOD eval data.} \label{table_main_subsets2} \end{small} \end{table} \begin{table} \begin{small} \begin{tabularx}{\linewidth}{l *{5}{Y}} \toprule {} & \multicolumn{2}{c}{\emph{inference}} & \multicolumn{2}{c}{\emph{presentation}} \\ \cmidrule(l){2-3} \cmidrule(l){4-5} chain & simple & complex & plain & mutilat. & C\&M \\ \midrule \multicolumn{6}{c}{\textbf{\scriptsize EXE-RSS}} \\ \#1 & 0.03 & -0.25 & 0.05 & -0.31 & -0.30 \\ \#2 & 0.02 & -0.27 & 0.07 & -0.33 & -0.31 \\ \#3 & 0.15 & -0.03 & 0.12 & -0.01 & -0.06 \\ \#4 & 0.16 & 0.01 & 0.12 & -0.01 & 0.04 \\ \#5 & 0.18 & 0.04 & 0.13 & 0.04 & 0.06 \\ \#6 & 0.17 & -0.04 & 0.12 & -0.02 & -0.09 \\ \#7 & 0.18 & 0.05 & 0.14 & 0.03 & 0.08 \\ \#8 & 0.16 & -0.02 & 0.12 & -0.02 & -0.07 \\ \#9 & 0.20 & 0.08 & 0.15 & 0.08 & 0.11 \\ \#10 & 0.19 & 0.04 & 0.15 & 0.05 & -0.01 \\ \#11 & 0.21 & 0.04 & 0.15 & 0.07 & -0.03 \\ \#12 & -0.14 & -0.20 & -0.12 & -0.23 & -0.25 \\ \#13 & 0.17 & -0.01 & 0.13 & 0.01 & -0.06 \\ \#14 & -0.17 & -0.22 & -0.16 & -0.23 & -0.26 \\ \#15 & -0.19 & -0.23 & -0.24 & -0.24 & -0.23 \\ \#16 & -0.19 & -0.23 & -0.24 & -0.25 & -0.24 \\ \bottomrule\end{tabularx} \caption{Performance of ArgumentAnalyst for selected exegetic metrics (\textbf{\scriptsize EXE-RSS}) on specific subsets (columns) of the OOD eval data.} \label{table_main_subsets3} \end{small} \end{table} \begin{table} \begin{small} \begin{tabularx}{\linewidth}{l *{5}{Y}} \toprule {} & \multicolumn{2}{c}{\emph{inference}} & \multicolumn{2}{c}{\emph{presentation}} \\ \cmidrule(l){2-3} \cmidrule(l){4-5} chain & simple & complex & plain & mutilat. & C\&M \\ \midrule \multicolumn{6}{c}{\textbf{\scriptsize EXE-JSS}} \\ \#1 & 0.06 & -0.32 & 0.10 & -0.37 & -0.37 \\ \#2 & 0.16 & -0.17 & 0.19 & -0.12 & -0.26 \\ \#3 & 0.02 & -0.32 & 0.03 & -0.42 & -0.33 \\ \#4 & 0.12 & -0.17 & 0.13 & -0.14 & -0.19 \\ \#5 & 0.15 & -0.11 & 0.15 & -0.08 & -0.18 \\ \#6 & 0.16 & -0.14 & 0.15 & -0.22 & -0.22 \\ \#7 & 0.16 & -0.11 & 0.16 & -0.10 & -0.18 \\ \#8 & 0.15 & -0.18 & 0.14 & -0.19 & -0.27 \\ \#9 & 0.23 & -0.06 & 0.21 & -0.03 & -0.21 \\ \#10 & 0.23 & -0.12 & 0.21 & -0.15 & -0.27 \\ \#11 & 0.25 & -0.13 & 0.20 & -0.11 & -0.27 \\ \#12 & 0.06 & -0.36 & 0.04 & -0.28 & -0.47 \\ \#13 & 0.13 & -0.26 & 0.07 & -0.26 & -0.40 \\ \#14 & -0.02 & -0.39 & -0.07 & -0.31 & -0.48 \\ \#15 & -0.08 & -0.41 & -0.16 & -0.36 & -0.49 \\ \#16 & -0.08 & -0.37 & -0.15 & -0.35 & -0.45 \\ \bottomrule\end{tabularx} \caption{Performance of ArgumentAnalyst for selected exegetic metric (\textbf{\scriptsize EXE-JSS}) on specific subsets (columns) of the OOD eval data.} \label{table_main_subsets4} \end{small} \end{table} \begin{table} \begin{small} \begin{tabularx}{\linewidth}{l *{5}{Y}} \toprule {} & \multicolumn{2}{c}{\emph{inference}} & \multicolumn{2}{c}{\emph{presentation}} \\ \cmidrule(l){2-3} \cmidrule(l){4-5} chain & simple & complex & plain & mutilat. & C\&M \\ \midrule \multicolumn{6}{c}{\textbf{\scriptsize SYS-PP $\&$ SYS-RP $\&$ SYS-RC $\&$ SYS-US}} \\ \#1 & 0.98 & 0.78 & 1.00 & 0.75 & 0.76 \\ \#2 & 0.97 & 0.77 & 0.99 & 0.70 & 0.73 \\ \#3 & 0.95 & 0.79 & 0.96 & 0.77 & 0.74 \\ \#4 & 0.95 & 0.76 & 0.96 & 0.69 & 0.73 \\ \#5 & 0.97 & 0.75 & 0.98 & 0.66 & 0.74 \\ \#6 & 0.96 & 0.77 & 0.98 & 0.73 & 0.78 \\ \#7 & 0.96 & 0.73 & 0.96 & 0.71 & 0.72 \\ \#8 & 0.97 & 0.75 & 0.97 & 0.73 & 0.74 \\ \#9 & 0.98 & 0.80 & 0.99 & 0.80 & 0.70 \\ \#10 & 0.98 & 0.78 & 0.99 & 0.80 & 0.73 \\ \#11 & 0.98 & 0.78 & 0.99 & 0.80 & 0.71 \\ \#12 & 0.97 & 0.71 & 0.97 & 0.70 & 0.67 \\ \#13 & 0.98 & 0.81 & 0.99 & 0.76 & 0.78 \\ \#14 & 0.96 & 0.73 & 0.96 & 0.70 & 0.69 \\ \#15 & 0.97 & 0.72 & 0.96 & 0.70 & 0.68 \\ \#16 & 0.97 & 0.72 & 0.96 & 0.68 & 0.68 \\ \bottomrule \end{tabularx} \caption{Performance of ArgumentAnalyst for selected systematic metric (\textbf{\scriptsize SYS-PP $\&$ SYS-RP $\&$ SYS-RC $\&$ SYS-US}) on specific subsets (columns) of the OOS eval data.} \label{table_main_subsets_oos1} \end{small} \end{table} \begin{table} \begin{small} \begin{tabularx}{\linewidth}{l *{5}{Y}} \toprule {} & \multicolumn{2}{c}{\emph{inference}} & \multicolumn{2}{c}{\emph{presentation}} \\ \cmidrule(l){2-3} \cmidrule(l){4-5} chain & simple & complex & plain & mutilat. & C\&M \\ \midrule \multicolumn{6}{c}{\textbf{\scriptsize SYS-VAL}} \\ \#1 & 0.97 & 0.68 & 0.96 & 0.74 & 0.74 \\ \#2 & 0.97 & 0.68 & 0.97 & 0.73 & 0.71 \\ \#3 & 0.94 & 0.70 & 0.94 & 0.72 & 0.71 \\ \#4 & 0.95 & 0.65 & 0.94 & 0.68 & 0.71 \\ \#5 & 0.96 & 0.59 & 0.95 & 0.65 & 0.62 \\ \#6 & 0.95 & 0.62 & 0.96 & 0.69 & 0.63 \\ \#7 & 0.94 & 0.66 & 0.94 & 0.66 & 0.71 \\ \#8 & 0.95 & 0.67 & 0.95 & 0.69 & 0.69 \\ \#9 & 0.97 & 0.65 & 0.97 & 0.72 & 0.69 \\ \#10 & 0.97 & 0.67 & 0.97 & 0.68 & 0.72 \\ \#11 & 0.97 & 0.70 & 0.97 & 0.68 & 0.74 \\ \#12 & 0.95 & 0.63 & 0.95 & 0.72 & 0.70 \\ \#13 & 0.97 & 0.68 & 0.95 & 0.73 & 0.73 \\ \#14 & 0.95 & 0.63 & 0.94 & 0.72 & 0.69 \\ \#15 & 0.95 & 0.65 & 0.94 & 0.75 & 0.71 \\ \#16 & 0.95 & 0.65 & 0.95 & 0.73 & 0.71 \\ \bottomrule \end{tabularx} \caption{Performance of ArgumentAnalyst for selected systematic metric (\textbf{\scriptsize SYS-VAL}) on specific subsets (columns) of the OOS eval data.} \label{table_main_subsets_oos2} \end{small} \end{table} \begin{table} \begin{small} \begin{tabularx}{\linewidth}{l *{5}{Y}} \toprule {} & \multicolumn{2}{c}{\emph{inference}} & \multicolumn{2}{c}{\emph{presentation}} \\ \cmidrule(l){2-3} \cmidrule(l){4-5} chain & simple & complex & plain & mutilat. & C\&M \\ \midrule \multicolumn{6}{c}{\textbf{\scriptsize EXE-RSS}} \\ \#1 & 0.19 & -0.16 & 0.11 & -0.07 & -0.18 \\ \#2 & 0.21 & -0.13 & 0.10 & -0.05 & -0.15 \\ \#3 & 0.30 & 0.11 & 0.17 & 0.22 & 0.06 \\ \#4 & 0.29 & 0.16 & 0.16 & 0.24 & 0.16 \\ \#5 & 0.32 & 0.18 & 0.19 & 0.23 & 0.18 \\ \#6 & 0.31 & 0.11 & 0.18 & 0.19 & 0.07 \\ \#7 & 0.30 & 0.15 & 0.17 & 0.25 & 0.16 \\ \#8 & 0.30 & 0.12 & 0.17 & 0.24 & 0.08 \\ \#9 & 0.33 & 0.23 & 0.19 & 0.30 & 0.23 \\ \#10 & 0.33 & 0.20 & 0.19 & 0.27 & 0.16 \\ \#11 & 0.33 & 0.21 & 0.19 & 0.28 & 0.16 \\ \#12 & 0.20 & 0.06 & 0.11 & 0.16 & 0.04 \\ \#13 & 0.33 & 0.12 & 0.19 & 0.26 & 0.07 \\ \#14 & 0.20 & 0.06 & 0.10 & 0.16 & 0.03 \\ \#15 & 0.18 & 0.04 & 0.07 & 0.14 & 0.00 \\ \#16 & 0.18 & 0.04 & 0.07 & 0.11 & 0.02 \\ \bottomrule \end{tabularx} \caption{Performance of ArgumentAnalyst for selected exegetic metrics (\textbf{\scriptsize EXE-RSS}) on specific subsets (columns) of the OOS eval data.} \label{table_main_subsets_oos3} \end{small} \end{table} \begin{table} \begin{small} \begin{tabularx}{\linewidth}{l *{5}{Y}} \toprule {} & \multicolumn{2}{c}{\emph{inference}} & \multicolumn{2}{c}{\emph{presentation}} \\ \cmidrule(l){2-3} \cmidrule(l){4-5} chain & simple & complex & plain & mutilat. & C\&M \\ \midrule \multicolumn{6}{c}{\textbf{\scriptsize EXE-JSS}} \\ \#1 & 0.35 & -0.14 & 0.36 & -0.09 & -0.13 \\ \#2 & 0.40 & 0.02 & 0.39 & 0.10 & 0.02 \\ \#3 & 0.30 & -0.15 & 0.29 & -0.08 & -0.15 \\ \#4 & 0.36 & 0.03 & 0.33 & 0.08 & -0.02 \\ \#5 & 0.41 & 0.15 & 0.39 & 0.17 & 0.11 \\ \#6 & 0.40 & 0.04 & 0.38 & 0.10 & -0.01 \\ \#7 & 0.39 & 0.12 & 0.37 & 0.15 & 0.06 \\ \#8 & 0.39 & 0.08 & 0.38 & 0.10 & -0.02 \\ \#9 & 0.47 & 0.16 & 0.42 & 0.31 & 0.13 \\ \#10 & 0.47 & 0.11 & 0.42 & 0.26 & 0.02 \\ \#11 & 0.47 & 0.11 & 0.42 & 0.26 & 0.02 \\ \#12 & 0.40 & -0.01 & 0.35 & 0.14 & -0.08 \\ \#13 & 0.45 & 0.03 & 0.36 & 0.21 & -0.01 \\ \#14 & 0.38 & -0.00 & 0.30 & 0.15 & -0.05 \\ \#15 & 0.30 & -0.04 & 0.22 & 0.07 & -0.07 \\ \#16 & 0.30 & -0.03 & 0.22 & 0.11 & -0.06 \\ \bottomrule \end{tabularx} \caption{Performance of ArgumentAnalyst for selected exegetic metric (\textbf{\scriptsize EXE-JSS}) on specific subsets (columns) of the OOS eval data.} \label{table_main_subsets_oos4} \end{small} \end{table} \section{Synthetic Argument Data} \label{app:aaac} A synthetically generated {\small AAAC} record, which nicely illustrates the underdetermination of argument reconstruction, with two implicit premises, one distracting statement and a simple (one-step) argument (formatted as presented to the model): \begin{scriptsize}\ttfamily \noindent\textit{source:} It is not the case that Tracy is not an admirer of Fullerton and Tracy has seen La Habra. Plus, if someone loves Chico, then they haven't visited Monterey, owing to the fact that loving Laguna Beach is sufficient for not having visited Monterey. \noindent\textit{reasons:} loving Laguna Beach is sufficient for not having visited Monterey (ref: (2)) \noindent\textit{conjectures:} if someone loves Chico, then they haven't visited Monterey (ref: (4)) \noindent\textit{argdown:}\newline (1) If someone is an admirer of Chico, then they are an admirer of Laguna Beach or a visitor of Stockton.\newline (2) If someone admires Laguna Beach, then they haven't visited Monterey.\newline (3) If someone has visited Stockton, then they haven't visited Monterey.\newline --\newline with generalized dilemma (neg variant) from (1) (2) (3)\newline --\newline (4) If someone admires Chico, then they haven't visited Monterey. \noindent\textit{premises:} If someone is an admirer of Chico, then they are an admirer of Laguna Beach or a visitor of Stockton. (ref: (1)) | If someone admires Laguna Beach, then they haven't visited Monterey. (ref: (2)) | If someone has visited Stockton, then they haven't visited Monterey. (ref: (3)) \noindent\textit{conclusion:} If someone admires Chico, then they haven't visited Monterey. (ref: (4)) \noindent\textit{premises\_form:} (x): Fx -> (G x v H x) (ref: (1)) | (x): G x -> not I x (ref: (2)) | (x): H x -> not I x (ref: (3)) \noindent\textit{conclusion\_form:} (x): F x -> not I x (ref: (4)) \noindent\textit{keys:} F: admirer of Chico | G: admirer of Laguna Beach | H: visitor of Stockton | I: visitor of Monterey \end{scriptsize} \section{Training Set-up} \label{app:training_setup} By interpreting a generative mode as a sequence-to-sequence task, we may translate a multi-angular DeepA2 dataset (e.g., {\small AAAC01}) into a multi-task sequence-to-sequence format, on which a sequence-to-sequence model can be trained. For each record in the multi-angular DeepA2 dataset, we randomly sample 14 modes in accordance with the weights provided in Table~\ref{table:all_generative_modes} and add, for each mode, a corresponding sequence-to-sequence record to the training data. This results, for {\small AAAC01}, in a sequence-to-sequence training dataset with $14\times 16.000$ records. \begin{table}[htbp] \centering \begin{tabularx}{\linewidth}{@{}p{0.20\linewidth}@{}Y@{}Y|p{0.17\linewidth}@{}Y@{}Y|p{0.23\linewidth}@{}Y@{}Y@{}} \toprule mode & w\textsubscript 1 & w\textsubscript{2} & mode & w\textsubscript 1 & w\textsubscript{2} & mode & w\textsubscript 1 & w\textsubscript{2} \\ \midrule \colorbox{colbrew1}{\scriptsize$\mathbf{S\!\leadsto\!{A}}$} & 1. & 1. & \colorbox{colbrew2}{\scriptsize$\mathbf{S\!\leadsto\!{R}}$} & 1. & 1. & \colorbox{colbrew6}{\scriptsize$\mathbf{P\!\leadsto\!{F}}$} & .7 & -- \\ \colorbox{colbrew1}{\scriptsize$\mathbf{S\,R\!\leadsto\!{A}}$} & 1. & 1. & \colorbox{colbrew2}{\scriptsize$\mathbf{S\,J\!\leadsto\!{R}}$} & 1. & 1. & \colorbox{colbrew6}{\scriptsize$\mathbf{P\,C\,O\!\leadsto\!{F}}$} & .7 & -- \\ \colorbox{colbrew1}{\scriptsize$\mathbf{S\,J\!\leadsto\!{A}}$} & 1. & 1. & \colorbox{colbrew2}{\scriptsize$\mathbf{S\,A\!\leadsto\!{R}}$} & 1. & 1. & \colorbox{colbrew7}{\scriptsize$\mathbf{C\!\leadsto\!{O}}$} & .7 & -- \\ \colorbox{colbrew1}{\scriptsize$\mathbf{S\,R\,J\!\leadsto\!{A}}$} & 1. & 1. & \colorbox{colbrew3}{\scriptsize$\mathbf{S\!\leadsto\!{J}}$} & 1. & 1. & \colorbox{colbrew7}{\scriptsize$\mathbf{C\,P\,F\!\leadsto\!{O}}$} & .7 & -- \\ \colorbox{colbrew1}{\scriptsize$\mathbf{R\,J\!\leadsto\!{A}}$} & 1. & 1. & \colorbox{colbrew3}{\scriptsize$\mathbf{S\,R\!\leadsto\!{J}}$} & 1. & 1. & \colorbox{colbrew8}{\scriptsize$\mathbf{P\,F\!\leadsto\!{K}}$} & .7 & -- \\ \colorbox{colbrew1}{\scriptsize$\mathbf{P\,C\!\leadsto\!{A}}$} & 1. & 1. & \colorbox{colbrew3}{\scriptsize$\mathbf{S\,A\!\leadsto\!{J}}$} & 1. & 1. & \colorbox{colbrew8}{\scriptsize$\mathbf{C\,O\!\leadsto\!{K}}$} & .7 & -- \\ \colorbox{colbrew5}{\scriptsize$\mathbf{A\!\leadsto\!{P}}$} & .2 & .2 & \colorbox{colbrew4}{\scriptsize$\mathbf{A\!\leadsto\!{C}}$} & .2 & .2 & \colorbox{colbrew8}{\scriptsize$\mathbf{P\,F\,C\,O\!\leadsto\!{K}}$} & .7 & -- \\ \colorbox{colbrew5}{\scriptsize$\mathbf{F\,K\!\leadsto\!{P}}$} & .7 & -- & \colorbox{colbrew4}{\scriptsize$\mathbf{O\,K\!\leadsto\!{C}}$} & .7 & -- & & & \\ \bottomrule \end{tabularx} \caption{21 generative modes with corresponding weights in {\small AAAC} (w\textsubscript 1) and \emph{EntailmentBank} (w\textsubscript 2) training data.} \label{table:all_generative_modes} \end{table} Our models (base model T5-large with 770M parameters, and pretrained ArgumentAnalyst) are trained with batch-size 2 and learning rate 0.00001. For {\small AAAC01}, eval loss starts to increase at epoch 8; with \emph{EntailmentBank} data, eval loss increases from epoch 2 onwards. \section{Iterative Prediction with Generative Chains} \label{app:gen_chains} Generative chains are implemented with a dynamic dictionary (9 keys, corresp.\ to the dimensions of DeepA2 data), which is initialized with the source text, provides input for the generative modes, and is updated after each generative step with the mode's generated output. Output is generated with beam search decoding and beam width 2. \begin{table}[htb] \centering \begin{small} \begin{tabularx}{\linewidth}{@{}l@{\hspace{5pt}}p{0.75\linewidth}r@{\hspace{3pt}}r@{}} \toprule \# & {mode sequence} & l. & s. \\ \midrule \textbf{1} & \colorbox{colbrew1}{\scriptsize$\mathbf{S\!\leadsto\!{A}}$}\ \colorbox{colbrew2}{\scriptsize$\mathbf{S\!\leadsto\!{R}}$}\ \colorbox{colbrew3}{\scriptsize$\mathbf{S\!\leadsto\!{J}}$} & 3 & 0 \smallskip\\ 2 & \colorbox{colbrew3}{\scriptsize$\mathbf{S\!\leadsto\!{J}}$}\ \colorbox{colbrew2}{\scriptsize$\mathbf{S\!\leadsto\!{R}}$}\ \colorbox{colbrew1}{\scriptsize$\mathbf{S\,J\!\leadsto\!{A}}$} & 3 & 1 \smallskip\\ 3 & \colorbox{colbrew3}{\scriptsize$\mathbf{S\!\leadsto\!{J}}$}\ \colorbox{colbrew2}{\scriptsize$\mathbf{S\!\leadsto\!{R}}$}\ \colorbox{colbrew1}{\scriptsize$\mathbf{S\,R\!\leadsto\!{A}}$} & 3 & 1 \smallskip\\ 4 & \colorbox{colbrew3}{\scriptsize$\mathbf{S\!\leadsto\!{J}}$}\ \colorbox{colbrew2}{\scriptsize$\mathbf{S\!\leadsto\!{R}}$}\ \colorbox{colbrew1}{\scriptsize$\mathbf{R\,J\!\leadsto\!{A}}$} & 3 & 2 \smallskip\\ 5 & \colorbox{colbrew3}{\scriptsize$\mathbf{S\!\leadsto\!{J}}$}\ \colorbox{colbrew2}{\scriptsize$\mathbf{S\,J\!\leadsto\!{R}}$}\ \colorbox{colbrew1}{\scriptsize$\mathbf{R\,J\!\leadsto\!{A}}$} & 3 & 3 \smallskip\\ 6 & \colorbox{colbrew3}{\scriptsize$\mathbf{S\!\leadsto\!{J}}$}\ \colorbox{colbrew2}{\scriptsize$\mathbf{S\,J\!\leadsto\!{R}}$}\ \colorbox{colbrew1}{\scriptsize$\mathbf{S\,R\,J\!\leadsto\!{A}}$} & 3 & 3 \smallskip\\ 7 & \colorbox{colbrew2}{\scriptsize$\mathbf{S\!\leadsto\!{R}}$}\ \colorbox{colbrew3}{\scriptsize$\mathbf{S\,R\!\leadsto\!{J}}$}\ \colorbox{colbrew1}{\scriptsize$\mathbf{R\,J\!\leadsto\!{A}}$} & 3 & 3 \smallskip\\ 8 & \colorbox{colbrew2}{\scriptsize$\mathbf{S\!\leadsto\!{R}}$}\ \colorbox{colbrew3}{\scriptsize$\mathbf{S\,R\!\leadsto\!{J}}$}\ \colorbox{colbrew1}{\scriptsize$\mathbf{S\,R\,J\!\leadsto\!{A}}$} & 3 & 3 \smallskip\\ \textbf{9} & \colorbox{colbrew1}{\scriptsize$\mathbf{S\!\leadsto\!{A}}$}\ \colorbox{colbrew2}{\scriptsize$\mathbf{S\,A\!\leadsto\!{R}}$}\ \colorbox{colbrew3}{\scriptsize$\mathbf{S\,A\!\leadsto\!{J}}$}\ \colorbox{colbrew1}{\scriptsize$\mathbf{R\,J\!\leadsto\!{A}}$} & 4 & 4 \smallskip\\ 10 & \colorbox{colbrew1}{\scriptsize$\mathbf{S\!\leadsto\!{A}}$}\ \colorbox{colbrew2}{\scriptsize$\mathbf{S\,A\!\leadsto\!{R}}$}\ \colorbox{colbrew3}{\scriptsize$\mathbf{S\,A\!\leadsto\!{J}}$}\ \colorbox{colbrew1}{\scriptsize$\mathbf{S\,R\,J\!\leadsto\!{A}}$} & 4 & 4 \smallskip\\ 11 & \parbox[t]{\linewidth}{ \raggedright \colorbox{colbrew1}{\scriptsize$\mathbf{S\!\leadsto\!{A}}$}\ \colorbox{colbrew2}{\scriptsize$\mathbf{S\,A\!\leadsto\!{R}}$}\ \colorbox{colbrew3}{\scriptsize$\mathbf{S\,A\!\leadsto\!{J}}$}\ \colorbox{colbrew1}{\scriptsize$\mathbf{S\,R\,J\!\leadsto\!{A}}$}\ \colorbox{colbrew2}{\scriptsize$\mathbf{S\,A\!\leadsto\!{R}}$}\ \colorbox{colbrew3}{\scriptsize$\mathbf{S\,A\!\leadsto\!{J}}$}\ \colorbox{colbrew1}{\scriptsize$\mathbf{S\,R\,J\!\leadsto\!{A}}$} } & 7 & 8 \smallskip\\ 12 &\parbox[t]{\linewidth}{ \raggedright \colorbox{colbrew1}{\scriptsize$\mathbf{S\!\leadsto\!{A}}$}\ \colorbox{colbrew5}{\scriptsize$\mathbf{A\!\leadsto\!{P}}$}\ \colorbox{colbrew4}{\scriptsize$\mathbf{A\!\leadsto\!{C}}$}\ \colorbox{colbrew6}{\scriptsize$\mathbf{P\!\leadsto\!{F}}$}\ \colorbox{colbrew8}{\scriptsize$\mathbf{P\,F\!\leadsto\!{K}}$}\ \colorbox{colbrew5}{\scriptsize$\mathbf{F\,K\!\leadsto\!{P}}$}\ \colorbox{colbrew1}{\scriptsize$\mathbf{P\,C\!\leadsto\!{A}}$}\ \colorbox{colbrew2}{\scriptsize$\mathbf{S\,A\!\leadsto\!{R}}$}\ \colorbox{colbrew3}{\scriptsize$\mathbf{S\,A\!\leadsto\!{J}}$}} & 9 & 11 \smallskip\\ \textbf{13} &\parbox[t]{\linewidth}{ \raggedright \colorbox{colbrew1}{\scriptsize$\mathbf{S\!\leadsto\!{A}}$}\ \colorbox{colbrew5}{\scriptsize$\mathbf{A\!\leadsto\!{P}}$}\ \colorbox{colbrew4}{\scriptsize$\mathbf{A\!\leadsto\!{C}}$}\ \colorbox{colbrew7}{\scriptsize$\mathbf{C\!\leadsto\!{O}}$}\ \colorbox{colbrew8}{\scriptsize$\mathbf{C\,O\!\leadsto\!{K}}$}\ \colorbox{colbrew4}{\scriptsize$\mathbf{O\,K\!\leadsto\!{C}}$}\ \colorbox{colbrew1}{\scriptsize$\mathbf{P\,C\!\leadsto\!{A}}$}\ \colorbox{colbrew2}{\scriptsize$\mathbf{S\,A\!\leadsto\!{R}}$}\ \colorbox{colbrew3}{\scriptsize$\mathbf{S\,A\!\leadsto\!{J}}$}} & 9 & 11 \smallskip\\ 14 &\parbox[t]{\linewidth}{ \raggedright \colorbox{colbrew1}{\scriptsize$\mathbf{S\!\leadsto\!{A}}$}\ \colorbox{colbrew5}{\scriptsize$\mathbf{A\!\leadsto\!{P}}$}\ \colorbox{colbrew4}{\scriptsize$\mathbf{A\!\leadsto\!{C}}$}\ \colorbox{colbrew7}{\scriptsize$\mathbf{C\!\leadsto\!{O}}$}\ \colorbox{colbrew8}{\scriptsize$\mathbf{C\,O\!\leadsto\!{K}}$}\ \colorbox{colbrew4}{\scriptsize$\mathbf{O\,K\!\leadsto\!{C}}$}\ \colorbox{colbrew1}{\scriptsize$\mathbf{P\,C\!\leadsto\!{A}}$}\ \colorbox{colbrew5}{\scriptsize$\mathbf{A\!\leadsto\!{P}}$}\ \colorbox{colbrew4}{\scriptsize$\mathbf{A\!\leadsto\!{C}}$}\ \colorbox{colbrew6}{\scriptsize$\mathbf{P\!\leadsto\!{F}}$}\ \colorbox{colbrew8}{\scriptsize$\mathbf{P\,F\!\leadsto\!{K}}$}\ \colorbox{colbrew5}{\scriptsize$\mathbf{F\,K\!\leadsto\!{P}}$}\ \colorbox{colbrew1}{\scriptsize$\mathbf{P\,C\!\leadsto\!{A}}$}\ \colorbox{colbrew2}{\scriptsize$\mathbf{S\,A\!\leadsto\!{R}}$}\ \colorbox{colbrew3}{\scriptsize$\mathbf{S\,A\!\leadsto\!{J}}$} }\vspace{1pt} & 15 & 20 \smallskip\\ 15 &\parbox[t]{\linewidth}{ \raggedright \colorbox{colbrew1}{\scriptsize$\mathbf{S\!\leadsto\!{A}}$}\ \colorbox{colbrew5}{\scriptsize$\mathbf{A\!\leadsto\!{P}}$}\ \colorbox{colbrew4}{\scriptsize$\mathbf{A\!\leadsto\!{C}}$}\ \colorbox{colbrew6}{\scriptsize$\mathbf{P\!\leadsto\!{F}}$}\ \colorbox{colbrew7}{\scriptsize$\mathbf{C\,P\,F\!\leadsto\!{O}}$}\ \colorbox{colbrew8}{\scriptsize$\mathbf{P\,F\,C\,O\!\leadsto\!{K}}$}\ \colorbox{colbrew5}{\scriptsize$\mathbf{F\,K\!\leadsto\!{P}}$}\ \colorbox{colbrew4}{\scriptsize$\mathbf{O\,K\!\leadsto\!{C}}$}\ \colorbox{colbrew1}{\scriptsize$\mathbf{P\,C\!\leadsto\!{A}}$}\ \colorbox{colbrew2}{\scriptsize$\mathbf{S\,A\!\leadsto\!{R}}$}\ \colorbox{colbrew3}{\scriptsize$\mathbf{S\,A\!\leadsto\!{J}}$} } & 11 & 18 \smallskip\\ 16 & \parbox[t]{\linewidth}{ \raggedright \colorbox{colbrew1}{\scriptsize$\mathbf{S\!\leadsto\!{A}}$}\ \colorbox{colbrew5}{\scriptsize$\mathbf{A\!\leadsto\!{P}}$}\ \colorbox{colbrew4}{\scriptsize$\mathbf{A\!\leadsto\!{C}}$}\ \colorbox{colbrew6}{\scriptsize$\mathbf{P\!\leadsto\!{F}}$}\ \colorbox{colbrew7}{\scriptsize$\mathbf{C\,P\,F\!\leadsto\!{O}}$}\ \colorbox{colbrew6}{\scriptsize$\mathbf{P\,C\,O\!\leadsto\!{F}}$}\ \colorbox{colbrew8}{\scriptsize$\mathbf{P\,F\,C\,O\!\leadsto\!{K}}$}\ \colorbox{colbrew5}{\scriptsize$\mathbf{F\,K\!\leadsto\!{P}}$}\ \colorbox{colbrew4}{\scriptsize$\mathbf{O\,K\!\leadsto\!{C}}$}\ \colorbox{colbrew1}{\scriptsize$\mathbf{P\,C\!\leadsto\!{A}}$}\ \colorbox{colbrew2}{\scriptsize$\mathbf{S\,A\!\leadsto\!{R}}$}\ \colorbox{colbrew3}{\scriptsize$\mathbf{S\,A\!\leadsto\!{J}}$} } & 12 & 21 \\ \bottomrule \end{tabularx} \end{small} \caption{16 generative chains (without final formalization sub-sequences) evaluated in this study. The illustrative chains highlighted in the main paper are \#1 (straight), \#9 (hermeneutic cycle), and \#13 (logical streamlining).} \label{table:all_generative_chains_app} \end{table} Table~\ref{table:all_generative_chains_app} displays all generative chains we resort to in this study, all of which are used in the \textit{first experiment}. The \textit{second experiment} makes use of chains 1--11. The \textit{third experiment} deploys chains 1--13. \section{Introduction} Argumentative text analysis is an interpretation method for clarifying arguments \citep{Fisher:2004cq}. Being studied in argumentation theory, logic, or epistemology, it is widely taught and applied as a key critical thinking skill in, e.g., law \citep{Alexy:1989rh}, the humanities \citep{Bruce:2011iy}, social sciences \citep{Fairclough2012}, policy advice \citep{HanssonHirschHadornRaUBook2016}, or public debate \citep{Beck_Neupane_Carroll_2019}. This paper presents a computational approach for \emph{deep argument analysis}, i.e., for \textbf{reconstructing natural-language arguments} from a given text, as in the following example \citep[adapted from][]{sep-stem-cells}: \noindent \begin{tabular}{@{}c@{}c@{}c@{}} \small{\textbf{source text}}&$\leadsto$&\small{\textbf{reconstructed argument}}\\ \begin{minipage}{.48\linewidth} \fontsize{9}{10}\selectfont It is unethical to destroy human embryos. The most basic argument supporting this claim just stresses that it is wrong to intentionally kill innocent human beings. \end{minipage}&& \begin{minipage}{.47\linewidth} \fontsize{9}{10}\selectfont (P1) It is impermissible to kill innocent human beings. (P2) The human embryo is an innocent human being. (C) \textsc{Thus}: It is impermissible to kill the human embryo. \end{minipage} \end{tabular}\medskip The literature on argument reconstruction \citep[cf.][]{Feldman1998,Scholz2000,Lau:2011st,BowllKemp2014,Brun2014-BRURAF,BrunBetzRaU2016} characterizes deep argument analysis as: \begin{itemize} \setlength{\itemsep}{0mm}\setlength{\parskip}{0mm} \item a complex task involving a variety of \textbf{sub-tasks}, such as identifying reasons and conclusions in a text, formalizing sentences, checking validity of an inference, logical streamlining, or explicating implicit premises. \item a non-conservative, \textbf{creative task} that goes beyond mere text annotation and essentially generates a new, more transparent text. \item an \textbf{iterative process} through which reconstructions are built and revised step-by-step, and the solution space is gradually explored. \item a hermeneutical task, guided by the \textbf{principle of charity}, which urges one to come up with an interpretation (reconstruction) as strong and plausible as possible. \item assuming a \textbf{normative background theory} about what constitutes a strong and plausible argument in the first place. \item being affected by \textbf{severe underdetermination}, both in terms of the process and the final outcome; in particular, there typically exist rival, yet equally legitimate reconstructions of one and the same text. \end{itemize} Given these special characteristics, \emph{deep argument analysis} poses many challenges for machine models of natural language understanding. In this paper, we introduce a novel modular modeling approach for analysing complex argumentation that builds on recent pre-trained text2text transformers \cite{raffel2020exploring}. Our approach -- DeepA2 (illustrated in Figure~\ref{fig:basic_design}) -- works by systematically decomposing a complex reconstruction problem to smaller text2text sub-tasks (see Section~\ref{sec:framework}), which allows for emulating the types of interpretation strategies and heuristics studied in argument theory. Referring to the different components of a comprehensive argumentative analysis, we may also define tailor-made metrics for assessing argument reconstructions. To demonstrate the benefits of our approach, we construct a new argumentation dataset ({\small AAAC}) that exhibits several complex \emph{interpretive dimensions}, show how to map other existing datasets into our framework (Section~\ref{sec:datasets}), and train and evaluate our main model, referred to as \textbf{ArgumentAnalyst}, within DeepA2 (Section~\ref{sec:experiments}). \begin{figure*} \begin{center} \input{figs/basic_design_tacl} \end{center} \caption{Example text-to-text tasks for deep argument analysis, defined by DeepA2.} \label{fig:basic_design} \end{figure*} Our empirical results show: 1. ArgumentAnalyst generates -- out-of-domain -- semantically meaningful argument reconstructions, 70\% of which are logically valid. By pooling alternative reconstructions, virtually every source text in the synthetic dataset can be reconstructed as a valid argument. 2. Modular generation chains which emulate iterative reconstruction strategies are highly successful: they yield, in particular, a more coherent interpretation of an argumentative text, exploit the text more thoroughly, and generally outperform one-step generation as soon as problems become difficult. 3. ArgumentAnalyst outperforms \emph{EntailmentWriter} \citep{dalvi2021explaining} on difficult \emph{EntailmentBank} problems with respect to telling apart relevant premises from distractors. 4. ArgumentAnalyst generates reliable higher-order evidence \citep{christensen2010higher} which can be used for diagnosing logical fallacies -- despite the fact that ArgumentAnalyst is maximally charitable and is trained to reconstruct any input whatsoever as a logically valid argument, even if the input argument, taken at face value, \emph{is} painstakingly fallacious. In concluding this paper, we sum-up and interpret these findings as general vindication of DeepA2's modular, multi-angular design (Section~\ref{sec:conclusion}). \section{Related Work} Taking \textbf{transformers as soft reasoners}, recent work, pioneered by \citet{Clark2020_TransSoftReas}, has shown that pre-trained language models (PTLMs) possess basic deductive and abductive reasoning capabilities on diverse domains \citep{banerjee2020self,betz2020critical,Bostrom2021FlexibleOF}, but are equally prone to fallacies and biases \citep{kassner2020negated,talmor2020olmpics}. Besides drawing the correct conclusion, transformers are able to generate correct reasoning chains that justify an answer, which in turn further increases answer accuracy \citep{saha2020prover,tafjord2020proofwriter,gontier2020measuring,Saha2021multiPRoverGM,dalvi2021explaining}. \textbf{Neural semantic parsing} uses sequence models to \emph{formalize} natural language sentences \citep{Kamath2019ASO}. \citet{Shin2021ConstrainedLM} show that PTLMs are zero-shot parsers, and that intermediate steps which rephrase and streamline the original input before parsing it to a formal language improve accuracy. \textbf{Argument mining} is an active research field that studies computational methods for retrieving argumentative components from a text corpus \citep{Wachsmuth2017BuildingAA,Moens:2018zt,Potthast2019ArgumentSA,LawrenceReed2020}. Recently, work in this field has started to use PTLMs: \citet{EinDor2020CorpusWA} and \citet{Gretz2020ALD} succeed in retrieving relevant pro- or con-arguments for a given topic from a large corpus with a fine-tuned BERT model \citep{Devlin2019BERTPO}. Using BERT, \citet{BarHaim2020FromAT} map argumentative texts to key points that succinctly summarize the argument's gist. \citet{Akiki2020ExploringAR} explore abstractive argument retrieval by means of text generation with GPT2 \citep{Radford2019}. Similarly, \citet{Syed2021GeneratingIC} deploy BART \citep{lewis2019bart} to generate conclusions of argumentative texts on a challenging corpus compiled from Reddit and various online debate corpora. \citet{Rodrigues2020ReproductionAR}, revisiting the argument comprehension task \citep{HabernalEtAl2014,Habernal2018TheAR}, demonstrate that identifying implicit premises -- and deep argument analysis \emph{a fortiori} -- remains a hard, unsolved task. Recently, \citet{Chakrabarty2021ImplicitPG} have shown that augmenting training data with discourse-aware commonsense knowledge improves the plausibility of automatically identified implicit premises. Such a knowledge-driven perspective is orthogonal to, and may eventually complement the logical approach adopted in this paper. \section{Framework} \label{sec:framework} \subsection{Problem Definition} \label{subsec:problem} Deep argument analysis of a given text seeks to answer the following \textbf{central question}: Can we make sense of the text as a presentation of a rational argument? And if so, what exactly is the argument; and how precisely is it related to the text? In carrying out a deep argument analysis, one explicates, rephrases and rebuilds -- even repairs -- the text's argument in one's own words. That is why deep argument analysis is also referred to as \emph{rational reconstruction} \citep[cf.][]{sep-carnap-suppD}. The reconstructed argument forms, together with details about its logical properties and about its relation to the source text, a \emph{comprehensive argumentative analysis} of a text. The latter can be seen as an interpretative hypothesis that is abductively inferred from a source text by means of an inference to the best explanation. Here is another example that illustrates how far a reconstruction may deviate from the original text that presents the argument \citep[adapted from][]{BrunBetzRaU2016}: \noindent \begin{tabular}{@{}c@{}c@{}c@{}} \small{\textbf{source text}}&$\leadsto$&\small{\textbf{reconstructed argument}}\\ \begin{minipage}{.48\linewidth} \fontsize{9}{10}\selectfont So, the researcher's central dilemma exists in an especially acute form in psychology: either the animal is not like us, in which case there is no reason for performing the experiment; or else the animal is like us, in which case we ought not to perform on the animal an experiment that would be considered outrageous if performed on one of us. \end{minipage}&& \begin{minipage}{.47\linewidth} \fontsize{9}{10}\selectfont (P1) If the animal is not like us, it is wrong to perform the experiment. (P2) If the animal is like us, it is wrong to perform the experiment. (C) \textsc{Thus} (with \emph{classical di\-lemma}): It is wrong to perform the experiment. \end{minipage} \end{tabular}\medskip A compelling argumentative analysis yields (i) a rational argument that is (ii) closely related to the source text. Deep argument analysis is, accordingly, guided by a \textbf{dual goal} \citep[cf.][]{BrunBetzRaU2016}. An argument reconstruction should both be \begin{itemize} \setlength{\itemsep}{0mm}\setlength{\parskip}{0mm} \item[(i)] \textbf{systematically correct}, i.e., the reconstructed argument itself is, e.g., transparent, deductively valid, non-circular, or doesn't contain irrelevant premises; and \item[(ii)] \textbf{exegetically adequate}, i.e., the reconstructed argument accounts for the original text, because, e.g., its premises merely reformulate parts of the text, or because its overall inferential structure can be traced within the source text. \end{itemize} The fact that there typically exists -- regarding a specific text -- a trade-off between these two goals is one major reason for the underdetermination of deep argument analysis and the plurality of legitimate reconstructions of a given text \citep[cf.][]{BrunBetzRaU2016}. Against this background, we may finally define the problem of \begin{description} \item[Deep artificial argument analysis:] Describe, analyse and implement an effective computational system for deep argument analysis! \end{description} \subsection{Multi-angular Data} \label{subsec:multi-angle} The DeepA2 framework is built upon a \emph{multi-angular} data structure \citep{TafjordClark2021GPQA} whose dimensions represent the essential components of a comprehensive argumentative analysis (see Section~\ref{subsec:problem}). Structured argumentative data is rendered as plain text \citep[cf.][]{Voigt2014}. The different data dimensions, which are related as shown in Figure~\ref{fig:angles01}, are (with an illustrating example): \begin{small} \begin{description} \setlength{\itemsep}{0mm}\setlength{\parskip}{0mm} \item[argument source text (\small S)] \ \newline It is unethical to destroy human embryos. The basic argument supporting this claim just stresses that it is wrong to intentionally kill innocent human beings. \item[verbatim reason statements in source text (\small R)]\ \newline it is wrong to intentionally kill innocent human beings (ref: (1)) \item[verbatim conjectures in the source text (\small J)]\ \newline It is unethical to destroy human embryos (ref: (3)) \item[argument reconstruction (\small A)] {\ \newline (1) It is impermissible to kill innocent human beings.\newline (2) The human embryo is an innocent human being.\newline -- with hypothetical syllogism from (1) (2) --\newline (3) It is impermissible to kill the human embryo.} \item[premises of the reconstructed argument (\small P)]\ \newline It is impermissible to kill innocent human beings $|$ The human embryo is an innocent human being \item[final conclusion of reconstr.\ argument (\small C)]\ \newline It is impermissible to kill the human embryo \item[formalizations of premises (\small F)]\ \newline (x): F x $\rightarrow$ G x $|$ (x): H x $\rightarrow$ F x \item[formalization of conclusion (\small O)]\ \newline (x): H x $\rightarrow$ G x \item[keys for the formalizations' constants (\small K)]\ \newline F: innocent human being $|$ G: must not be killed $|$ H: human embryo \end{description} \end{small} Each record in a DeepA2 dataset contains a source text plus a legitimate comprehensive argumentative analysis, which is, given underdetermination, not necessarily the only compelling reconstruction of the text; moreover, a dataset \emph{may} contain different records with one and the same source text analysed in several ways. So, for example, an alternative, equally legitimate argument reconstruction of the above source text (\textbf{\small S}) may read: \begin{small} \begin{description} \setlength{\itemsep}{0mm}\setlength{\parskip}{0mm} \item[argument reconstruction (\small A)] {\ \newline (1) If it is wrong to kill innocent human beings, then it is wrong to kill a human embryo.\newline (2) It is wrong to kill innocent human beings.\newline -- with modus ponens from (1) (2) --\newline (3) It is wrong to kill a human embryo.} \end{description} \end{small} Beyond this structural and functional characterization, DeepA2 is agnostic about the nature and origin of the argumentative data. Synthetically generated, automatically retrieved, manually created datasets as well as translations of other databases are all compatible with the framework and can be used side by side. \begin{figure}[tbp] \centering \input{figs/tikz_angles01} \vspace{-25pt} \caption{Relationships between dimensions of the multi-angular argumentative data.} \label{fig:angles01} \end{figure} \subsection{Generative Modes and Chains} \label{subsec:generative_modes} Given DeepA2's multi-dimensional data structure described in the previous section, a \textbf{generative mode} maps data from some input dimensions to a target dimension. For example, the mode \colorbox{colbrew1}{\scriptsize$\mathbf{S\!\leadsto\!{A}}$}\ takes a source text (\textbf{\small S}) as input and outputs an argument reconstruction (\textbf{\small A}), the mode \colorbox{colbrew1}{\scriptsize$\mathbf{R\,J\!\leadsto\!{A}}$}\ reconstructs the argument (\textbf{\small A}) given the verbatim reasons (\textbf{\small R}) and conjectures (\textbf{\small J}). All in all, we define and investigate 21 different generative modes (see Appendix~\ref{app:training_setup}). Every mode represents a task on which a text-to-text model can be trained. By taking some mode's output as another mode's input, modes can be concatenated into \textbf{generative chains}. For example, the output of modes \colorbox{colbrew2}{\scriptsize$\mathbf{S\!\leadsto\!{R}}$}\ and \colorbox{colbrew3}{\scriptsize$\mathbf{S\!\leadsto\!{J}}$}\ (reasons and conjectures from source) can be fed into mode \colorbox{colbrew1}{\scriptsize$\mathbf{R\,J\!\leadsto\!{A}}$}\ to reconstruct an argument. Such generative chains allow us to emulate different strategies (heuristics) for analysing a given argumentative text (see Appendix~\ref{app:gen_chains} for technical details). Three generative chains which model distinct interpretative strategies, taking a source text (\textbf{\small S}) as sole input, are: \begin{description} \setlength{\itemsep}{0mm}\setlength{\parskip}{0mm} \item[straight]\ \newline \colorbox{colbrew1}{\scriptsize$\mathbf{S\!\leadsto\!{A}}$}\ \colorbox{colbrew2}{\scriptsize$\mathbf{S\!\leadsto\!{R}}$}\ \colorbox{colbrew3}{\scriptsize$\mathbf{S\!\leadsto\!{J}}$} \raggedright \item[hermeneutic cycle]\ \newline \colorbox{colbrew1}{\scriptsize$\mathbf{S\!\leadsto\!{A}}$}\ \colorbox{colbrew2}{\scriptsize$\mathbf{S\,A\!\leadsto\!{R}}$}\ \colorbox{colbrew3}{\scriptsize$\mathbf{S\,A\!\leadsto\!{J}}$}\ \colorbox{colbrew1}{\scriptsize$\mathbf{R\,J\!\leadsto\!{A}}$} \raggedright \item[logical streamlining]\ \newline \colorbox{colbrew1}{\scriptsize$\mathbf{S\!\leadsto\!{A}}$}\ \colorbox{colbrew5}{\scriptsize$\mathbf{A\!\leadsto\!{P}}$}\ \colorbox{colbrew4}{\scriptsize$\mathbf{A\!\leadsto\!{C}}$}\ \colorbox{colbrew7}{\scriptsize$\mathbf{C\!\leadsto\!{O}}$}\ \colorbox{colbrew8}{\scriptsize$\mathbf{C\,O\!\leadsto\!{K}}$}\ \colorbox{colbrew4}{\scriptsize$\mathbf{O\,K\!\leadsto\!{C}}$}\ \colorbox{colbrew1}{\scriptsize$\mathbf{P\,C\!\leadsto\!{A}}$}\ \colorbox{colbrew2}{\scriptsize$\mathbf{S\,A\!\leadsto\!{R}}$}\ \colorbox{colbrew3}{\scriptsize$\mathbf{S\,A\!\leadsto\!{J}}$} \raggedright \end{description} While the chain \emph{straight}, where no output ever serves as input to another mode, represents a simple baseline, \emph{hermeneutic cycle} and \emph{logical streamlining} mimic prominent, equally-named methods in argument analysis \citep[cf.][]{BowllKemp2014,BrunBetzRaU2016}. One goes through a hermeneutic cycle, generally speaking, if one revisits a text in view of its previous interpretation, as, for example, in steps \colorbox{colbrew2}{\scriptsize$\mathbf{S\,A\!\leadsto\!{R}}$}\ \colorbox{colbrew3}{\scriptsize$\mathbf{S\,A\!\leadsto\!{J}}$}, where the source text (\textbf{\small S}) is re-interpreted (identifying reason statements and conjectures) given the previously reconstructed argument (\textbf{\small A}), so as to subsequently re-reconstruct the argument itself (step \colorbox{colbrew1}{\scriptsize$\mathbf{R\,J\!\leadsto\!{A}}$}). To logically streamline a reconstruction means to rephrase its conclusion or premises in order to make their logico-semantic structure more transparent. Such semantic clarification can be emulated by (i) formalizing a statement (e.g., \colorbox{colbrew4}{\scriptsize$\mathbf{A\!\leadsto\!{C}}$}\ \colorbox{colbrew7}{\scriptsize$\mathbf{C\!\leadsto\!{O}}$}\ \colorbox{colbrew8}{\scriptsize$\mathbf{C\,O\!\leadsto\!{K}}$}) and (ii) using the keys (\textbf{\small K}) to retrieve the original statement from the generated logical formulas (such as in \colorbox{colbrew4}{\scriptsize$\mathbf{O\,K\!\leadsto\!{C}}$}), from which the argument can be re-built (step \colorbox{colbrew1}{\scriptsize$\mathbf{P\,C\!\leadsto\!{A}}$}). For evaluation, we append to each generative chain the following sub-chain that formalizes the reconstructed argument: \begin{description} \item[formalization]\ \newline \colorbox{colbrew5}{\scriptsize$\mathbf{A\!\leadsto\!{P}}$}\ \colorbox{colbrew4}{\scriptsize$\mathbf{A\!\leadsto\!{C}}$}\ \colorbox{colbrew6}{\scriptsize$\mathbf{P\!\leadsto\!{F}}$}\ \colorbox{colbrew7}{\scriptsize$\mathbf{C\,P\,F\!\leadsto\!{O}}$}\ \colorbox{colbrew8}{\scriptsize$\mathbf{P\,F\,C\,O\!\leadsto\!{K}}$} \raggedright\vspace{1mm} \end{description} A generative chain can be construed as hypergraph on the dimensions of DeepA2's multi-angular datasets, with each of its modes representing a directed hyper-edge. Summing up the number of input dimensions (except \textbf{\small S}) over all modes yields a simple graph centrality measure, which gauges a chain's sophistication. Thus, \emph{straight}, \emph{hermeneutic cycle} and \emph{logical streamlining} display a sophistication of 0, 4, and 11, respectively. \subsection{Metrics} \label{subsec:metrics} As discussed in Section~\ref{subsec:problem}, an argument reconstruction should both be sound and make sense of the text to-be-interpreted. In line with the dual goal of argument analysis, we propose metrics both for the systematic correctness and for the exegetic adequacy of a given analysis. The following metrics measure the degree to which a given generated argument is \emph{systematically correct} \begin{description} \setlength{\itemsep}{0mm}\setlength{\parskip}{0mm} \item[\small SYS-PP] 1 if the argument is not a \emph{petitio principii} (i.e., if no premise is identical with its final conclusion), 0 otherwise; \item[\small SYS-RP] 1 if the argument has no \emph{redundant premises} (i.e., if no premise occurs more than once), 0 otherwise; \item[\small SYS-RC] 1 if the argument has no \emph{redundant conclusions} (i.e., if no conclusion -- intermediary or final -- occurs more than once), 0 otherwise; \item[\small SYS-US] 1 if all statements in the argument other than the final conclusion are explicitly \emph{used in an inference}, 0 otherwise; \item[\small SYS-SCH] ratio of sub-arguments which correctly instantiate the explicitly stated \emph{inference scheme} (e.g., hypothetical syllogism); \item[\small SYS-VAL] 1 if the argument is \emph{globally valid} (i.e., if the final conclusion deductively follows from the premises), 0 otherwise; \end{description} All six systematic metrics can be computed automatically ({\small SYS-SCH} tries to parse the argument based on the inference schemes and templates used to construct the synthetic dataset in the first place; {\small SYS-VAL} passes the model-generated formalizations of premises and conclusion to a symbolic theorem prover \citep{de2008z3}; and the remaining metrics check for string identity). Whereas systematic metrics apply primarily to the generated argument (\textbf{\small A}), a reconstruction's interpretative adequacy will also depend on how reasons (\textbf{\small R}) and conjectures (\textbf{\small J}) coherently link the argument's components to the original text. As a first set of \emph{exegetic metrics}, we thus propose \begin{description} \setlength{\itemsep}{0mm}\setlength{\parskip}{0mm} \item[\small EXE-MEQ] 1 if the reasons and conjectures are \emph{mutually exclusive verbatim quotes} from the source text, 0 otherwise; \item[\small EXE-RSS] semantic similiarity \citep[BLEURT, see][]{sellam2020bleurt} of each reason statement and its counterpart premise in the reconstructed argument (if such exists, -1 otherwise); \item[\small EXE-JSS] semantic similiarity (see {\small EXE-RSS}) of each conjecture statement and its counterpart in the reconstructed argument (if such exists, -1 otherwise). \end{description} Each source text presents (more or less faithfully) an underlying target argument, which in turn marks some of the text's statements as `target' reasons, others as `target' conjectures. The following two metrics assess the degree to which a comprehensive argumentative analysis correctly predicts (\textbf{\small R}, \textbf{\small J}) those target reasons and conjectures. \begin{description} \setlength{\itemsep}{0mm}\setlength{\parskip}{0mm} \item[\small EXE-PPR] predictive performance (F1-score) for identifying (target) reason statements in the source text; \item[\small EXE-PPJ] predictive performance (F1-score) for identifying (target) conjecture statements in the source text. \end{description} An argument's final conclusion may be implicit or explicit in a given text. The ability to fully exploit a text can be measured by verifying whether the reconstructed argument's final conclusion is implicit (= prediction) if and only if the target argument's one is. \begin{description} \setlength{\itemsep}{0mm}\setlength{\parskip}{0mm} \item[\small EXE-TE] text exploitation, as measured by ability (F1-score) to reconstruct arguments with explicit final conclusions (prediction) if and only if the target final conclusions are explicit. \end{description} \subsection{Models} \label{subsec:models} Any text-to-text language model is compatible with the proposed DeepA2 framework. We refer to models used within the framework as \textbf{ArgumentAnalyst}. In this study, we train and evaluate the transformer model T5 \citep{raffel2020exploring} with 770M parameters as implemented by \cite{wolf-etal-2020-transformers}. \subsection{Limitations} In the DeepA2 framework, arguments are reconstructed from relatively short and isolated texts, disregarding both the broader context of the argument and domain-specific background knowledge. This limits the framework, as presented here, in important ways: Implicit premises that are explicated in an argument reconstruction can neither be checked for plausibility nor for agreement with the author's broader convictions. In addition, the framework cannot assess an argument's dialectic function in a wider debate. It seems worthwhile to explore according extensions of the framework in future research. \section{Datasets} \label{sec:datasets} For the experiments reported below, we synthetically create two artificial argument analysis corpora that comply with the DeepA2 framework (see also Appendix~\ref{app:aaac}): \textbf{\small AAAC01} and \textbf{\small AAAC02}. In addition, we translate the synthetic \emph{RuleTaker} \citep{Clark2020_TransSoftReas} and the manually compiled \emph{EntailmentBank} \citep{dalvi2021explaining} datasets into our framework. In argument analysis, one proceeds \emph{from} a source text \emph{to} its reconstruction. Creating the synthetic corpora, we reverse-engineer this process: \emph{Step 1.} We sample, first of all, a possibly complex argument (\textbf{\small A}) from a set of valid inference schemes. In doing so, we use a multi-step templating strategy \citep[inspired by][]{betz2020critical} to translate symbolic forms into natural language schemes (which were generated by local domain experts) and to substitute natural language terms for placeholders. Premises (\textbf{\small P}), conclusion (\textbf{\small C}) and their formalization (\textbf{\small F, O, K}) are side-products of such a construction of an argument. \emph{Step 2.} Given the fully explicit argument (\textbf{\small A}), we compose a text (\textbf{\small S}) that presents the argument in a more or less transparent and faithful way. Such text creation involves: rendering the argument tree as a linear story, leaving out premises or conclusions (implicit premises and conclusions), inserting irrelevant material (distractors), using templates that obfuscate the logical form of a sentence, limiting the use of premise and conclusion indicators (such as ``therefore''), applying rule-based and automatic paraphrasing. In composing the argumentative text (\textbf{\small S}), we may record its reasons (\textbf{\small R}) and conjectures (\textbf{\small J}). Given the synthetic and controlled nature of our dataset, which involved eliciting rule templates from a group of local domain experts, all data is assumed to be correct by \emph{construction}. As an additional check of correctness on the logic of our examples, we ran a symbolic theorem prover \citep{de2008z3} over the argument formalizations to verify their validity. To ensure the fluency of the underlying language templates, all templates were hand verified by the authors. Our two datasets {\small AAAC01} and {\small AAAC02} differ in the following ways: \begin{enumerate} \setlength{\itemsep}{0mm}\setlength{\parskip}{0mm} \item predicates and names are sampled from different, disjunct domains (texts are about, e.g., allergies and family relations versus, e.g., badminton and cooking) to test a model's robustness to lexical diversity \citep{RozenShwartzEtAl2019}; \item similarly, {\small AAAC01} applies automatic paraphrasing \cite{Vamsi2021} to the final source text whereas {\small AAAC02} doesn't; \item {\small AAAC02} allows for imprecise renditions of logical formulas, while {\small AAAC01} sticks to plain formulations to test robustness to variations in description of rules. \end{enumerate} Each dataset contains diverse texts and arguments. Broadly speaking, data records may differ in terms of properties of the argument (step 1 above) and properties of the argument's presentation (step 2). Along these two dimensions, we define five homogeneous subsets of the data: \begin{description} \setlength{\itemsep}{0mm}\setlength{\parskip}{0mm} \item[simple inference:] arguments with a single inference step that neither involves negation nor compositional predicates; \item[complex inference:] arguments with four inference steps that heavily rely on syntactically intricate schemes (e.g., transposition, or de Morgan); \item[plain presentation:] all premises and conclusions are explicit in the source text which, in addition, contains no distractors; \item[mutilated presentation:] at least two premises and one conclusion are implicit, while the text contains two distractors and explicitly states the final conclusion; \item[C\&M:] the argument's inference is complex, plus the text contains at least two distractors. \end{description} The \emph{RuleTaker} and \emph{EntailmentBank} datasets contain multi-hop inference trees (\textbf{\small A}). To import these into the DeepA2 framework, we create source texts (\textbf{\small S}) for the given arguments by means of simple templates (such as ``\{\emph{theory}\} All this entails: \{\emph{hypothesis}\}'') and record reasons (\textbf{\small R}) and conjectures (\textbf{\small J}) on the fly. Unlike {\small AAAC} and \emph{EntailmentBank}, \emph{RuleTaker} \citep[as updated in][]{tafjord2020proofwriter} contains an equal share of arguments for which (i) the conclusion follows from the premises, (ii) the conclusion contradicts the premises, (iii) the conclusion is independent of the premises. \section{Experiments and Results} \label{sec:experiments} \paragraph{As first and main experiment} we train our base model (see Section~\ref{subsec:models}) on the {\small AAAC01} corpus, and evaluate the resulting ArgumentAnalyst model out-of-domain on {\small AAAC02}. ArgumentAnalyst undergoes multi-task training on 21 generative modes, which are interpreted as sequence-to-sequence tasks (the training set-up is further described in Appendix~\ref{app:training_setup}). The evaluation of ArgumentAnalyst on {\small AAAC02} proceeds in two steps: (1.) prediction: produces output in accordance with 16 different generative chains (Appendix~\ref{app:gen_chains}); (2.) metrics application: assesses the quality of the generated output by means of the systematic and exegetic metrics of the DeepA2 framework (see Section~\ref{subsec:metrics}). \begin{table*}[htbp] \begin{small} \begin{tabularx}{\linewidth}{l *{12}{Y}} \toprule {} & \multicolumn{6}{c}{\emph{systematic metrics} (\textbf{\small SYS-*})} & \multicolumn{6}{c}{\emph{exegetic metrics} (\textbf{\small EXE-*})} \\ \cmidrule(r){2-7} \cmidrule(r){8-13} chain&\textbf{\small PP}&\textbf{\small RP} & \textbf{\small RC} & \textbf{\small US} & \textbf{\small SCH} & \textbf{\small VAL} & \textbf{\small MEQ} & \textbf{\small RSS} & \textbf{\small JSS} & \textbf{\small PPR} & \textbf{\small PPJ} & \textbf{\small TE} \\ \midrule straight & .95 & .97 & .96 & .96 & .33 & .73 & .80 & -.08 & -.10 & .93 & .93 & .63 \\ herm.\ cy. & .95 & .98 & .95 & .93 & .31 & .72 & .82 & .16 & .12 & .93 & .92 & .71 \\ logic.\ str. & .95 & .97 & .96 & .95 & .32 & .72 & .82 & .11 & .00 & .93 & .92 & .69 \\ pooling & 1.0 & 1.0 & 1.0 & 1.0 & .73 & 1.0 & 1.0 & .26 & .29 & .96 & .96 & .97 \\ \textit{oracle} & \textit{1.0} & \textit{1.0} & \textit{1.0} & \textit{1.0} & \textit{1.0} & \textit{1.0} & \textit{1.0} & \textit{.30} & \textit{.37} & \textit{1.0} & \textit{1.0} & \textit{1.0} \\ \bottomrule \end{tabularx} \end{small} \caption{Performance of ArgumentAnalyst on the {\small AAAC02} data as measured by systematic and exegetic metrics. Rows display results for three illustrative generative chains (\emph{straight}, \emph{hermeneutic cycle}, \emph{logical streamlining}), for the item-wise best performing generative chain out of all 16 chains (\emph{pooling}), and for oracle performance (\emph{oracle}), which one obtains by applying the metrics to the target data itself.} \label{table:main_results} \end{table*} Table~\ref{table:main_results} reports the ability of ArgumentAnalyst to generate systematically correct and exegetically adequate argument reconstructions. We obtain similar global results with the three chains \emph{straight}, \emph{hermeneutic cycle}, and \emph{logical streamlining}, whose generated reconstructions mainly differ in terms of internal coherence ({\small EXE-RSS}, {\small EXE-JSS}) and text exploitation ({\small EXE-TE}). However, the different generative chains complement each other, as shown by \emph{pooling}, which does not only outperform individual chains, but nearly attains oracle performance. \begin{table}[htbp] \begin{small} \begin{tabularx}{\linewidth}{l *{5}{Y}} \toprule {} & \multicolumn{2}{c}{\emph{ArgAn}\textsubscript{EB}} & \multicolumn{2}{c}{\emph{ArgAn}\textsubscript{AAAC,EB}} & \emph{EntWr}\\ \cmidrule(l){2-3} \cmidrule(l){4-5} steps & straight & herm.\ cycle & straight & herm.\ cycle & {} \\ \midrule 1 & .863 & .866 & .816 & .871 & .951 \\ 2 & .798 & .815 & .813 & .826 & .886 \\ 3 & .812 & .815 & .826 & .806 & .858 \\ 4 & .757 & .791 & .820 & .822 & .838 \\ $\geq$ 5 & .795 & .811 & .786 & .773 & .742 \\ any & .819 & .830 & .816 & .834 & .879 \\ \bottomrule\end{tabularx} \caption{Predictive performance of ArgumentAnalyst ({\emph{ArgAn}\textsubscript{EB}}, \emph{ArgAn}\textsubscript{AAAC,EB}) and EntailmentWriter (\emph{EntWr}) for identifying reason statements in an input text (metric {\small SYS-PPR}) on the \emph{EntailmentBank task2} dataset.} \label{table:ent_bank} \end{small} \end{table} Moreover, ArgumentAnalyst produces much better reconstructions of simple inferences and plain presentations -- compared to complex inferences and mutilated presentations, i.e., difficult problems (cf.\ Table~\ref{table:main_subsets} in App.~\ref{app:add_results}). In addition, within one and the same subset, substantial differences show up between the three generative chains. Globally speaking, \emph{hermeneutic cycle} outperforms the other two chains for difficult problems. \smallskip \noindent \emph{Is {ArgumentAnalyst} capable of reliable self-evaluation?} We have \textbf{validated the logic metric} ({\small SYS-VAL}), which passes on a self-generated formalization of the reconstructed argument to a theorem prover, in three ways: First of all, ArgumentAnalyst correctly recognizes \emph{target} arguments as valid (with accuracy 92.7\%), which has been verified by running the formalization subchain on target data. Secondly, virtually every generated argument with all-correct scheme instantiations (i.e., {\small SYS-SCH} $=1$) is also -- and correctly -- recognized as logically valid. Thirdly, a manual analysis (\textbf{human-in-the-loop}) of 100 generated arguments with incorrect scheme instantiation (i.e., {\small SYS-SCH} $<1$) reveals a high rate of false negatives: roughly one half of all inferences that are not automatically identified as an instantiation of the given scheme actually do correctly instantiate it. The accordingly \emph{adjusted} global ratio of correct scheme instantiations (Table~\ref{table:main_results}) equals roughly 0.65 (rather than 0.31--0.33), which is consistent with the ratio of logically valid arguments being 0.72--0.73. \smallskip \noindent \emph{Do reconstructed arguments exhibit basic semantic flaws?} Regarding the full dataset, ArgumentAnalyst produces nearly \textbf{flawless argument reconstructions}, committing basic errors (petitio, redundancy, unused statements) only very rarely (Table~\ref{table:main_results}). And even for very difficult problems, two thirds of all generated arguments display no basic flaw whatsoever (Table~\ref{table:main_subsets}, {\small SYS-PP \& SYS-RP \& SYS-RC \& SYS-US}). \smallskip \noindent \emph{Are reconstructed arguments logically valid?} Roughly 70\% of all arguments generated by one of the three chains are logically valid (Table~\ref{table:main_results}). More importantly, though, for virtually every source text in the dataset, there is at least one chain (out of 16) which reconstructs the text as a valid argument (\emph{pooling}). Given that logical validity can be automatically assessed, the \emph{pooled} system may thus \textbf{guarantee to yield a valid reconstruction}. Concerning different problem types (Table~\ref{table:main_subsets}), \emph{hermeneutic cycle} clearly outperforms the other chains as soon as the problem gets difficult. Additional analysis shows that ArgumentAnalyst can also \textbf{cope with underdetermination}, as 68\% of all generated arguments whose final conclusion differs ($\textrm{BLEU} \leq .8$) from the target argument's one -- i.e., arguments that are not reconstructed as expected given the target data -- are still logically valid. \smallskip \noindent \emph{Are the generated interpretations internally coherent?} The generative chain \emph{hermeneutic cycle} yields comprehensive argument reconstructions where premises (\textbf{\small P}) and conclusions (\textbf{\small C}) fit much better to detected reasons (\textbf{\small R}) and conjectures (\textbf{\small J}) than \emph{straight} or \emph{logical streamlining} ({\small EXE-RSS, EXE-JSS}). This holds globally (Table~\ref{table:main_results}), as well as for easy, and for difficult problems (Table~\ref{table:main_subsets}). Note that the \emph{oracle} baseline for metrics {\small EXE-RSS, EXE-JSS} is well below 1, which reflects the fact that source texts may present arguments in highly mutilated ways; it is nearly attained by \emph{pooling} the 16 different generative chains (Table~\ref{table:main_results}). \smallskip \noindent \emph{Can ArgumentAnalyst detect reasons and conjectures, and fully exploit the text?} The evaluation demonstrates that reason/conjecture detection on {\small AAAC02} is a relatively easy task ({\small EXE-PPR, EXE-PPJ}). In contrast, fully exploiting a text (i.e., generating an argument with implicit final conclusion if and only if the underlying target argument has an implicit final conclusion, {\small EXE-TE}) is seemingly more challenging (Table~\ref{table:main_results}). Again, \emph{hermeneutic cycle} achieves best text exploitation, performing, however, clearly below \emph{oracle} baseline -- which may simply reflect the degree of underdetermination in the {\small AAAC02} corpus. \paragraph{In a second experiment} we train two models on the imported \emph{EntailmentBank} (\emph{task1} and \emph{task2}) dataset (see Section~\ref{sec:datasets}), namely: (1.) our base model (T5), which yields Argument\-Analyst\textsubscript{EB}; (2.) the ArgumentAnalyst model pretrained on {\small AAAC02} \citep[resulting in an intermediary pre-training set-up similar to][]{phang2018sentence,Geva2020InjectingNR}, which yields ArgumentAnalyst\textsubscript{AAAC,EB}. Since the \emph{EntailmentBank} data doesn't contain formalizations, we can only train on 14 modes, which are interpreted as sequence-to-sequence tasks (see Appendix~\ref{app:training_setup}). We evaluate the models on \emph{task2} of \emph{EntailmentBank} only, which contains problems with a relatively large number of distractors, and proceed in two steps as before: prediction (with 11 different generative chains) and metrics application. \citet{dalvi2021explaining} report the ability of \emph{EntailmentWriter} (a fine-tuned T5-11b model) to correctly distinguish relevant premises of an argument from distractors in terms of a F1-score, which corresponds to our metric {\small EXE-PPR}. That's why the sole focus in this second experiment is on {\small EXE-PPR}. Table~\ref{table:ent_bank} describes the ability of ArgumentAnalyst models to correctly tell apart relevant premises from mere distractors in the \emph{EntailmentBank task2} dataset for two generative chains (\emph{straight}, which directly outputs reason statements, and \emph{hermeneutic cycle}, which tries to reconstruct the argument first and uses both source text and argument to identify reasons), and compares this with the performance of \emph{EntailmentWriter} \citep[scores from][]{dalvi2021explaining}. The results, shown separately for arguments with a specific number of inference steps, let us draw three conclusions: First, \emph{ArgumentAnalyst} outperforms \emph{EntailmentWriter} on difficult problems with more than 4 inference steps / sub-arguments. Second, using the sophisticated chain \emph{hermeneutic cycle} improves predictive performance compared to the simple \emph{straight} chain. Third, the chain \emph{hermeneutic cycle} (unlike \emph{straight}) generally benefits from intermediary pre-training on {\small AAAC} -- caveat: not so for arguments with more than 4 steps. This latter observation might be due to the fact that the {\small AAAC02} corpus, by construction, doesn't contain arguments with more than 4 steps, so that pre-training biases the model towards shorter arguments. \paragraph{In a third experiment} we explore the following hypothesis: \begin{description} \item[Informative higher-order evidence.] The degree to which ArgumentAnalyst struggles in reconstructing a given argument (presented in the source text) as logically valid is a reliable indicator for whether the original argument is fallacious or not. \end{description} To test this hypothesis, we apply ArgumentAnalyst (trained on {\small AAAC02}, see above) to the \emph{RuleTaker} data as imported into the DeepA2 framework (see Section~\ref{sec:datasets}): ArgumentAnalyst produces -- by means of 13 generative chains -- comprehensive reconstructions, to which the systematic and exegetic metrics are applied. \emph{RuleTaker} contains an equal share of arguments whose conclusions follow from (label=valid), contradict (label=contradiction), or are independent of (label=neutral) the corresponding premises. Now, informative higher-order evidence would allow us to correctly predict these labels. And this is exactly what we observe: First, if reconstructions of one and the same source text which are independently generated with different chains agree (disagree), then the original argument tends to be valid (invalid). Second, by training simple classifiers on our argumentative metrics and further properties of the reconstructions, we robustly achieve a predictive accuracy 10\% above the random baseline. While this is far below the SOTA results of tailor-made RuleTaker \citep{Clark2020_TransSoftReas} and ProofWriter \citep{tafjord2020proofwriter} models on this data, our findings nonetheless confirm the above hypothesis. \section{Conclusion} \label{sec:conclusion} In this paper, we have presented and implemented a multi-angular, modular framework for deep argument analysis (DeepA2). It allows for defining a large variety of generative modes by combining different dimensions of the data. These modes, in turn, can be concatenated into complex generative chains. ArgumentAnalyst -- a text-to-text model set up and trained within the DeepA2 framework -- yields plausible reconstructions of argumentative texts. Our empirical findings vindicate the overall framework and highlight the following \textbf{advantages of a multi-angular, modular design} in general: First of all, modular chains may emulate established, well-proven, typically piece-meal, scholarly techniques for text analysis (heuristics), which hence may provide \textbf{normative, methodological guidance} in setting up NLP systems. Secondly, by defining and implementing different modular chains, and investigating the plurality of generated solutions, one can systematically \textbf{explore the system's uncertainty as well as the tasks's underdetermination}. Thirdly, monitoring the system during modular computation yields diagnostically useful information (e.g., intermediary results) which not only describes the model's performance on the given problem, but which additionally allows us -- as \textbf{higher-order evidence} -- to characterize (e.g., classify) the original problem in the first place. Fourthly, breaking down a complex task into sub-tasks with intermediary results that can be further processed and re-combined helps to \textbf{overcome input size limitations} of neural language models. Fifthly, modular generation with meaningful modes allows users to follow the system, comprehend generated solutions, verify sub-steps and detect errors -- the NLP system becomes a \textbf{transparent, explainable AI} \citep{Miller2019ExplanationIA}. Finally, modular NLP systems as described by DeepA2 may be connected to a user-interface which promises \textbf{fine-grained interactive control} of modular generations and seamless cognitive cooperation of AI and human experts in analysing texts.
{ "attr-fineweb-edu": 1.727539, "attr-cc_en_topic": 0, "domain": "arxiv" }
BkiUdeLxaJJQnKrAkH2I
\section*{Abstract} \input{abstract} \section{Introduction} \label{Introduction} At present in competitive sports there are a lot of talented sportsmen and the differences between individual performance are often very small to spot. It catalyses a race condition to be present already in the practising period, thus more and more coaches and players seek finding different means and aids to elaborate and make the preparation for the tournaments always more effective. There are a lot of new technological achievements available in the market. Small electronic devices are capable of measuring various metrics including those that are relevant for the sports, like heart rate and blood temperature and pressure registers, pedometers, speedometers and accelerometers to name a few. Using such devices is more than necessary since the results in a competition and then the final scores may depend on millesimal of millimeters. Another reason why to use measurement devices yielding objective performance metrics is because when sportsmen are overloaded in a performance, with adrenalin in their vein, it is hard if possible for them to spot and fix their failures. In certain types of sports a continuous or prompt feedback is definitely helpful, squash is one of them. Squash is a very rapid ball and racquet game with typically 40-60 hit events per second. Depending on the various surfaces the ball interacts during its flight defines the different shot classes. Some shot classes are very rare due to being tricky to deliver or may occur only in circumstances where the rally may seem already lost. So knowing the detailed statistics of various hits and shot patterns talks about the quality of the sportsmen and are very important information for both the coaches and the squash players. However, these data and their statistical analysis are not available at present because of the paste of squash. Given its fast speed the human processing of events enables the score registration in real-time only, but the recording of shot types and the detailed sequences of the shots are rendered definitely impossible. One possible solution might be to analyse videos of the matches using image processing as it has been shown to work for the tennis~\cite{broadbent_tenniscam}. Though for the squash it turns out that this approach remains difficult even with the use of high speed and high resolution cameras, due to the small size of the ball and the view provided by the cameras. Traditionally cameras are placed behind the court, therefore the players will most often cover the sight of the ball during the match making the reconstruction of ball trajectories an inauspicious problem. To provide reliable statistics by this approach will require human processing and validation so in the end a thorough analysis of the tournament will cost many times of the duration of this sport events in man-hours. In this study we introduce a framework to unhide these information based on the analysis of acoustic data. Playing squash produces characteristic sound patterns. The sound footprint of each rally is a projection of all the details about the strength and the position of the ball hitting various surfaces in the court. Naturally, this pattern, which maintains the natural order of the events, is contaminated by some additional noise. Recording the sound in more directions allows for inverting the problem and for giving statistical statements about where and what type of an events took place in the play. We are focusing on events generated by the ball hits, which serves as a basis for further analysis and the reconstruction of shot patterns or the ball trajectories. Note, the framework to be detailed can be applied to various other types of ball games. The subsequent sections of the paper are structured as follows. Section~\ref{sec:equipment} details the hardware components installed in a squash court to record input. In sections \ref{sec:BID}, \ref{sec:localization} and \ref{sec:classification} mathematical models are presented to detect, localize and classify audio events respectively. The data collection is described and the results are presented in section~\ref{sec:results}. Finally, methods described in this study is compared to the related works of the topic in section~\ref{sec:relwork}. \section{The measurement equipment} \label{sec:equipment} This study is based on the analysis of sound waves generated during the squash play. Among many other, squash is a game where various different sources of sound are present, including the players themselves (their sighing or their shoes squeaking on the floor), the ball hitting surfaces (like the walls, the floor or the racquet) and also external sources (including the ovation of the spectators or sound generated in an adjacent court). Here we focus on audio events related to the ball. When planning the experiments the following constraints had to be investigated and satisfied. The framework should be fast in signal processing point of view, because the target information can be most valuable when in a competitive situation it helps fine tune tactical decisions made by the coach and/or the player. The cost of the equipment should be kept low and the installation of the sensors requires a careful design to prevent them from disrupting the play. As the spatial localization of the ball is one of the fundamental goals a lower bound to the sampling rate is enforced to remain able to differentiate between displaced sound sources. In Fig~\ref{fig:overall} the hardware and software components are sketched. Hardware components include 6~audio sensors, three of which are omnidirectional microphones (Audio Technica ES945) sinking in the floor and the rest of them are cardioid microphones (Audio Technica PRO 45) hanging from the top. Amplification and sampling of the microphone signals are done by a single dedicated sound card (Presonus AudioBox 1818VSL) so that all channels in a sample frame are in synchrony. The highest sampling rate of the sound card is used (96~kHz), so by each new sample the front of a sound wave travels approximately 6~mm. \begin{figure}[!h] \includegraphics[width=0.9\textwidth]{overall.eps} \caption{{\bf A schematic view of the components.} To process audio events in the sqash court a three component architecture was designed. } \label{fig:overall} \end{figure} According to their functionalities software components fall in the following groups. Signal processing is done in the analysis module, which include the detection of the audio events, the classification and the filtering of the detections and after matching event detections of more channels the localization of the sound source. While these steps of signal processing can be done real-time a storage module is also implemented so that the audio of important matches can be recorded. Recording of data helps training of the parameters of the classification algorithms, and it also enables a whole re-analysis of former data with different detectors and/or different classifiers. All output generated by the Analysis module is fed to the output queue. Hardware and software components are triggered and reconfigured via a web services API exposed by the Control interface. Finally, to be able to listen to what is going on in the remote court a Monitoring interface provides a mixed, downsampled and compressed live stream across the web. \section{The ball impact detection} \label{sec:BID} The localization and the classification of ball hits both require the precise identification of the beginning of the corresponding events in the audio streams. The detection of ball impact events is carried out for each audio channels independently and in a parallel fashion, which speeds up the overall performance of the framework significantly. Different detection algorithms of various complexities were investigated two extreme cases are sketched here. The first model assumes that the background noise follows the normal distribution. An event is detected if new input samples deviate from the Gaussian distribution to a certain predefined threshold value. Next for each channels the mean and the variance estimates of a finite subset of the samples are continually updated according to the Welford's algorithm~\cite{welfordvariance}. The second method is an extension of the windowed Gaussian surprise detection by Schauerte and Stiefelhagen~\cite{BayesianSurprise}. The algorithm tackles the problem evaluating the relative entropy~\cite{kullback}. It is first applied in the frequency domain and if there is a detection then a finer scale search is carried out in the time domain. The power spectrum of $w$-sized chunks of windowed data samples is calculated. Between detection regime the series of the power spectra is modelled by a $w$-dimensional Gaussian. The a priori parameters of the distribution are calculated for $n$ elements in the past, and the posteriori parameters are approximated including the new power spectrum. The Kullback Leibler divergence between the a priori and the posteriori distributions exceeds a predefined threshold when a new detection takes place {\small\[ \mathrm{S}_i = \frac{1}{2}\left[ \log \frac{|\Sigma_i|}{|\Sigma'_i|} + \text{Tr}\left(\Sigma_i^{-1} \Sigma'_i\right) - w + \left(\mu'_i - \mu_i\right)^T \Sigma_i^{-1} \left(\mu'_i - \mu_i\right) \right], \]}% where primed parameters correspond to the posteriori distribution. The time resolution at this stage is $w$ and to increase precision a new search is carried out in the time domain evaluating the Kullback Leibler divergence for 1-d data. In order to bootstrap a priori distribution parameters $n$ samples from the former windows are used. \section{The localization of sound events} \label{sec:localization} In this section we lay down a probabilistic model to determine the time and location of an audio event. For a unique event we denote these unknowns $t$ and $\mathbf{r}_\text{ev}$ respectively. The inputs required to find the audio event are the locations of the $N+1$ detectors $\mathbf{r}_i^\text{mike}$ and the timestamps $\tau_i$ when these synchronized detectors sense the event ($0\leq i\leq N$). The probability that microphone $i$ detects an event at $(\mathbf{r}, t)$ is $$p(t_i, r_i) = \frac{1}{\sqrt{2\pi}\sigma_i} \exp -\frac{(ct_i-r_i)^2}{2\sigma_i^2 c^2},$$ where $c$ is the speed of sound, $t_i = \tau_i - t$ is the propagation delay and $r_i = ||\mathbf{r}-\mathbf{r}_i^\text{mike}||$ is the distance between the sound source and the microphone. The uncertainty $\sigma_i$ depends on the characteristics of the microphone, which we will consider constant in the first approximation. By introducing relative delays $\hat\tau_i=\tau_i-\tau_0$ the joint probability of relative delays detected is $$p(\hat\tau_1,\dots\hat\tau_N)=\int\mathrm{d}t_0\, p(t_0, r_0) \prod_{i=1}^{N} p(\hat\tau_i+t_0, r_i).$$ The formula can be rearranged $$p(\hat\tau_1,\dots\hat\tau_N)=\frac{1}{\sqrt{2\pi}^{N+1}\prod_{i=0}^N\sigma_i}\int\mathrm{d}t_0\, e^{-f(t_0)},$$ where $f(t_0)=\sum_{i=0}^N \frac{(c\hat\tau_i + c t_0 - r_i)^2}{2\sigma_i^2 c^2}$ is a quadratic function and in the expression for $p$ the Gaussian integral follows $$\int\mathrm{d}t_0\, e^{-f(t_0)}=\sqrt{\frac{2\pi}{f^{\prime\prime}(t^*_0)}}e^{-f(t^*_0)}.$$ The first order derivative $f^{\prime}$ vanishes in $t^*_0=\Sigma^2\,\sum_{i=0}^N\frac{1}{\sigma_i^2}\left(\frac{r_i}{c}-\hat\tau_i\right)$, where $\Sigma^2=1/\sum_{i=0}^N\frac{1}{\sigma_i^2}$ is introduced for convenience. After substitution of $t^*_0$ we arrive at $$f(t^*_0)=\frac{1}{2}\left\{\sum_{i=0}^N\frac{1}{\sigma_i^2}\left(\frac{r_i}{c}-\hat\tau_i\right)^2 - \Sigma^2 \left[\sum_{i=0}^N\frac{1}{\sigma_i^2}\left(\frac{r_i}{c}-\hat\tau_i\right) \right]^2 \right\}.$$ This formula can be interpreted as a variance formula, which can be rewritten $$f(t^*_0)=\frac{1}{2\Sigma^2} \sum_{i=0}^N\frac{1}{\sigma_i^2} \left[\sum_{j=0}^N\frac{1}{\sigma_j^2} \left(\frac{r_i-r_j}{c}-(\hat\tau_i-\hat\tau_j)\right)\right]^2.$$ A good approximation of the audio event maximizes the likelihood $p$, which at the same time minimizes $f(t^*_0)$, thus we seek the solution of $\nabla_\mathbf{r}f(t^*_0)=0$ equations. In practice $f$ behaves well and its minimum can be found by gradient descent method. Fig~\ref{fig:likelihood} shows a situation, where the ball hit the front wall and 6 microphones detect this event error free. To show the functions behaviour $f$ is evaluated in the floor, in the front wall and in the right side wall. Finding the minimum of $f$ takes less than ten gradient steps. \begin{figure}[!h] \includegraphics[width=0.9\textwidth]{likelihood.eps} \caption{{\bf The visualization of the likelihood function.} The ball hit the front wall. $f(t^*_0)$ can be evaluated in space given the positions of the sensors (marked by white disks) to find its minimum, which indicates where the event took place. (0.5~m from the right corner and 3~m above the floor, marked by a blue disk.)} \label{fig:likelihood} \end{figure} The likelihood based localization model is derived for a noiseless situation, assuming the perfect detection of samples in each channel. In real environment, however, noise is present and the error deviating the detection is exposed in the final result of the localization. In order to track this effect the method was numerically investigated as follows. 10000 points in the volume of the court is selected randomly and the sound propagation is calculated in each six microphones. Next for the ideal detections Gaussian noise is added in all channels, with increasing variation ($\sigma=1, 10, 50$). In Fig~\ref{fig:errorpropagation} the noiseless case is compared to cases with increasing errors. In the figure the cumulative distribution of the error, ie. the difference between the randomly selected point and the location guess by the model is presented. Naturally, by increasing the detection error the error in the position guess is increasing, but the model performs very well, for poor signal detectors the error in localization is in the order of 10~cm. \begin{figure}[!h] \includegraphics[width=0.9\textwidth]{error.eps} \caption[error]{{\bf The cumulative distribution of the localization error.} For a noiseless case most often localization will have an error comparable to the size of the ball. With a bad detector ($\sigma=50$ samples) still the localization is exact in the order of 10~cm.} \label{fig:errorpropagation} \end{figure} \section{Classification} \label{sec:classification} It is the task of the classification module to distinguish between the different sound events according to their origin. Sound events are classified based on the type of the surface that suffered from the impact of the ball. This surface can be the wall, the racquet, the floor or the glass. When the sound does not fit any of these classes, like the squeaking shoes, then it is classified as a false event. The classification enhances the overall performance of the system by two means. First, skipping to localize the false events speeds up the processing. And second, in doubtful situations when the calculated location of the event falls near to multiple possible surfaces, by knowing the type of the surface that suffered from the impact can reinforce the localization. For example a sound event localized a few centimetres above the floor could be generated by a racquet hit close to the floor or by the floor itself. Classification utilizes feed-forward neural networks that had been trained with backpropagation~\cite{Hinton:2012, bugatti2002audio,shao2003applying,wang2007sound}. The training sets are composed of vectors belonging to 5461 audio events, which have been manually labelled. Based on these audio events two types of input were constructed for teaching. In the first case temporal data is used directly. A vector element of the training set $T_1$ is the sequence of the samples around the detections for each channels. \[ T_1 = \left\{ (a_{d-w},\dots,a_d,\dots,a_{d+w}) \right\}, \] where the channel index is dropped and $d$ is a uniq detection and $w$ sets the length of the vector. Given the sampling rate 96~kHz and setting $w=300$ the neural network is taught by $6.25$~millisecond long data. The second feature set $T_2$ is built up of the power spectra. \[ T_2 = \left\{ |\mathcal{F}(a_{d},\dots,a_{d+w})| \right\}, \] where $\mathcal{F}$ denotes the discrete Fourier transform. A single neural network model where all event classes are handled together performed poorly in our case. Therefore, separate discriminative neural network models were built for all four classes (racquet, wall, floor and glass impact) and for both of the training sets. It has also been investigated if any of the input channels introduce discrepancy. In order to discover this effect models were built and trained for each unique channels and another one handling the six channels together. Note, that not all possible combinations of the models were trained due to the fact that some channels poorly detected certain events, for example microphones near the front wall detected glass events very rarely. In the training sets the class of interest was always under-represented. To balance the classifier the SMOTE~\cite{chawla2002smote} algorithm was used, which is a synthetic minority over-sampling technique. A new element is synthesized as follows. The difference between a feature vector from the positive class and one of its $k$ nearest neighbours is computed. The difference is blown by a random number between 0 and 1, to be added to the original feature vector. This technique forces the minority class to become more general, and as a result, the class of interest becomes equally represented like the majority set in the training data. Different network configurations were realized to find that for the direct temporal input a 20 hidden layer network (with 10 neurons in each layer) performed the best, while for the spectra input a 10 hidden layer (each layer with 10 neurons) is the best choice. \section{Analysis} \label{sec:results} In this section the performance of each modules of the framework and the datasets are presented. \subsection{Datasets} \label{sec:datasets} In order to analyse the components of the framework implementing the proposed methods two audio record sets were used. \emph{Audio 1} was recorded on the 18th of May 2016 when a squash player was asked to target front wall shots to specific areas of the wall. This measurement was necessary to increase the cardinality of front wall and racquet hit significantly in the training datasets $T_1$ and $T_2$, and it was also manually processed to be able to validate the operations of the detector and the localization components. \emph{Audio 2} resembles data in a real situation as it contains a seven minutes squash match recorded on the 8th of March 2016. Table~\ref{table:audio} summarizes the details of these audio recordings. \begin{table}[!ht] \centering \caption{{\bf The content of the audio files.}} \begin{tabular}{|c|l|r|r|r|r|r|r|r|} \hline & Class & Ch0& Ch1& Ch2& Ch3& Ch4& Ch5& \cellcolor{gray!20!white} Total \\ \hline \multirow{3}{*}{\begin{turn}{90}Audio 1\end{turn}} & Front wall& 165& 165& 165& 165& 165& 165& \cellcolor{gray!15!white} 990 \\ & Racquet& 166& 166& 166& 166& 166& 166& \cellcolor{gray!15!white} 996 \\ & \cellcolor{gray!15!white} Total& \cellcolor{gray!15!white} 331& \cellcolor{gray!15!white} 331& \cellcolor{gray!15!white} 331& \cellcolor{gray!15!white} 331& \cellcolor{gray!15!white} 331& \cellcolor{gray!15!white} 331& \cellcolor{gray!15!white}1986 \\ \hline \multirow{7}{*}{\begin{turn}{90}Audio 2\end{turn}} & Front wall& 100& 109& 108& 110& 107& 111& \cellcolor{gray!15!white} 645 \\ & Racquet& 112& 112& 113& 110& 109& 99& \cellcolor{gray!15!white} 655 \\ & Floor& 85& 70& 75& 19& 115& 11& \cellcolor{gray!15!white} 375 \\ & Glass& 46& 20& 24& 15& 62& 11& \cellcolor{gray!15!white} 178 \\ & False event & 227& 274& 254& 264& 456& 147& \cellcolor{gray!15!white} 1622 \\ & \cellcolor{gray!20!white} Total& \cellcolor{gray!15!white} 570& \cellcolor{gray!15!white} 585& \cellcolor{gray!15!white} 574 & \cellcolor{gray!15!white} 518& \cellcolor{gray!15!white} 849& \cellcolor{gray!15!white} 379& \cellcolor{gray!15!white} 3475\\ \hline \end{tabular} \begin{flushleft} The count of events in \emph{Audio 1} and \emph{Audio 2} broken down for each class and each channel. In total 5461 events have been labeled. \end{flushleft} \label{table:audio} \end{table} Training the neural network models require properly labelled datasets. After applying the ball impact detection algorithm to the audio records the timestamps of the detected events were manually categorized as front wall event, racquet event, floor event, glass event or false event. \subsection{Detection Results} \label{sec:resdetector} The performance of the detector is analysed by comparing the timestamp reported by the detector $d_\mathrm{detector}$ and the human readings $d_\mathrm{human}$. For \emph{Audio 1} in Fig~\ref{fig:deterr} the cumulative probability distribution of the time difference is shown for each channel and in Table~\ref{table:deterrch} the average error and its variance are shown grouped by the two event types present in the dataset. One can observe that the detectors in channels \emph{ch4} and \emph{ch5} perform poorly. When estimating the position discarding one of or both of these channels will enhance the precision of the localization. \begin{figure}[!h] \includegraphics[width=0.9\textwidth]{detector_err.eps} \caption{{\bf The error of the detector.} The detection error is defined as the difference between the timestamps generated by the module and read by a human.} \label{fig:deterr} \end{figure} \begin{table}[!ht] \centering \caption{{\bf The class and channelwise error of the detector.}} \begin{tabular}{|l|c|c|} \hline & Front wall & Racquet \\ \hline ch0 & 9.6 $\pm$ 46.0 & -5.8 $\pm$ 63.7 \\ ch1 & 3.1 $\pm$ 1.9 & -9.3 $\pm$ 130.6 \\ ch2 & 3.5 $\pm$ 5.4 & 21.3 $\pm$ 129.3 \\ ch3 & 3.0 $\pm$ 1.9 & 7.3 $\pm$ 39.9 \\ ch4 & 221.4 $\pm$ 476.5 & 116.4 $\pm$ 401.3 \\ ch5 & 210.8 $\pm$ 512.3 & 23.5 $\pm$ 136.2 \\ \hline \end{tabular} \begin{flushleft} The error of the detector algorithm is measured in samples for the various classes and all channels. \end{flushleft} \label{table:deterrch} \end{table} In Table~\ref{table:deterrtype} the error statistics for dataset \emph{Audio 2} is shown. Intensive events, like front wall impacts, can be detected precisely, whereas the detection of milder sounds like a floor or glass impact is less accurate. \begin{table}[!ht] \centering \caption{{\bf Classwise error of the detector.}} \begin{tabular}{|l|c|c|c|} \hline Class & Audio 1 & Audio 2 \\ \hline Front wall & 4.8 $\pm$ 23.3 & 6.9 $\pm$ 19 \\ Racquet & 3.4 $\pm$ 99.8 & 107 $\pm$ 85 \\ Floor & 38.0 $\pm$ 141.1 & 125 $\pm$ 149 \\ Glass & \emph{n.a.} & 183 $\pm$ 173 \\ \hline \end{tabular} \begin{flushleft} The statistics of the dataset \emph{Audio 1} is calculated for 660 events for each class excluding Floor events, counting 24 pieces. For \emph{Audio 2} 200 events were available for each class. \end{flushleft} \label{table:deterrtype} \end{table} The false discovery and the false negative rate of the detector were examined on \emph{Audio 2}. False positives are counted if detector signals for a false event, and false negatives are the missing detections. The results are summarised in Table~\ref{table:detectorconfusion}. \begin{table}[!ht] \centering \caption{{\bf Performance of the detector.}} \begin{tabular}{|c|r|r|r|r|r|r|} \hline False alarm & Ch0 & Ch1 & Ch2 & Ch3& Ch4 & Ch5 \\ \hline FDR & 39\% & 47\% & 44\% & 51\% & 54\%& 39\% \\ FNR& 16\%& 24\%& 22\%& 38\%& 5\% & 43\% \\ \hline \end{tabular} \begin{flushleft} False Discovery Rate (FDR: $\frac{n_\mathrm{fp}}{n_\mathrm{tp} + n_\mathrm{fp}}$) and False Negative Rate (FNR:$\frac{n_\mathrm{fn}}{n_\mathrm{fn} + n_\mathrm{tp}}$) of the detector based on 3475 events. \end{flushleft} \label{table:detectorconfusion} \end{table} \subsection{Classification Results} \label{CP} Approaching the problem at first and to use as much information as possible to teach the neural networks a large training set was constructed of the union of the detections of all the six channels. However, this technique gave poorer results than treating all the channels separately. The different settings of the microphones and the distinct acoustic properties of the squash court at the microphone positions are found to be the reasons of that phenomenon. Eight-fold cross-validation~\cite{arlot2010survey} was used on the datasets to evaluate the performance of the classifiers. Three measures are investigated closer: the accuracy, the precision and the recall. Accuracy (in Fig~\ref{fig:cl_acc}) is the ratio of correct classifications and the total number of cases examined ($\frac{n_\mathrm{tp} + n_\mathrm{tn}}{n}$). Precision (in Fig~\ref{fig:cl_prec}) is the fraction constrained to the relevant cases ($\frac{n_\mathrm{tp}}{n_\mathrm{tp} + n_\mathrm{fp}}$). Recall (in Fig~\ref{fig:cl_rec}) is the fraction of relevant instances that are retrieved ($\frac{n_\mathrm{tp}}{n_\mathrm{tp} + n_\mathrm{fn}}$). \begin{figure}[!h] \centering \includegraphics[width=.9\textwidth]{class_accuracy.eps} \caption{{\bf The classifiers' accuracy.} The classwise accuracy of each channel is presented in $T_1$ (blue) and $T_2$ (red) input sets. Front wall classification gives high accuracy on all channels in both sets. It is interesting to observe that floor classification is more accurate in input $T_2$. Racquet classification performs best on channel 2 in both sets.} \label{fig:cl_acc} \end{figure} \begin{figure}[!h] \centering \includegraphics[width=.9\textwidth]{class_precision.eps} \caption{{\bf The classifiers' precision.} The classwise precision of each channel is presented in $T_1$ (blue) and $T_2$ (red) input sets. Front wall classification gives high precision in input $T_1$. The precision of floor classification is low. Racquet classification still performs best on channel 2. The precision of glass classification is only acceptable on channel 4.} \label{fig:cl_prec} \end{figure} \begin{figure}[!h] \centering \includegraphics[width=.9\textwidth]{class_recall.eps} \caption{{\bf The classifiers' recall.} The classwise recall of each channel is presented in $T_1$ (blue) and $T_2$ (red) input sets. The performance of front wall classification is reliable. The recall of racquet classification is high on channels 1 and 2 in both sets. However, the performance of floor and glass classifications is low.} \label{fig:cl_rec} \end{figure} Table~\ref{table:bestresults} summarises the results of the best classifiers for each class. It can be seen that the classification of the front wall and the racquet events is reliable. However, the precision and the recall of floor and glass events are poor. The reason for it is that these classes are under-represented in the data sets. Whenever $x$, an unseen sample comes, the best classifiers of each class are applied on the new element. The prediction of the class label $\hat{y}$ to which $x$ belongs to is computed by the following formula: \[ \hat{y}= \left\{ \begin{array}{c l} \underset{k\in C}{\arg\max}\bigg\{\frac{f_k(x) - \mathrm{cut}_k}{1-\mathrm{cut}_k} \frac{\mathrm{prec}_k}{\sum_{i\in C}\mathrm{prec}_i}\bigg\}, & \exists k:f_k(x) > \mathrm{cut}_k \\ \text{false event}, & \text{otherwise} \end{array}\right. \] where $C$ is the set of class labels without the class of false events and $f_k(x)$, $\mathrm{cut}_k$ and $\mathrm{prec}_k$ are the confidence, the cutoff value and the precision of the best classifier in class $k$ respectively. \begin{table}[!ht] \centering \caption{{\bf The classwise preformance of the best classifiers.}} \begin{tabular}{|c|c|c|r|r|r|} \hline Class & Channel & Input & Acc & Prec & Rec \\ \hline Front wall& ch4 & $T_1$ & $0.98$ & $0.93$ & $0.88$ \\ Racquet & ch2 & $T_1$ & $0.94$ & $0.81$ & $0.81$ \\ Floor & ch4 & $T_2$ & $0.88$ & $0.53$ & $0.7$ \\ Glass & ch0 & $T_2$ & $0.88$ & $0.63$ & $0.5$\\ \hline \end{tabular} \label{table:bestresults} \end{table} Fig~\ref{fig:cl_labelled} depicts the combined output generated by the detector and the classifier modules. A 1.77~seconds long segment of channel 1 audio samples are grabbed from \emph{Audio 2}. Detections and resolved classes are also shown. From the snapshot one can observe the different intensities of the events. Generally the change in the ball's moment happens when a racquet or a front wall impacts and the sample amplitudes are higher, whereas floor and glass events tend to generate lower intensity and are harder to detect. \begin{figure}[!h] \centering \includegraphics[width=\textwidth]{beginnings.eps} \caption{{\bf Labelled audio signal.} 1.77~second long samples from channel ch1 in \emph{Audio 2}. Detected timestamps and the event classes are marked.} \label{fig:cl_labelled} \end{figure} \subsection{Localization Results} Based on the geometry of the court, the placement of the microphones and using the localization technique detailed in this study for each set of detection timestamps the 3-d position of the source of the event can be estimated. In case not all source channels provide a detection of the event localization is still possible. Four or more corresponding timestamps will yield a 3-d estimate, whereas with three timestamps the localization of events constrained on a surface (e.g. planes like wall or floor) remains possible. In Fig~\ref{fig:l_3d} the located events present in dataset \emph{Audio 1} are shown. In this measurement scenario the player was asked to hit different target areas on the front wall. It was a rapid exercise, as the ball was shot back at once. Only a few times the ball hit the floor, most of the sound is composed of alternating racquet and front wall events. In Fig~\ref{fig:l_mwall} the front wall events are shown. The target areas can be seen clearly, and also it is visible the spots scatter a little more on the left. The reason could be the player being right handed or the fact the target area was hit later during the experiment and the player showed tiredness. \begin{figure}[!h] \centering \includegraphics[width=.9\textwidth]{3d-events.eps} \caption{{\bf The position of impacts.} Visualize the localized events embedded in 3-d.} \label{fig:l_3d} \end{figure} \begin{figure}[!h] \centering \includegraphics[width=0.9\textwidth]{mainwall.eps} \caption{{\bf Front wall impacts.} Gray squares embrace the eight target areas.} \label{fig:l_mwall} \end{figure} Measuring the error of the localization method is not straight forward because the ball hitting the main wall does not leave a mark, where the impact happened and there was no means to take pictures of these events. Taking advantage of the geometry of the front wall an error metric can be defined for front wall events. The error $\delta$ is defined by the offset of the approximated location from the plane of the front wall. In Fig~\ref{fig:l_err} the error histogram is shown. The mean of $\delta$ should vanish and the smaller its variance the better the framework located the events. From this exercise one can read the standard deviation is $\sigma(\delta)<3$~cm, which is smaller then the size of the squash ball. \begin{figure}[!h] \centering \includegraphics[width=0.9\textwidth]{hist_off_mainwall.eps} \caption{{\bf The front wall offsets.} The distribution of the offset $\delta$ from the front wall ($\sigma(\delta)\approx 0.02$~m). } \label{fig:l_err} \end{figure} Another way to define the error is based on relying on human readings of the events. In the dataset \emph{Audio 1} all of the sound events were marked by human as well as by the detector algorithm. Localizing the events using both inputs the direct position difference can be investigated. The mean difference between the positions is 11.8~cm and their standard deviation is 39.9~cm. \section{Discussion} \label{Discussion} Our results support that in sports, where the relevant sound patterns are distinguishable, careful signal processing allows the localisation of shots. The described system is optimized for handling events and as a consequences the real-time analysis of data is possible, which is important to give an instant feedback. The framework can be extended to provide higher level statistics of events such as the evolution of shots types. From the wide range of possible applications we highlight three use cases. Firstly, during a match the players can get to know their precision in short time and if is necessary they can change their strategy. Secondly, during practice coaches can track the development of the players hit accuracy. Or thirdly, certain exercises can be defined, which can be automatically and objectively evaluated, without the need for the coach be present during the exercise. \section{Related work} \label{sec:relwork} Squash and soccer were the first sports to be analysed by ways of analysis systems. Formal scientific support for squash emerged at the late 1960s. The current applications of performance analysis techniques in squash are deeply investigated in the book of Stafford et al.~\cite{obe2016current}. One test that was developed by squash coach Geoffry Hunt is the ``Hunt Squash Accuracy Test'' (HSAT)~\cite{williams2014measuring}, that is a reliable method used by coaches to assess shot hitting accuracy. The test is composed of 375 shots across 13 different types of squash strokes and it is evaluated based on a total score expressed as the number of successful shots. Recent technological advances have facilitated the development of sport analytical software such as Dartfish video based motion analysis system~\cite{barris2008review,travassos2013performance}. However, these systems still require a considerable amount of professional assistance. To the best of our knowledge there is no previous research investigating the applicability of sound analysis techniques for squash performance analysis. \section*{Acknowledgements} The hardware components enabling this study are installed at Gold Center's squash court. We thank them for this opportunity and squash coach Shakeel Khan for the fruitful discussions. We thank the support of SmartActive project run by Ericsson Hungary Research and Development Center. \nolinenumbers
{ "attr-fineweb-edu": 2.90625, "attr-cc_en_topic": 0, "domain": "arxiv" }
BkiUdE3xK7Tt522WaQ6E
\section{Introduction}\label{intro} Within the last decades, Japan's economic growth has been greatly influenced by inbound tourism, creating new potential customers to local businesses across the country and touristic attractions. However, both natural disasters such as earthquakes, as well as the diminishing population in rural areas and its concentration in urban areas have led peripheral areas across the country to attempt regional revitalization via tourism \cite[][]{jones2009}. One of these regional revitalization projects, subsidized by the government after a successful social experiment in the early 1990's, was created as a way to establish strong links between road users and local communities. This was called 'Michinoeki', which stands for 'Roadside Station' in Japanese \cite[][]{yokota2006-a}. Michinoeki strives to act as a safe and comfortable space in which road travelers can refresh themselves (parking and restrooms); interact with local community and culture, tourist attractions and recreational activities and facilities; as well as provide travelers with relevant information such as maps, emergency care, et cetera. There is currently a network of \num[group-separator={,}]{1145} facilities of Michinoeki across different areas of Japan \cite[][]{michinoeki}. It is important then, in order to revitalize these areas that these facilities match the needs of travelers and tourists. One notorious example, based in the Kumamoto prefecture in the Kyushu area is 'Michinoeki Shichijo Melon Dome', pictured in Figure \ref{fig:melon}. The local product is, of course, melon, and they have used it to make melon taste 'Melon Pan' bread (which is named after its shape and not the taste), ice cream and also sold on its own. The place also offers other local souvenirs and products, aside from melon-based ones. \begin{figure}[htp] \centering \includegraphics[width=20em]{melon-dome.png} \caption{Michinoeki Shichijo Melon Dome, in Kukichi city, Kumamoto prefecture, Japan \protect\footnotemark} \label{fig:melon} \end{figure} \footnotetext{Shichijo-machi Special Product Center Ltd., Michinoeki Shichijo Melon Dome building, retrieved from \href {http://www.melondome.co.jp/stores_guide/img_stores/melondome_2.jpg}{\path{http://www.melondome.co.jp/stores_guide/img_stores/melondome_2.jpg}}} Studies pertaining to Michinoeki in Japan are scarce, but within the literary review for this study we found that previous studies, such as one involving the classification of Michinoeki in Hokkaido \cite[][]{ogawa2001}, are mostly based on small surveys. There are additionally studies focusing on the implementation of the idea of Michinoeki in different parts of Asia, such as one in Korea \cite[][]{lee2016}, and another in Vietnam and China \cite[][]{yokota2006-b}. In addition to studies on Michinoeki, there has been focus on the use of regional branding for the revitalization of rural areas by using the local brand farm products as a tourist attraction across Japan \cite[][]{jones2009,ohe2013,ohe2008-a,ohe2008-b}. However, in recent years, electronic word-of-mouth (eWOM) has become an important resource for analysis of marketing research and has increasingly been approached by researchers for many products and services \cite[][]{depelsmacker2018,chevalier2006,liu2006}. In addition to this new source of information, Machine Learning approaches, extracting vital information via text mining, among other methodologies of the information age are widely available and increasing in use as well \cite[e.g.][]{he2013,nonaka2012,oconnor2010,Aleman2018ICAROB,Horino2017IEEM,bollen2011,Aleman2017ISIS,nonaka2014icaicta,nonaka2014itmc,nonaka2013,nonaka2010,sakao2009}. These methodologies, when applied to larger databases, provide more trustworthy results than those of statistically small questionnaire samples which can be influenced by the inflexibility of previously posed questions for the customer base. Furthermore, many studies regardless of topic study correlation between the sampled values, leaving to consideration if there is also causation involved. Recently, a Non-Gaussian methodology was presented by Shimizu in 2014, called LiNGAM \cite[][]{shimizu2014}, which allows for a clearer view at the causal relationship between our data. Because of the importance of the subject of rural tourism to the revitalization of the economy across Japan, and the lack in use of better methodology in this field, we propose to use an Entropy-based Support Vector Machine Learning approach to classify and recognize different topics in eWOM related to Michinoeki extracted from Twitter, and then study the causal relationship between the amount of mentions in those topics and the sales of Michinoeki establishments in the Kyushu area of Japan. \section{Methodology}\label{methodology} \subsection{Word Segmentation}\label{segment} For an analysis to be made possible for each word, we segmented the collected Japanese texts without spaces into words using a Japanese morphological analyzer tool called MeCab \cite[][]{kudo2004}. After segmenting the words, we extracted only self-sufficient words. \subsection{Entropy Based Keyword Extraction}\label{entropy} Feature selection of our method is based on the Shannon's entropy (hereinafter referred as entropy) value \cite[][]{shannon1948} of each word. According to information theory, entropy is the expected value of the information content in a signal. Applying this knowledge to the study of words allows us to observe the probability distribution of any given word inside the corpus. For example, a word that keeps reappearing in many different documents will have a high entropy, while a word that only was used in a single text and not in any other documents in the corpus will bear an entropy of zero. This concept is shown in Figure \ref{fig:entropygraphs}. Having previously tagged a sample of texts positive and negative by pertinence to each category, if a word has higher entropy in positive documents than in negative documents by a factor of alpha greater than 1 (\(\alpha > 1\)), then it means its probability distribution is more spread in positive texts, meaning that it is commonly used in positive tagged documents compared to negative ones. \begin{figure}[h] \centering \begin{subfigure}[b]{0.4\linewidth} \includegraphics[width=\linewidth]{entropyzero.png} \caption{Entropy close to zero.} \end{subfigure} \begin{subfigure}[b]{0.4\linewidth} \includegraphics[width=\linewidth]{entropyhigh.png} \caption{High entropy.} \end{subfigure} \caption{Probabilities of a word \(j\) being contained in a document \(i\).} \label{fig:entropygraphs} \end{figure} To calculate the entropy in a set of documents, for each word \(j\) that appears in each document \(i\), we counted the number of times a word appears in positive comments as \(N_{ijP}\), and the number of times a word appears in negative comments as \(N_{ijN}\). Then, as shown in the formulas below, we calculated the probability of each word appearing in each document shown below as \(P_{ijP}\) (\ref{eq:PijP}) and \(P_{ijN}\) (\ref{eq:PijN}). \begin{equation}\label{eq:PijP} P_{ijP} = \frac{N_{ijP}}{\sum_{i=1}^M N_{ijP}} \end{equation} \begin{equation}\label{eq:PijN} P_{ijN} = \frac{N_{ijN}}{\sum_{i=1}^M N_{ijN}} \end{equation} We then substitute these values in the formula that defines Shannon's Entropy. We calculated the entropy for each word \(j\) in relation to positive documents as \(H_{Pj}\) (\ref{eq:Hpj}), and the entropy for each word \(j\) in relation to negative texts as \(H_{Nj}\) (\ref{eq:Hnj}). That is, all instances of the summation when the probabilities \(P_{ijP}\) or \(P_{ijN}\) are zero and the logarithm of these becomes undefined are substituted as zero into (\ref{eq:Hpj}) and (\ref{eq:Hnj}). \begin{equation}\label{eq:Hpj} H_{Pj} = - \sum_{i=1}^M [P_{ijP}\log_2 P_{ijP}] \end{equation} \begin{equation}\label{eq:Hnj} H_{Nj} = - \sum_{i=1}^M [P_{ijN}\log_2 P_{ijN}] \end{equation} After calculating the positive and negative entropies for each word, we measured their proportion using the mutually independent coefficients \(\alpha\) for positive keywords and \(\alpha'\) for negative keywords, for which we applied several values experimentally. A positive keyword is determined when (\ref{eq:entropy_pos}) is true. \begin{equation}\label{eq:entropy_pos} H_{Pj} > \alpha H_{Nj} \end{equation} \subsection{Topic Classification Using SVM}\label{topics} In machine learning, Support Vector Machines are supervised learning models commonly used for statistical classification or regression \cite[][]{cortes1995}. We implemented this theory in Python using the Support Vector Classifier (SVC) included in the library \textit{scikit}-learn. We also used the mathematics library \textit{numpy}. To evaluate each of our trained machines, we used the K-fold Cross Validation method, which has been proven to provide good results. To evaluate each of our trained machines, we calculated then the Precision, Recall and \(F_1\)-Score values for our predictions. \subsection{Causal Relationship Analysis with LiNGAM} Most previous studies used a linear correlation model to analyze tourism data \cite[e.g.][]{deng2007,koberl2016}, and other traditional methods like the multiple variable regression analysis \cite[][]{thomas2017}, while others have used improved regression models such as the quantile regression to attempt to overcome deficiencies of simple linear correlation models \cite[][]{brida2017}. Correlation suggests an association between two variables. On the other hand, causality shows that one variable directly effects a change in the other. For analysis of tourism industry, it is important to detect cause of sales. Causal structure models have been studied for some time, and there are examples of tourism analysis analyzing predefined causal structures with SEM (Structural Equation Models) \cite[][]{pappas2017}. LiNGAM is currently a widely used model to discover causal structures of continuous-valued data without the need of defining them, under the assumptions that "the data generating process is linear", "there are no unobserved confounders", and "disturbance variables have non-Gaussian distributions of non-zero variances". The LiNGAM model is as follows: \begin{equation}\label{eq:lingam_long} x_i = \sum_{k(j)<k(i)} b_{ij}x_{j} + e \end{equation} Where \(k\) represents the causal order of each variable, and in which we are only considering causal orders in the desired direction. This model can also be expressed as: \begin{equation}\label{eq:lingam1} \vec{x} = \vec{B}\vec{x} + \vec{e} \end{equation} Where the matrix \(\vec{x}\) is comprised of all the measured variables, including in our case the number of tweets and the sales profits; \(e_i\) are continuous latent variables that are exogenous contained in the matrix \(\vec{e}\), and \(b_{ij}\) are the connection strengths from \(x_j\) to \(x_i\). If \(b_{ij}\) is not equal to 0, \(j\) is cause of \(i\). Contrary to this, if \(b_{ij}\) is equal to 0, there is no causality between \(i\) and \(j\), since \(x_i\) would only be comprised of exogenous values (i.e. \(x_i = e_i\)). The matrix \(\vec{B}\) contains all the strength constants and therefore, we must identify it to detect causal structure. Formulation (\ref{eq:lingam2}) can be modified, \begin{equation}\label{eq:lingam2} \vec{x} = (\vec{I} - \vec{B})^{-1} \vec{e} \end{equation} In order to calculate the matrix \(\vec{B}\), \(\vec{x}\) is expressed in the form of the Independent Component Analysis, or ICA \cite[][]{jutten1991,hyvarinen2001} as follows: \begin{equation}\label{eq:lingam3} \vec{x} = \vec{A}_{ICA}\vec{s} \end{equation} Where the ICA matrix \(\vec{A}_{ICA}\) collects the coefficients \(a_{ij}\), and \(\vec{s}\) collects the independent components \(s_j\) respectively. However, the output ICA matrix might result in different permutations at the time of calculation. Based on the ICA matrix \(\vec{A}_{ICA}\), in order to identify a mixing matrix \(\vec{A}\) such that \begin{equation}\label{eq:lingam4} \vec{A} = (\vec{I} - \vec{B})^{-1} \end{equation} The matrix \(\vec{A}_{ICA}\) is expressed as \begin{equation}\label{eq:lingam5} \vec{A}_{ICA} = (\vec{I} - \vec{B})^{-1} \vec{P}\vec{D} = \vec{A}\vec{P}\vec{D} \end{equation} Where \(\vec{P}\) is an unknown permutation matrix and \(\vec{D}\) is an unknown diagonal matrix with no zeros on the diagonal. The separating matrix \(\vec{W}\) is defined as \begin{equation}\label{eq:lingam5} \vec{W} = \vec{A}^{-1} = \vec{I} - \vec{B} \end{equation} Following this, the separating matrix \(\vec{W}\) is estimated up to the permutation \(\vec{P}\), and scaling and sign \(\vec{D}\) of the rows. \begin{equation}\label{eq:lingam6} \vec{W}_{ICA} = \vec{P}\vec{D}\vec{W} = \vec{P}\vec{D} \vec{A}^{-1} \end{equation} However, in LiNGAM, the correct permutation matrix \(\vec{P}\) can be found \cite[][]{shimizu2006}: the correct \(\vec{P}\) is the only one that contains no zeros in the diagonal of \(\vec{D}\vec{W}\), since \(\vec{B}\) should be a matrix that can be permuted to become lower triangular with all zeros on the diagonal and \(\vec{W} = \vec{I} - \vec{B}\). Furthermore, the correct scaling and signs of the independent components can be determined by using the unity on the diagonal of \(\vec{W} = \vec{I} - \vec{B}\). To obtain \(\vec{W}\) it is only necessary to divide the rows of \(\vec{D}\vec{W}\) by its corresponding diagonal elements. Finally, the connection strength matrix \(\vec{B} = \vec{I} - \vec{W}\) may be computed. \section{Experiment Results}\label{experiments} \subsection{Multi-label Topic Classification}\label{exp_topics} Before the analysis using Support Vector Machines, we defined eight topics by manually. In the sample of tweets related to Michinoeki, we noticed most of the tweets could be classified in these eight topics, with multiple labels in some cases, which are shown in Table 1. We made the training data by sampling 1000 posts and classifying them manually into eight topics. We then calculated the entropy value for each word in each category. With an alpha value of 2, (\(\alpha=2\)) we assigned a word as a keyword for a category only if the entropy value for that category was more than twice the entropy values of all other categories for that word. An exception was made for Topic 5: Check-in, for which the keyword extraction process was done heuristically, choosing keywords such as "I'm at", indicating only that the user had gone to that particular Michinoeki, which had sufficiently good results. The topic content and example keywords for each topic obtained from cross comparison of their Entropy values across categories are shown in Table \ref{tab:topics}. \begin{table}[htp] \centering \caption{Topic Classification Content.} \label{tab:topics} \begin{tabular}{|c|l|m{16em}|} \hline \rowcolor[HTML]{C0C0C0} Topic ID & \multicolumn{1}{c|}{\cellcolor[HTML]{C0C0C0}Topic content} & Example keywords \\ \hline Topic 1 & Products and Services & shop; set meal; popular; chocolate; ice cream; soft serve; café; \\ \hline Topic 2 & Special Events & exhibition; event; illumination lighting ceremony; Christmas tree \\ \hline Topic 3 & Promotional & N/A(Determined by related service company twitter accounts list) \\ \hline Topic 4 & Traffic and Weather & national highway; attention; road traffic information \\ \hline Topic 5 & Check-in & I'm at; in; Location \\ \hline Topic 6 & Positive Reviews & delicious; Instagram-able; cute; cheap; ate; happy; (*{\textasciiacute}\textsuperscript{\textomega}\textasciigrave*) \\ \hline Topic 7 & Motorcycles & refueling; bike; meeting place; yaeya sticker \\ \hline Topic 8 & Unrelated and Others & JR; train; wagon; rotary; west exit; ride \\ \hline \end{tabular} \end{table} Because all of the tweets related to Topic 3: Promotional are from official accounts, we categorized these automatically using the usernames without the need to train an SVM or extract entropy-based keywords. Regarding the Topic 9: Unrelated and Others, because of the way we extracted the tweets related to 'Michinoeki' which translates to 'Roadside Stations', there were cases where actual railroad stations were mentioned instead. We filtered these cases using entropy based SVM as well. Topic 6 describes positive reviews, which we had found differ from other topics; however, little to no negative reviews were found and thus not enough data was available to train a classifier for negative reviews. To evaluate each of our trained machines, we calculated the \(F_1\)-Score for each using a K-fold Cross Validation methodology, calculating the \(F_1\)-Score from the values of Precision and Recall calculations. The \(F_1\)-Score values calculated for each SVM are shown in Table \ref{tab:scores}. \begin{table}[htp] \centering \caption{Results of the \(F_1\)-Score in each topic.} \label{tab:scores} \begin{tabular}{|c|l|l|} \hline \rowcolor[HTML]{C0C0C0} Topic ID & \multicolumn{1}{c|}{\cellcolor[HTML]{C0C0C0}Topic content} & \(F_1\)-Score \\ \hline Topic 1 & Products and Services & 0.85 \\ \hline Topic 2 & Special Events & 0.81 \\ \hline Topic 4 & Traffic and Weather & 0.88 \\ \hline Topic 5 & Check-in & 0.78 \\ \hline Topic 6 & Positive Reviews & 0.79 \\ \hline Topic 7 & Motorcycles & 0.87 \\ \hline Topic 8 & Unrelated and Others & 0.78 \\ \hline \end{tabular} \end{table} By searching by the names of Michinoeki establishments plus the keyword "michinoeki" in Japanese, we collected a total of \num[group-separator={,}]{111142} tweets related to Michinoeki, of which \num[group-separator={,}]{9264} were related to 94 different Michinoeki establishments across the Kyushu area of Japan for which we had sales data. We then automatically classified posts into one of the eight topics shown on above by using Support Vector Machines in a hierarchical manner. We first classified the tweets belonging to official accounts automatically to Topic 3: Promotional; then classified tweets from Topic 5: Check-In for their particular structure; followed by classification of Topics 4, 7 and 8, which are mostly about external factors and topics unrelated directly with Michinoeki content; finally proceeding to determine positive pertinence to each remaining topic by their respective SVM in order of their \(F_1\)-Score. This heuristic hierarchical binary classification method is shown in Figure \ref{fig:hierarchical}. \begin{figure}[htp] \centering \includegraphics[width=25em]{hierarchical.png} \caption{Hierarchical topic classification method.} \label{fig:hierarchical} \end{figure} \subsection{LiNGAM Analysis Results}\label{res_lingam} To evaluate the causal relationship between the number of tweets for each of the topics related to Michinoeki as \(x_i\) and the sales of Michinoeki establishments in the Kyushu area of Japan as y, we applied the LiNGAM causal structure analysis to the classified data from the \num[group-separator={,}]{9264} tweets that were matched to those establishments. Results are shown in Table \ref{tab:lingam}. \begin{table}[htp] \centering \caption{Results of the LiNGAM causality analysis.} \label{tab:lingam} \begin{tabular}{|l|l|l|l|} \hline \rowcolor[HTML]{C0C0C0} \multicolumn{1}{|c|}{\cellcolor[HTML]{C0C0C0}Topic ID} & \multicolumn{1}{c|}{\cellcolor[HTML]{C0C0C0}Topic content} & \multicolumn{1}{c|}{\cellcolor[HTML]{C0C0C0}\begin{tabular}[c]{@{}c@{}}Connection \\ Strength\end{tabular}} & \multicolumn{1}{c|}{\cellcolor[HTML]{C0C0C0}\begin{tabular}[c]{@{}c@{}}Causality\\ Direction\end{tabular}} \\ \hline Topic 1 & Products and Services & 11095.95 & \(x_1 \rightarrow y\) \\ \hline Topic 2 & Special Events & 515.17 & \(x_2 \rightarrow y\) \\ \hline Topic 3 & Promotional & 6738.08 & \(x_3 \rightarrow y\) \\ \hline Topic 4 & Traffic and Weather & -231845.95 & \(x_4 \rightarrow y\) \\ \hline Topic 5 & Check-in & 1724.36 & \(x_5 \rightarrow y\) \\ \hline Topic 6 & Positive Reviews & 15387.70 & \(x_6 \rightarrow y\) \\ \hline Topic 7 & Motorcycles & -7770.03 & \(x_7 \rightarrow y\) \\ \hline \end{tabular} \end{table} \section{Discussion}\label{discussion} According to the LiNGAM causal structure analysis results, the number of tweets mentioning products and services of Michinoeki establishments, positive reviews of the establishment and their products, as well as promotional tweets published by official accounts and mentions of special events all show a causal relationship with the sales of those establishments. This positive causal relationship is thought to be the influence that twitter mentions have to attract more new customers. As for the tweets from Topic 5: Check-in, there is a clear direct influence, since those users are not only stating that they actually visited and purchased from those establishments, but they are also giving free promotion to the users that follow them. Tweets mentioning weather and traffic conditions were observed in our sample to be of mostly expressing inconvenience, which, as is shown by the negative causal constant, has a negative effect on the number of customers that venture to the establishment during those hours, limiting its profit. Tweets in this category also included many complaints about access to the establishments. This shows an opportunity of investment to make clearer signs or routes of access for the affected establishments, as well as marketing campaigns (perhaps using Twitter as well) making their location and access routes more known. One thing to note is that under inspection of our data, many of the places where there are positive reviews of ice cream and soft serve products are leaders in profitability compared to other establishments. Local ingredients, as well as unique recipes are the main focus of Michinoeki establishments in general, but there could be differences in influence for different kinds of products and specialties. Lastly, there is the particular case of the tweets by motorcycle drivers. While Michinoeki establishments strive to be a connection point for road travelers and tourists alike, many bikers use the stations as meeting points only, not contributing to the sales of those establishments. It is necessary in these cases to revise the services provided, such as gasoline stations, for example, so that this untapped customer base can be turned to a positive influence in the future. Another possible strategy could be group campaigns. Most examples in our data of motorcycle drivers were using the place as a meeting point with other motorcycle drivers. Group campaigns or discounts could very well increase their patronage as well as attract more customers in general. \section{Conclusion and Future Work}\label{conclusion} We found that the tweets related to Michinoeki could be classified into eight topics with a well performing hierarchical and heuristic approach for multi-class classification using binary SVM classifiers; for which the feature vectors were extracted heuristically in one case, and mathematically in all the other cases by using an entropy-based keyword extraction method. All of the SVM based classifications performed with an \(F_1\)-Score above 0.78, and the highest performing classifier was that of the Topic 4: Traffic and Weather. Under the assumptions posed by the LiNGAM model ("the data generating process is linear", "there are no unobserved confounders", and "disturbance variables have non-Gaussian distributions of non-zero variances"), we found a causal relationship for all topics and found that most tweets, especially ones praising products, or promoting special events, have a positive influence in sales, with the exception of traffic and weather, and motorcycle travelers, which might be an unexplored market by Michinoeki establishments. However, in future work we will further investigate the assumptions posed by the LiNGAM model in regard to the structure of the data. In future work we will study the influence of specific products, their different marketing strategies across different establishments and their relation to their profits in order to discover potential strategies to increase profit in all Michinoeki establishments. \section*{ACKNOWLEDGMENTS} This research was supported by Japan Construction Information Center Foundation (JACIC). \section*{REFERENCES}
{ "attr-fineweb-edu": 2.279297, "attr-cc_en_topic": 0, "domain": "arxiv" }
BkiUdobxK0wg09FXWFWF
\section{Introduction} \label{sec:introduction} The New York Yankees and Houston Astros played each other in the American League Wild Card game in October 2015, with the winner continuing to the next round of the Major League Baseball playoffs. During and immediately after the game, several Yankees fans took to social media expressing frustration that home plate umpire Eric Cooper was not calling balls and strikes consistently for both teams, thereby putting the Yankees at a marked disadvantage. Even players in the game took exception to Cooper's decision making: after striking out, Yankees catcher Brian McCann argued with Cooper that he was calling strikes on similar pitches when the Astros were pitching but balls when the Yankees were pitching. Figure~\ref{fig:keuchel_tanaka} shows two pitches thrown during the game, one by Astros pitcher Dallas Keuchel and the other by Yankees pitcher Masahiro Tanaka. \begin{figure}[!h] \centering \includegraphics{keuchel_tanaka2.jpg} \caption{Both pitches missed the strike zone (outlined in red) and by rule, should have been called balls. Keuchel's pitch (left) was called a strike while Tanaka's pitch (right) was called a ball. Screenshot source: http://www.fangraphs.com/blogs/how-the-astros-wound-up-with-a-bigger-zone/} \label{fig:keuchel_tanaka} \end{figure} Both pitches were thrown in roughly similar locations, near the bottom-left corner of the \textit{strike zone}, the rectangular region of home plate shown in the figure. According to the official rules, if any part of the pitched ball passes through the strike zone, the umpire ought to call it a strike. Keuchel's pitch barely missed the strike zone while Tanaka's missed by a few inches. As a result, the umpire Cooper ought to have called both pitches a ball. That Cooper did not adhere strictly to the official rules is hardly surprising; previous research has shown umpires' ball/strike decisions may be influenced by the race or ethnicity of the pitcher \citep[see, e.g.,][]{Parsons2011, TainskyMillsWinfree2015}, player status as measured by age or ability \citep[see, e.g.,][]{KimKing2014, Mills2014}, and their previous calls \citep{ChenMoskowitzShue2016}. During the television broadcast of the game, the announcers speculated that the difference in Cooper's strike zone enforcement was the ability of Astros catcher Jason Castro to ``frame'' pitches, catching them in such a way to increases Cooper's chance of calling a strike \citep{Sullivan2015}. Though pitch framing has received attention from the sabermetrics community since 2008, it has generated tremendous interest in the popular press \citep[see, e.g.,][]{Lindbergh2013, Pavlidis2014, Sanchez2015} and among team officials \citep[see, e.g.,][]{Drellich2014, Holt2014} in the last three or four years, due to its apparently large impact on team success. According to \citet{Woodrum2014}, most studies of framing, including the most recent by \citet{JudgePavlidisBrooks2015} for the website Baseball Prospectus, estimate that a good framer can, on average, save his team as many as 25 runs more than the average catcher, over the course of the season. By the traditional heuristic of 10 average runs per win \citep{Cameron2008}, these results suggest that the way a good framer catches a few pitches per game may be worth as many as an additional 2 to 3 wins, relative to the average catcher. Despite the ostensibly large impact framing may have on team success, framing itself has been overlooked and undervalued until just recently. \citet{Sanchez2015} highlights the catcher Jonathan Lucroy, whose framing accounted for about 2 wins in the 2014 and worth about \$14M, writing that ``the most impactful player in baseball today is the game's 17$^{th}$ highest-paid catcher.'' Returning to the two pitches in Figure~\ref{fig:keuchel_tanaka}, Cooper may have been more likely to call the Keuchel pitch a strike because of Castro's framing. However, looking carefully at Figure~\ref{fig:keuchel_tanaka}, we see that the two pitches are quite different, making it difficult to immediately attribute the difference in calls to Castro. First, Keuchel's pitch is much closer to the displayed strike zone than Tanaka's and it was thrown in a 1 -- 0 count while Tanaka's was thrown in a 1 -- 1 count. We also note that the batters, catchers, and pitchers involved in each pitch are, necessarily, different. In particular, Keuchel is a left-handed pitcher and Tanaka is a right-handed pitcher. Any of these factors may have contributed to Cooper being more likely to call Keuchel's pitch a strike. Of course, it could also be the case that Cooper was equally likely to call both pitches a strike and the different calls are simply due to noise. This raises questions: what effect did Castro have on Cooper's called strike probability, over and above factors like the pitch location, count, and the other pitch participants? And what impact does such an effect have on his team's success? Existing attempts to answer these questions fall broadly into two categories: those that do not fit statistical models of the called strike probability and those that do. The first systematic study of framing \citep{Turkenkopf2008} falls into the former category. For each catcher, he counts the number of strikes called on pitches thrown outside an approximate strike zone introduced by \citet{Walsh2007}. \citet{Turkenkopf2008} then took the counts of ``extra strikes'' received by each catcher and converted them into a measure of runs saved using his own valuation of 0.16 runs saved per strike. Missing from this analysis, however, is any consideration of the other players and the umpire involved in the pitch, as well as the context in which the pitch was thrown (e.g. count, run differential, inning, etc.). This omission could overstate the apparent impact of framing since it is not immediately clear that a catcher deserves all of the credit for an extra strike. More recently, \citet{RosalesSpratt2015} proposed an iterative method to distribute credit for a called strike among the batter, catcher, pitcher, and umpire. Unfortunately, many aspects of their model remain proprietary and thus, the statistical properties of their procedure are unknown. The second broad category of framing studies begins by fitting a statistical model of the called strike probability that accounts for the above factors. Armed with such a model, one then estimates the predicted called strike probability with and without the catcher. The difference in these probabilities reflects the catcher's apparent framing effect on that pitch. One then estimates the impact of framing by weighting these effects by the value of ``stealing a strike'' and summing over all pitches caught by a catcher. \citet{Marchi2011} fit a mixed-effects logistic regression model, expressing the log-odds of a called strike as a function of the identities of the pitch participants and interactions between them. This model does not systematically incorporate pitch location, meaning that the resulting estimates of framing effects are confounded by the location just like \citet{Turkenkopf2008}'s. To our knowledge, the most systematic attempt to study framing to date is \citet{JudgePavlidisBrooks2015}. They introduce a mixed-effects probit regression model built in two stages: first, they estimate a baseline called strike probability using a proprietary model that accounts for location, count, handedness of batter, and ballpark. They then fit a probit regression model with a fixed effect for this baseline estimate and random effects for the pitch participants. Underpinning their model is the curious assumption that the probit transformed called strike probability is \textit{linear} in the baseline probability estimate. This assumption can over-leverage pitches with baseline probabilities close to 0 or 1 (e.g. pitches straight over home plate or several inches outside the strike zone) by arbitrarily inflating the associated intercept and slopes in the final probit model. This can potentially result in highly unstable parameter estimates. Both \citet{JudgePavlidisBrooks2015}'s and \citet{Marchi2011}'s models unrealistically assume umpires differ only in some base-rate of calling strikes and that effect of factors like pitch location, count influence, and players is constant across umpires. In light of this, we will proceed by fitting a separate model for each umpire. Before proceeding, we introduce some notation. For a given taken pitch, let $y = 1$ if it is called a strike and let $y = 0$ if it is called a ball. Let $\mathbf{b}, \mathbf{ca}, \mathbf{co}, \mathbf{p}$ and $\mathbf{u}$ be indices corresponding to the batter, catcher, count, pitcher, and umpire for that pitch. Further, let $x$ and $z$ be the horizontal and vertical coordinates of the pitch as it crosses the front plane of home plate, respectively. To accommodate a separate model for each umpire $u$ we introduce vectors $\Theta^{u,B}, \Theta^{u,CA}, \Theta^{u,P},$ and $\Theta^{u,CO}$ to hold the \textit{partial effect} of each batter, catcher, count, and pitcher, respectively, on umpire $u$'s likelihood to call a strike. For each umpire $u,$ we introduce a function of pitch location, $f^{u}(x,z)$, that we will specify in more detail later. At a high level, we model \begin{equation} \log{\left(\frac{\mathbb{P}(y = 1)}{\mathbb{P}(y = 0)}\right)} = \Theta^{\mathbf{u},B}_{\mathbf{b}} + \Theta^{\mathbf{u},CA}_{\mathbf{ca}} + \Theta^{\mathbf{u},P}_{\mathbf{p}} + \Theta^{\mathbf{u},CO}_{\mathbf{co}} + f^{\mathbf{u}}(x,z) \label{eq:general_model} \end{equation} We leverage high-resolution pitch tracking data from the PITCHf/x system, described briefly in Section~\ref{sec:pitchfx}, to estimate how much a catcher influences umpires' chances of calling strikes and how large an impact such effects have on his team's success. In Section~\ref{sec:models}, we introduce several simplifications of the model in Equation~\ref{eq:general_model} that still elicit umpire-to-umpire heterogeneity. All of these models are fit in a hierarchical Bayesian framework, which provides natural uncertainty quantification for our framing estimates. Such quantification, notably absent in previous framing studies, is vital, considering the fact that several teams are making framing-based roster decisions \citep{Drellich2014, Holt2014, Sanchez2015}. We compare the predictive performances of these models in Section~\ref{sec:model_comparison} and assess the extent to which incorporating umpire-specific count and player effects lead to overfitting. We then translate our estimates of catcher effects from the log-odds scale to the more conventional scale of average runs saved. We introduce two metrics in Section~\ref{sec:framing_impact} to estimate the impact framing has on team success. We conclude with a discussion and outline several potential extensions of our modeling efforts. \section{Data and Model} \label{sec:data_model} We begin this section with a brief overview, adapted primarily from \citet{Fast2010} and \citet{SidhuCafo2014}, of our pitch tracking dataset before introducing the hierarchical logistic regression model used to estimate each umpire's called strike probability. \subsection{PITCHf/x Data} \label{sec:pitchfx} In 2006, the TV broadcast company Sportvision began offering the PITCHf/x service to track and digitally record the full trajectory of each pitch thrown using a system of cameras installed in major league ballparks. During the flight of each pitch, these cameras take 27 images of the baseball and the PITCHf/x software fits a quadratic polynomial to the 27 locations to estimate its trajectory \citep{SidhuCafo2014}. This data is transmitted to the MLB Gameday application, which allows fans to follow the game online \citep{Fast2010}. In addition to collecting pitch trajectory data, an MLB Advanced Media employee records game-state information during each pitch. For instance, he or she records the pitch participants (batter, catcher, pitcher, and umpire) as well as the outcome of the pitch (e.g, ball, swinging strike, hit), the outcome of the at-bat (e.g. strikeout, single, home run), and any other game action (e.g. substitutions, baserunners stealing bases). The PITCHf/x system also reports the approximate vertical boundaries of the strike zone for each pitch thrown. Taken together, the pitch tracking data and game-state data provide a high-resolution pitch-by-pitch summary of the game, available through the MLB Gameday API. Though our main interest in this paper is to study framing effects in the 2014 season, we collected all PITCHf/x data from the 2011 to 2015 regular season. In Section~\ref{sec:pitch_location}, we use the data from the 2011 -- 2013 seasons to select the function of pitch location $f^{u}(x,z)$ from Equation~\ref{eq:general_model}. We then fit our model using the 2014 data and in Section~\ref{sec:model_comparison}, we assess our model's predictive performance using data from 2015. In the 2014 season, there were a total of 701,490 pitches, of which 355,293 (50.65\%) were \textit{taken} (i.e. not swung at) and of these taken pitches, 124,642 (35.08\%) were called strikes. Rather than work with all of the taken pitches, we restrict our attention to those pitches that are ``close enough'' to home plate to be ``frameable.'' More precisely, we first approximate a crude ``average rule book strike zone'' by averaging the vertical strike zone boundaries recorded by the PITCHf/x system across all players and all pitches, and then focused on the $N = 308,388$ taken pitches which were within one foot of this approximate strike zone. In all, there were a total of $n_{U} = 93$ umpires, $n_{B} = 1010$ batters, $n_{C} = 101$ catchers, and $n_{P} = 719$ pitchers. \subsection{Adjusting for Pitch Location} \label{sec:pitch_location} Intuitively, pitch location is the main driver of called strike probability. The simplest way to incorporate pitch location into our model would be to include the horizontal and vertical coordinates $(x,z)$ recorded by the PITCHf/x system as linear predictors so that $f^{u}(x,z) = \theta^{u}_{x}x + \theta^{u}_{z}z,$ where $\theta^{u}_{x}$ and $\theta^{u}_{z}$ are parameters to be estimated. While simple, this forces an unrealistic left-to-right and top-to-bottom monotonicity in the called strike probability surface. Another simple approach would be to use a polar coordinate representation, with the origin taken to be the center of the approximate rule book strike zone. While this avoids any horizontal or vertical monotonicity, it assumes that, all else being equal, the probability of a called strike is symmetric around this origin. Such symmetry is not observed empirically, as seen in Figure~\ref{fig:heatMap_2011_2013}, which divides the plane above home plate into 1" squares whose color corresponds to the proportion of pitches thrown in the three year window 2011 -- 2013 that pass through the square that are called strikes. Also shown in Figure~\ref{fig:heatMap_2011_2013} is the average rule book strike zone, demarcated with the dashed line, whose vertical boundaries are the average of the top and bottom boundaries recorded by the PITCHf/x system. If the center of the pitch passes through the region bound by the solid line, then some part of the pitch passes through the approximate strike zone. This heat map is drawn from the umpire's perspective so right handed batters stand to the left of home plate (i.e. negative X values) and left-handed batters stand to the right (i.e. positive X values). We note that the bottom edge of the figure stops 6 inches off of the ground and the left and right edges end 12 inches away from the edges of home plate. Typically, batters stand an additional 12 inches to the right or left of the region displayed. Interestingly, we see that the empirical called strike probability changes rapidly from close to 1 to close to 0 in the span of only a few inches. \begin{figure}[!h] \centering \includegraphics{heatMap_2011_2013.jpg} \caption{Heat map of empirical called strike probabilities, aggregate over the three-year window 2011 -- 2013. The boundary of the approximate 2014 rule book strike zone is shown in dashed line. If the center of the pitch passes through the region bounded by the solid line, some part of the pitch passes through the approximate strike zone. Red = 100\% called strike probability, white = 50\%, and blue = 0\%.} \label{fig:heatMap_2011_2013} \end{figure} Rather than specifying a explicit parametrization in terms of the horizontal and vertical coordinates, we propose using a smoothed estimate of the historical log-odds of a called strike as an implicit parametrization of pitch location. This is very similar to the model of \citet{JudgePavlidisBrooks2015}, who included the estimated called strike probability as a covariate in their probit model. Figure~\ref{fig:pitch_location_handedness} plots the spatial distribution of taken pitches broken down by the batter and pitcher handedness. Once again, the plots are drawn from the umpires' perspective so that a right handed batter stands to the left side of the figure and vice versa. We see immediately that the spatial distribution of taken pitches varies considerably with the combination of batter and pitcher handedness. When the batter and pitcher are of the same handedness, we see a decidedly higher density of ``low and outside'' pitches near the bottom corner of the average rule book strike furthest away from the batter. In contrast, in the matchup between left-handed batters and right-handed pitchers, we see a higher density of pitches thrown to the outside edge of the strike zone further away from the batter. The differences in spatial distribution of pitches seen in Figure~\ref{fig:pitch_location_handedness} motivate us to use a separate smoothed estimate of the historical log-odds of a called strike for each combination of batter and pitcher handedness. Inspired by \citet{Mills2014}, we fit generalized additive models with a logistic link to the data aggregated from 2011 -- 2013, one for each combination of pitcher and batter handedness, hereafter referred to as the ``hGAMs'' or ``historical GAMS.'' These models express the log-odds of a called strike as a smooth function of the pitch location. Figures~\ref{fig:hgam_sz_plots} shows the hGAM forecasted called strike probabilities. Interestingly, we see that for right handed pitchers, the corresponding hGAMs called strike probability surfaces very nearly align with the average rule book strike zone. For left-handed pitchers, however, the hGAMs forecast a high called strike probability several inches to the left of the average rule book strike zone. This is perhaps most prominent for the matchup between right-handed batters and left-handed pitchers. For each taken pitch in 2014 dataset, we used the appropriate hGAM to estimate the historical log-odds that the pitch was called a strike. We then use these estimates as continuous predictors in our model, so that potential player effects and count effects may be viewed as adjustments to these historical baselines. \begin{figure}[!h] \centering \includegraphics{pitch_location_handedness.jpg} \caption{Kernel density estimate of pitch location based on batter and pitcher handedness. Figures drawn from the umpires perspective so right-handed batters stand to the left of the displayed strike zone. Darker regions correspond to a higher density of pitches thrown to those locations.} \label{fig:pitch_location_handedness} \end{figure} \begin{figure}[!h] \centering \includegraphics{hgam_sz_plots.jpg} \caption{hGAM forecasts based on batter and pitcher handedness. Red = 100\% called strike probability, white = 50\%, and blue = 0\%.} \label{fig:hgam_sz_plots} \end{figure} \subsection{Bayesian Logistic Regression Models} \label{sec:models} Before fully specifying our models, we label the 93 umpires $u_{1}, \ldots, u_{93}.$ Consider the $i^{th}$ called pitch and let $y_{i} = 1$ if it is called a strike and $y_{i} = 0$ if it is called a ball. Let $h_{i}$ be a vector of length four, encoding the combination of batter and pitcher handedness on this pitch and let $\textbf{LO}_{i}$ be a vector of length four, containing three zeros and the estimated log-odds of a strike from the appropriate historic GAM based on the batter and pitcher handedness. Letting $x_{i}$ and $z_{i}$ denote the PITCHf/x coordinates of this pitch, we take $f^{u}(x_{i}, z_{i}) = h_{i}^{\top}\Theta^{u}_{0} + \textbf{LO}_{i}^{\top}\Theta^{u}_{LO},$ where $\Theta^{u}_{LO}$ is a vector of length four recording the partial effect of location and $\Theta^{u}_{0}$ is a vector of length four containing an intercept term, one for each combination of batter and pitcher handedness. Finally, let $u(i)$ denote which umpire called this pitch. To place all of the variables in our model on roughly similar scales, we first re-scale the corresponding historical GAM estimates for each combination of batter and pitcher handedness to have standard deviation 1. Finally let $\textbf{CO}_{i}, \textbf{CA}_{i}, \textbf{P}_{i}$ and $\textbf{B}_{i}$ be vectors encoding the count, catcher, pitcher, and batter involved with this pitch, and let $\Theta^{u}_{CO}, \Theta^{u}_{CA}, \Theta^{u}_{P},$ and $\Theta^{u}_{B}$ be vectors containing the partial effect of count, catcher, pitcher, and batter on umpire $u.$ For identifiability, we specify a single catcher, Brayan Pena, and count, 0 -- 0, as baseline values. We can re-write the model from Equation~\ref{eq:general_model} as $$ \log{\left(\frac{\mathbb{P}(y_{i} = 1)}{\mathbb{P}(y_{i} = 0)}\right)} = h_{i}^{\top}\Theta^{u(i)}_{0} + \textbf{LO}_{i}^{\top}\Theta^{u(i)}_{LO} + \textbf{CO}_{i}^{\top}\Theta^{u(i)}_{CO} + \textbf{CA}_{i}^{\top}\Theta^{u(i)}_{CA} + \textbf{P}_{i}^{\top}\Theta^{u(i)}_{P} + \textbf{B}_{i}^{\top}\Theta^{u(i)}_{B} $$ We are now ready to present several simplifications of this general model of gradually increasing complexity. We begin first by assuming that the players and count have no effect on the log-odds of a called strike (i.e. that $\Theta^{u}_{CO}, \Theta^{u}_{CA}, \Theta^{u}_{P},$ and $\Theta^{u}_{B}$ are all equal to the zero vector for each umpire). This model, hereafter referred to as Model 1, assumes that the only relevant predictor of an umpire's ball/strike decision is the pitch location but allows for umpire-to-umpire heterogeneity. We model, \textit{a priori}, \begin{eqnarray*} \Theta^{u_{1}}_{0}, \ldots, \Theta^{u_{93}}_{0} | \Theta_{0} \sim N\left(\Theta_{0}, \tau^{2}_{0}I_{4}\right) \\ \Theta^{u_{1}}_{LO}, \ldots, \Theta^{u_{93}}_{LO} | \Theta_{LO} \sim N\left(\Theta_{LO}, \tau^{2}_{LO}I_{4}\right)\\ \Theta_{0} | \sigma_{0}^{2} \sim N\left(0_{4}, \sigma_{0}^{2}I_{4}\right) \\ \Theta_{LO} | \sigma^{2}_{LO} \sim N\left(\mu_{LO}, \sigma^{2}_{LO}I_{4}\right) \end{eqnarray*} The vector $\mu_{LO}$ is taken to be the vector of standard deviations of the hGAM forecast for each combination of batter and pitcher handedness. In this way, Model 1 centers the prior distribution of the log-odds of a strike at the hGAM forecasted log-odds. We may interpret the parameters $\tau^{2}_{0}$ and $\tau^{2}_{LO}$ as capturing the umpire-to-umpire variability in the intercept and location effects and we may view $\Theta_{0}$ and $\Theta_{LO}$ as the mean intercept and location effects averaged over all umpires. By placing a further level of prior hierarchy on $\Theta_{0}$ and $\Theta_{LO},$ we render the $\Theta_{0}^{u}$'s and $\Theta_{LO}^{u}$'s dependent, both \textit{a priori} and \textit{a posteriori}. In this way, while we are fitting a separate model for each umpire, these models are ``mutually informative'' in the sense that the estimate of umpire $u$'s intercept vector $\Theta^{u}_{0}$ will, for instance, be ``shrunk'' towards the average of all umpires' intercept vectors by an amount controlled by $\tau^{2}_{0}$ and $\tau^{2}_{LO}.$ Further priors on the hyper-parameters $\sigma^{2}_{0}$ and $\sigma^{2}_{LO}$ introduce dependence between the components of $\Theta^{u}_{0}$ and $\Theta^{u}_{LO}$ as well, enabling us to ``borrow strength'' between the four combination of batter and pitcher handedness. While Model 1 essentially estimates a separate called strike probability surface for each umpire, it entirely precludes the possibility of player or count effects. We now consider two successive expansions of Model 1. In Model 2, we incorporate both catcher and count effects that are assumed to be constant across umpires. That is, in Model 2 we assume that all of the the $\Theta^{u}_{CO}$'s are all equal to some common value $\Theta_{CO}$ and all of the $\Theta^{u}_{CA}$'s are equal to some common value $\Theta_{CA}.$ Similarly, in Model 3 we augment Model 2 with constant pitcher effects and constant batter effects. \textit{A priori}, we model $$ \Theta_{CO} | \sigma^{2}_{CO} \sim N\left(0_{11}, \sigma^{2}_{CO}I_{11}\right) $$ and consider similar, zero-mean spherically-symmetric Gaussian priors for $\Theta_{CA}, \Theta_{P}$ and $\Theta_{B},$ while retaining the same prior specification on the $\Theta^{u}_{0}$'s and $\Theta^{u}_{LO}$'s. Though they elaborate on Model 1, Models 2 and 3 still represent a vast simplification to the general model in Equation~\ref{eq:general_model} as they assume that there is no umpire-to-umpire variability in the count or player effects. This leads us to consider Model 4, which builds on Model 2 by allowing umpire-specific count and catcher effects, and Model 5, which includes umpire-specific batter and pitcher effects and corresponds to the general model in Equation~\ref{eq:general_model} We model \begin{eqnarray*} \Theta^{u_{1}}_{CO}, \cdots, \Theta^{u_{93}}_{CO} | \Theta_{CO},\tau_{CO}^{2} \sim N\left(\Theta_{CO}, \tau^{2}_{CO}I_{11}\right) \\ \Theta^{u}_{CO} | \sigma^{2}_{CO} \sim N\left(0_{11}, \sigma^{2}_{CO}I_{11}\right) \end{eqnarray*} and consider similarly structured prior hierarchies for $\Theta^{u}_{CA}, \Theta^{u}_{B}, \Theta^{u}_{P}$ in Models 4 and 5. Throughout, we place independent Inverse Gamma(3,3) hyper-priors on the top-level variance parameters $\sigma^{2}_{0}, \sigma^{2}_{LO}, \sigma^{2}_{CO}, \sigma^{2}_{CA}, \sigma^{2}_{P}$ and $\sigma^{2}_{B}.$ It remains to specify the hyper-parameters $\tau^{2}_{0}, \tau^{2}_{LO}, \tau^{2}_{CO}, \tau^{2}_{CA},\tau^{2}_{P}$ and $\tau^{2}_{B}$ which capture the umpire-to-umpire variability in the intercept, location, count, and player effects. For simplicity, we fix these hyper-parameters to be equal to $0.25$ in the appropriate models. To motivate this choice, consider how two umpires would call a pitch thrown at a location where the historical GAM forecasts a 50\% called strike probability. According to Model 2, the difference in the two umpires' log-odds of a called strike follows a $N\left(0, 2(\tau_{0}^{2} + \tau^{2}_{CO} + \tau^{2}_{CA})\right)$ distribution, \textit{a priori}. Taking $\tau_{0}^{2} = \tau^{2}_{CO} = \tau^{2}_{CA} = 0.25$ reflects a prior belief that there is less than a 10\% chance that one umpire would call a strike 75\% of the time while the other calls it a strike only 25\% of the time. For simplicity, we take $\tau^{2}_{LO} = \tau^{2}_{B} = \tau^{2}_{P} = 0.25$ as well. \section{Model Performance and Comparison} \subsection{Predictive Performance} \label{sec:model_comparison} We fit each model in Stan \citep{Stan} and ran two MCMC chains for each model. All computations were done in R (versions 3.3.2 and later) and the MCMC simulation was carried out in RStan (versions 2.14.1 and later) on a high-performance computing cluster. For each model, after burning-in the first 2000 iterations, the Gelman-Rubin $\hat{R}$ statistic for each parameter was less than 1.1, suggesting convergence. We continued to run the chains, after this burn-in, until each parameter's effective sample size exceeded 1000. For Models 1 and 2, we found that running the sampler for 4000 total iterations was sufficient while for Models 3, 4, and 5, we needed 6000 iterations. The run time of these samplers ranged from just under an hour (Model 1) to 50 hours (Model 5). Using the simulated posterior draws from each model, we can approximate the mean of the posterior predictive distribution of the called strike probability for each pitch in our 2014 dataset. Table~\ref{tab:inSample_error} shows the misclassification and mean square error for Models 1 -- 5 over all pitches from 2014 and over two separate regions, as well as the error for the historical GAM forecasts. Region 1 consists of all pitches thrown within 1.45 inches on either side of the boundary of the average rule book strike zone defined in Section~\ref{sec:pitchfx}. Since the radius of the ball is about 1.45 inches, the pitches in Region 1 are all ``borderline'' calls in the sense that only part of the ball passes through the strike zone but are, by rule, strikes. Region 2 consists of all pitches thrown between 1.45 and 2.9 inches outside the boundary of the average rule book strike zone. These pitches miss the strike zone by an amount between one and two ball's width, and ought to be called balls by the umpire. To compute misclassification error, we used 0.5 as the threshold for a strike. \begin{table}[!h] \centering \caption{In-sample predictive performance for several models} \label{tab:inSample_error} \footnotesize \begin{tabular}{llcccccc} \\\hline ~ & ~ &Model 1 &Model2 & Model 3 & Model 4 & Model 5 & hGAM \\ ~ & $\#$ Parameters & 744 & 855 & 2582 & 11,067 & 171,168 & -- -- \\ \hline Overall & MISS & 0.103 & 0.100 & 0.099 & 0.096 & \bf 0.0856 & 0.105 \\ ~ & MSE & 0.073 & 0.071 & 0.069 & 0.068 & \bf 0.061 & 0.074 \\ Region 1 & MISS & 0.248 & 0.236 & 0.232 & 0.225 & \bf 0.195 & 0.258 \\ ~ & MSE & 0.163 & 0.156 & 0.153 & 0.150 & \bf 0.133 & 0.168 \\ Region 2 & MISS & 0.214 & 0.209 & 0.205 & 0.203 & \bf 0.184 & 0.215 \\ ~ & MSE & 0.153 & 0.149 & 0.146 & 0.144 & \bf 0.129 & 0.156 \\ \hline \end{tabular} \end{table} We see that Models 1 -- 5 outperform the historical GAMs overall and in both Regions 1 and 2. This is hardly surprising, given that the hGAMs were trained on data from 2011 -- 2013 and the other models were trained on the 2014 data. Recall that Model 1 only accounted for pitch location. As we successively incorporating count and catcher (Model 2), and pitcher and batter (Model 3) effects, we find that the overall error drops. Finally, Model 5 has the best performance across the board. This is entirely expected as Model 5 given the tremendous number of parameters. Of course, we would be remiss if we assessed predictive performance only with training data. Table~\ref{tab:outSample_error} compares such out-of-sample predictive performance, by considering pitches from the 2015 season for which the associated batter, catcher, pitcher, and umpire all appeared in our 2014 dataset. \begin{table}[!h] \centering \footnotesize \caption{Out-of-sample predictive performance for several models} \label{tab:outSample_error} \begin{tabular}{llcccccc} \\\hline ~ & ~ &Model 1 &Model2 & Model 3 & Model 4 & Model 5 & hGAM \\ ~ & $\#$ Parameters & 744 & 855 & 2582 & 11,067 & 171,168 & -- -- \\ \hline Overall & MISS & 0.107 & 0.105 & \bf 0.105 & 0.106 & 0.106 & 0.109 \\ ~ & MSE & 0.075 & 0.074 & \bf0.074 & 0.075 & 0.074 & 0.076 \\ Region 1 & MISS & 0.256 & 0.245 & \bf0.244 & 0.248 & 0.246 & 0.267 \\ ~ &MSE & 0.167 & 0.162 & \bf0.161 & 0.163 & 0.162 & 0.173 \\ Region 2 & MISS & 0.236 & 0.232 & \bf0.231 & 0.233 & 0.234 & 0.237 \\ ~ &MSE & 0.169 & 0.166 & \bf0.165 & 0.166 & 0.165 & 0.170 \\ \hline \end{tabular} \end{table} Now we see that Model 3 has the best out-of-sample performance overall and in Regions 1 and 2. The fact that Models 4 and 5 have worse out-of-sample performance, despite having very good in-sample performance is a clear indication that these two over-parametrized models have overfit the data. One could argue, however, that comparing predictive performance on 2015 data is not the best means of diagnosing overfitting. \citet{Roegelle2014}, \citet{Mills2017a}, and \citet{Mills2017b} have documented year-to-year changes in umpires' strike zone enforcement ever since Major League Baseball began reviewing and grading umpires' decisions in 2009. In Appendix~\ref{app:holdout}, we report the results from a cross-validation study, in which we repeatedly re-fit Models 1 -- 5 using 90\% of the 2014 data and assessing performance on the remaining 10\%, that similarly demonstrates Model 3's superiority. Model 3's superiority over Models 1 and 5 reveals that although accounting for player effects can lead to improved predictions of called strike probabilities, we cannot reliably estimate an individual catchers catcher effects on individual umpires with a single season's worth of data. \subsection{Full Posterior Analysis} \label{sec:full_posterior_analysis} We now examine the posterior samples from Model 3 more carefully. Figure~\ref{fig:catcher_boxplots} shows box plots of the posterior distributions of catcher effects on the log-odds scale for the catchers with the top 10 posterior means, the bottom 10 posterior means, and the middle 10 posterior means. \begin{figure}[!h] \centering \includegraphics{catcher_boxplots.jpg} \caption{Comparative box plots of 30 catcher effects sorted by the posterior mean of their partial effect on the log-odds scale} \label{fig:catcher_boxplots} \end{figure} We see that there are some catchers, like Hank Conger and Buster Posey, whose posterior distributions are entirely supported on the positive axis, indicating that, all else being equal, umpires are more likely to call strikes when they are caught by these catchers as opposed to the baseline catcher, Brayan Pena. On the other extreme, there are some catchers like Tomas Tellis with distinctly negative effects. As we would expect, catchers who appeared very infrequently in our dataset have very wide posterior distributions. For instance, Austin Romine caught only 61 called pitches and his partial effect has the largest posterior variance among all catchers. It is interesting to see that all of the catcher effects, on the log-odds scale, are contained in the interval [-1.5,1.5], despite the prior placing nearly 20\% of its probability outside this interval. The maximum difference on the log-odds scale between the partial effects of any two catchers is 3, with high posterior probability. For context, a change of 3 in log-odds corresponds to a change in probability from 18.24\% to 81.76\%. As it turns out, the posterior distribution of each count effect is also almost entirely supported in the interval [-1.5, 1.5], on the log-odds scale. This would seem to suggest that catcher framing effects are comparable in magnitude to the effect of count. We explore this possibility in much greater detail in Appendix~\ref{app:catcher_count_effects}. Armed with our simulated posterior draws, we can create posterior predictive strike zones for a given batter-pitcher-catcher-umpire matchup. Suppose, for instance, that Madison Bumgarner is pitching to the batter Yasiel Puig, with Buster Posey catching. Figure~\ref{fig:matchup_50_90} shows the 50\% and 90\% contours of the posterior predictive called strike probability for two umpires, Angel Hernandez and Mike DiMuro, and an average umpire in a 2 -- 0 and 0 -- 2 count. Note, if the center of the pitch passes within the region bounded by the dashed gray line in the figure, then some part of the ball passes through the average rule book strike zone, shown in gray. Puig is a right-handed batter, meaning that he stands on the left-hand side of the approximate rule book strike zone, from the umpire's perspective. \begin{figure}[!h] \centering \includegraphics{matchup_50_90.jpg} \caption{50\% and 90\% contours for called strike probability for the Bumgarner-Puig-Posey matchup for different umpires in different counts.} \label{fig:matchup_50_90} \end{figure} Across the board, Hernandez's contours enclose more area than the average umpire's contours and DiMuro's contours enclose less area than the average umpire's. For instance, on a 2 -- 0 count, Hernandez's 50\% contour covers 4.37 sq. ft., the average umpire's covers 3.87 sq. ft., and DiMuro's covers 3.53 sq. ft. The contours on a 0 -- 2 pitch are much smaller, indicating that all else being equal, these umpires are less likely to call a strike on an 0 -- 2 pitch than on a 2 -- 0 pitch. Each of the 50\% contours extend several inches to the left or \textit{inside} edge of the approximate rule book strike zone. At the same time, the same contours do not extend nearly as far beyond the right or \textit{outside} edge of the strike zone. This means that, Hernandez, DiMuro, and the average umpires are more likely to call strikes on pitches that miss the inside edge of the strike zone than they are on pitches that miss the outside edge by the same amount. Even the 90\% contour on a 2 -- 0 count extend a few inches past the inside edge of the strike zone, implying that Hernandez, DiMuro, and the average umpire will almost always call a strike that misses the inside edge of a the strike zone so long as it is not too high or low. Interestingly, we see that the leftmost extents of DiMuro's and the average umpire's 90\% contours on a 0 -- 2 pitch nearly align with the dashed boundary on the inside edge. A pitch thrown at this location will barely cross the average rule book strike zone, indicating that at least on the inside edge, DiMuro and umpires on average tend to follow the rule book prescription, calling strikes over 90\% of the time. The same is not true at the top, bottom, or outside edge. For instance, the rightmost extent of average umpires' 90\% contour on a 0 -- 2 pitch lies several inches within the outside edge of the strike zone. So in the space of about four and a half inches, the average umpires' called strike probability drops dramatically from 90\% to 50\%, despite the fact that according to the rule book these pitches should be called a strike. Figure~\ref{fig:matchup_50_90} is largely consistent with the empirical observation that Hernandez tends to call a much more permissive strike zone than DiMuro: Hernandez called 42.67\% of taken pitches strikes (1624 strikes to 2182 balls) and DiMuro called 39.92\% of taken pitches strikes (1220 strikes to 1836 balls). On 2 -- 0 pitches, Hernandez's strike rate increased to 51.44\% (71 strikes to 67 balls) and DiMuro's increased to 48.31\% (57 strikes to 67 balls). \section{Impact of framing} \label{sec:framing_impact} We now turn our attention now to measuring the impact framing has on the game. Formally, let $S$ be a random variable counting the number of runs the pitching team gives up after the current pitch to the end of the half-inning. Using slightly different notation than that in Section~\ref{sec:models}, let $\mathbf{h}$ encode the handedness of the batter and pitcher and let $\mathbf{lo}$ be the estimated log-odds of a called strike from the appropriate historical GAM. Let $\mathbf{b}, \mathbf{ca}, \mathbf{co}, \mathbf{p}$ and $\mathbf{u}$ denote the batter, catcher, count, pitcher, and umpire involved in the pitch. Finally, denote the baseline catcher Brayan Pena by $ca_{0}.$ For compactness, let $\xi = \left(\mathbf{u}, \mathbf{co}, \mathbf{lo}, \mathbf{b}, \mathbf{p}, \mathbf{h} \right)$ and observe that every pitch in our dataset can be identified by the combination $\left(\mathbf{ca}, \xi\right).$ For each catcher $ca,$ let $\mathcal{P}_{ca}$ be the set of all called pitches caught by catcher $ca$: $$ \mathcal{P}_{ca} = \left\{ \left(\mathbf{ca}, \xi \right): \mathbf{ca} = ca \right\}. $$ Finally, let $TAKEN$ be an indicator for the event that the current pitch was taken and let $CALL \in \left\{Ball, Strike\right\}$ be the umpire's ultimate call. We will be interested in the expected value of $S,$ conditioned on $\left(\mathbf{ca}, \xi\right)$, the fact that pitch was taken, and the umpire's call. Assuming, conditioned on the count, the fact that the pitch was taken, and the call, $S$ is independent of pitch location and participants, we have $$ \mathbb{E}[S |\mathbf{ca}, \xi, TAKEN] = \sum_{CALL}{\mathbb{E}[S | COUNT, TAKEN, CALL]\mathbb{P}\left(CALL|\mathbf{ca}, \xi, TAKEN\right)} $$ To determine the expected number of runs given up that can be attributed to a catcher $ca$, we may consider the counter-factual scenario in which the catcher is replaced by the baseline catcher, Brayan Pena, with all other factors remaining the same. In this scenario, the expected number of runs the fielding team gives up in the remaining of the half-inning is $E\left[S | \mathbf{ca} = ca_{0}, \xi, TAKEN, CALL\right].$ We may interpret the difference $$ \mathbb{E}\left[S | \mathbf{ca} = ca, \xi, TAKEN, CALL\right] - \mathbb{E}\left[S | \mathbf{ca} = ca_{0}, \xi, TAKEN, CALL\right] $$ as the average number of runs saved (i.e. negative of runs given up) by catcher $ca$'s framing, relative to the baseline. A straightforward calculation shows that this difference is exactly equal to $$ f(ca, \xi) = \left(\mathbb{P}(Strike |\mathbf{ca} = ca, \xi, TAKEN) - \mathbb{P}(Strike| \mathbf{ca} = ca_{0}, \xi, TAKEN)\right) \times \rho(COUNT), $$ where $$ \rho(COUNT) = \mathbb{E}[S|COUNT, TAKEN, Ball] - \mathbb{E}[S|COUNT, TAKEN, Strike]. $$ We can interpret the difference in called strike probabilities above as catcher $ca$'s \textit{framing effect}: it is precisely how much more the catcher adds to the umpires' called strike probability than the baseline catcher, over and above the other pitch participants, pitch location, and count. We can easily simulate approximate draws from the posterior distribution of this difference using the posterior samples from Model 3. We interpret $\rho$ as the value of a called strike in a given count: it measures how many more runs a team is expected to give up if a taken pitch is called a ball as opposed to a strike. To compute $\rho$,we begin by computing the difference in the average numbers of runs scored after a called ball and after a called strike in each count. For instance, 182,405 0 -- 1 pitches were taken (140,667 balls, 41,738 called strikes) between 2011 and 2014. The fielding team gave up an average of 0.322 runs following a ball on a taken 0 -- 1 pitch, while they only gave up an average of 0.265 runs following a called strike on a taken 0 -- 1 pitch. So conditional on an 0 -- 1 pitch being taken, a called strike saves the fielding team 0.057 runs, on average. Table~\ref{tab:run_values} shows the number of runs scored after a called ball or a called strike for each count, as well as an estimate of $\rho.$ Also shown is the relative proportion of each count among our dataset of taken pitches from 2011 to 2014. We see, for instance, that a called strike is most valuable on a 3 -- 2 pitch but only 2.1\% of the taken pitches in our dataset occurred in a 3 -- 2 count. This calculation is very similar to the seminal run expectancy calculation of \citet{Lindsey1963}, though ours is based solely on count rather than on the number of outs and base-runner configuration. \citet{Albert2010} also computes a count-based run expectancy, through his valuations are derived using the linear weights formula of \citet{ThornPalmer1985} rather than the simple average. See \citet{Albert2015} for a more in-depth discussion of run expectancy. The weighted average run value of a called strike based on Table~\ref{tab:run_values} is 0.11 runs, slightly smaller than the value of 0.14 used by \citet{JudgePavlidisBrooks2015} and much smaller than the 0.161 figure used by \citet{Turkenkopf2008}. The discrepancy stems from the fact that we estimated the run values based only on taken pitches while most other valuations of strikes include swinging strikes and strikes called off of foul balls. It is worth stressing at this point that in our subsequent calculations of framing impact we use the count-based run valuation as opposed to the weighted average value. \begin{table}[!h] \centering \footnotesize \caption{Empirical estimates of run expectancy and run value, with standard errors in parentheses} \label{tab:run_values} \begin{tabular}{ccccc} \hline Count & Ball & Strike & Value of Called Strike $\rho$ & Proportion \\ \hline 0 -- 0 & 0.367 (0.002) & 0.305 (0.002) & 0.062 (0.002) & 36.2\%\\ 0 -- 1 & 0.322 (0.002) & 0.265 (0.004)& 0.057 (0.004) & 12.5\% \\ 0 -- 2 & 0.276 (0.003) & 0.178 (0.007) & 0.098 (0.008) & 5.5 \% \\ 1 -- 0 & 0.427 (0.003) & 0.324 (0.003) & 0.103 (0.005) & 11.5\% \\ 1 -- 1 & 0.364 (0.003) & 0.280 (0.004) & 0.084 (0.005) & 8.8\% \\ 1 -- 2 & 0.302 (0.003) & 0.162 (0.006) & 0.140 (0.006) & 6.9\%\\ 2 -- 0 & 0.571 (0.007) & 0.370 (0.006) & 0.201 (0.009) & 3.9\%\\ 2 -- 1 & 0.468 (0.005) & 0.309 (0.006) & 0.159 (0.008) & 4.0\%\\ 2 -- 2 & 0.383 (0.004) & 0.165 (0.006) & 0.218 (0.007) & 4.8\%\\ 3 -- 0 & 0.786 (0.013) & 0.481 (0.008) & 0.305 (0.015) & 1.9\%\\ 3 -- 1 & 0.730 (0.010) & 0.403 (0.009) & 0.327 (0.014) & 1.8\%\\ 3 -- 2 & 0.706 (0.008) & 0.166 (0.008) & 0.540 (0.011) & 2.1\%\\ \hline \end{tabular} \end{table} With our posterior samples and estimates of $\rho$ in hand, we can simulate draws from the posterior distribution of $f(ca,\xi)$ for each pitch in our dataset. An intuitive measure of the impact of catcher $ca$'s framing, which we denote RS for ``runs saved'' is $$ RS(ca) = \sum_{(\mathbf{ca}, \xi) \in \mathcal{P}_{ca}}{f(ca, \xi)}. $$ The calculation of RS is very similar to the one used by \cite{JudgePavlidisBrooks2015} to estimate the impact framing has on the game. Rather than using fixed baseline catcher, \citet{JudgePavlidisBrooks2015} reports the difference in expected runs saved relative to a hypothetical average catcher. According to their model, Brayan Pena, our baseline catcher, was no different than this average catcher, so our estimates of RS may be compared to the results of \citet{JudgePavlidisBrooks2015}. Table~\ref{tab:runsSaved} shows the top and bottom 10 catchers, along with the number of pitches in our dataset received by the catchers, and the posterior mean, standard deviation, and 95\% credible interval of their RS values. Also shown are \citet{JudgePavlidisBrooks2015}'s estimates of runs saved for the catchers, as well as the number of pitches used in their analysis. \begin{table}[!h] \centering \footnotesize \caption{Top and Bottom10 catchers according to the posterior mean of RS. The column BP contain \citet{JudgePavlidisBrooks2015}'s estimates that appeared on the Baseball Prospectus website} \label{tab:runsSaved} \begin{tabular}{llcccc} \hline Rank & Catcher & Runs Saved (SD) & 95\% Interval & N & BP\\ \hline 1 & Miguel Montero & 25.71 (5.03) &[15.61, 35.09] & 8086 & 11.2 (8172) \\ 2 & Mike Zunino & 22.72 (5.17) &[12.56, 32.31] & 7615 & 20.4 (7457) \\ 3 & Jonathan Lucroy & 19.56 (5.69) &[8.16, 30.49] & 8398 &16.4 (8241) \\ 4 & Hank Conger &19.34 (3.24) &[12.93, 25.65] & 4743 & 23.8 (4768) \\ 5 & Rene Rivera &18.81 (3.69) & [11.63, 25.89] & 5091 & 22.5 (5182) \\ 6 & Buster Posey &17.01 (4.14) & [8.79, 25.01] & 6385 & 23.6 (6190) \\ 7 & Russell Martin & 14.35 (4.41) & [5.85, 22.77] & 6388 & 14.9 (6502) \\ 8 & Brian McCann &14.01 (3.95) & [6.18, 21.66] & 6335 & 9.7 (6471) \\ 9 & Yasmani Grandal & 12.88 (2.98) & [7.18, 18.69] & 4248 & 14.5 (4363)\\ 10 & Jason Castro &12.61 (4.43) & [3.80, 21.08] & 7065 & 11.5 (7261) \\ \hline 92 & Josmil Pinto & -6.49 (1.41) & [-9.32, -3.76] & 1748 & -6.9 (1721)\\ 93 & Welington Castillo & -6.70 (4.28) & [-15.19, 1.78] & 6667 & -15.6 (6661) \\ 94 & Chris Iannetta & -7.50 (4.46) & [-16.18, 1.08] & 6493 & -7.3 (6527) \\ 95 & John Jaso & -7.76 (2.41) & [-12.50, -3.07] & 3172 & -11.3 (2879) \\ 96 & Anthony Recker & -8.37 (2.33) & [-13.29, -3.93] & 2935 & -13 (3102) \\ 97 & Gerald Laird & -8.68 (1.87) & [-12.29, -4.99] & 2378 & -9.6 (2616) \\ 98 & A. J. Ellis & -12.90 (3.79) & [-20.10, -5.38] & 5476 & -12.3 (5345) \\ 99 & Kurt Suzuki & -17.67 (4.25) & -[26.07, -9.35] & 6811 & -19.5 (7110) \\ 100 & Dioner Navarro & -18.81 (4.68) & [-28.00, -9.40] & 6659 & -19.8 (6877) \\ 101 & Jarrod Saltalamacchia & -23.98 (4.35) & [-32.76, -15.87] & 6498 & -34 (6764) \\ \hline \end{tabular} \end{table} According to our model, there is little posterior uncertainty that the framing effects of the top 10 catchers shown in Table~\ref{tab:runsSaved} had a positive impact for their teams, relative to the baseline catcher. Similarly, with the exception of Welington Castillo and Chris Iannetta, we are rather certain that the bottom 10 catchers' framing had an overall negative impact, relative to the baseline. We estimate that Miguel Montero's framing saved his team 25.71 runs on average, relative to the baseline. That is, had he been replaced by the baseline catcher on each of the 8,086 called pitches he received, his team would have given up an additional 25.71 runs, on average. Unsurprisingly, our estimates of framing impact differ from those of \citet{JudgePavlidisBrooks2015}'s model. This is largely due to differences in the model construction, valuation of a called strike, and collection of pitches analyzed. Indeed, in some cases, (e.g. Montero and Rene Rivera), they used more pitches to arrive at their estimates of runs saved while in others, we used more pitches (e.g. Mike Zunino and Jonathan Lucroy). Nevertheless, our estimate are not wholly incompatible with theirs; the correlation between our estimates and theirs is 0.94. Moreover, if we re-scale their estimates to the same number of pitches we consider, we find overwhelmingly that these re-scaled estimates fall within our 95\% posterior credible intervals. \subsection{Catcher Aggregate Framing Effect} \label{sec:safe2} Looking at Table~\ref{tab:runsSaved}, it is tempting to say that Miguel Montero is the best framer. After all, he is estimated to have saved the most expected runs relative to the baseline catcher. We observe, however, that Montero received 8086 called pitches while Conger received only 4743. How much of the difference in the estimated number of runs saved is due to their framing ability and how much to the disparity in the called pitches they received? A naive solution is to re-scale the RS estimates and compare the average number of runs saved on a per-pitch basis. While this accounts for the differences in number of pitches received, it does not address the fact that Montero appeared with different players than Conger and that the spatial distribution of pitches he received is not identical to that of Conger. In other words, even if we convert the results of Table~\ref{tab:runsSaved} to a per-pitch basis, the results would still be confounded by pitch location, count, and pitch participants. To overcome this dependence, we propose to \textit{integrate} $f(ca,\xi)$ over all $\xi$ rather than summing $f(ca,\xi)$ over $\mathcal{P}_{ca}.$ Such a calculation is similar to the spatially aggregate fielding evaluation (SAFE) of \citet{JensenShirleyWyner2009}. They integrated the average number of runs saved by a player successfully fielding a ball put in play against the estimated density of location and velocity of these balls to derive an overall fielding metric un-confounded by dispraise in players' fielding opportunities. We propose to integrate $f(ca, \xi)$ against the empirical distribution of $\xi$ and define catcher $ca$'s ``Catcher Aggregate Framing Effect'' or CAFE to be \begin{equation} \label{eq:safe2} CAFE(ca) = 4000 \times \frac{1}{N}\sum_{\xi}{f(ca, \xi)}, \end{equation} The sum in Equation~\ref{eq:safe2} may be viewed as the number of expected runs catcher $ca$ saves relative to the baseline if he participated in every pitch in our dataset. We then re-scale this quantity to reflect the impact of his framing on 4000 ``average'' pitches. We opted to re-scale by CAFE by 4000 as the average number of called pitches received by catchers who appeared in more than 25 games was just over 3,992. Of course, we could have easily re-scaled by a different amount. Once again, we can use our simulated posterior samples of the $\Theta^{u}$'s to simulate draws from the posterior distribution of CAFE. Table~\ref{tab:CAFE} shows the top and bottom 10 catchers ranked according to the posterior mean of their CAFE value, along with the posterior standard deviation, and 95\% credible interval for their CAFE value. Also shown is the a 95\% interval of each catchers marginal rank according to CAFE. \begin{table}[!h] \centering \footnotesize \caption{Top and Bottom10 catchers according to the posterior mean of $CAFE.$} \label{tab:CAFE} \begin{tabular}{llccccc} \hline Rank & Catcher & Mean (SD) & 95\% Interval & 95\% Rank Interval \\ \hline 1. & Hank Conger & 16.20 (2.72) & [10.84, 21.50] & [1, 11] \\ 2. & Christian Vazquez & 14.33 (2.94) & [8.26, 20.03] &[1, 19] \\ 3. & Rene Rivera & 14.04 (2.76) & [8.75, 19.31] & [1, 18] \\ 4. & Martin Maldonado & 13.24 (3.33) & [6.73, 19.68] & [1, 24] \\ 5. & Miguel Montero & 12.36 (2.42) &[7.50, 16.90] & [2, 22] \\ 6. & Yasmani Grandal & 11.90 (2.76) & [6.56, 17.29] & [2, 27] \\ 7. & Mike Zunino & 11.78 (2.69) & [6.51, 16.74] & [2, 26] \\ 8. & Chris Stewart & 11.63 (3.28) & [5.21, 18.03] & [1, 30] \\ 9. & Buster Posey & 11.16 (2.73) & [5.74, 16.51] & [2, 30] \\ 10. & Francisco Cervelli & 10.45 (3.21) & [4.06, 16.72] & [2, 36] \\ \hline 92. & Jordan Pacheco & -11.73 (3.80) & [-19.26, -4.30] & [68, 98] \\ 93. & Koyie Hill & -11.79 (5.67)& [-22.48, -0.68] & [53, 100] \\ 94. & Josh Phegley & -12.05 (4.66) & [-21.40, -3.20] & [64, 99] \\ 95. & Austin Romine & -12.76 (9.78) & [-32.14, 5.81] & [30, 101] \\ 96. & Jarrod Saltalamacchia & -14.00 (2.53) & [-19.11, -9.26] & [82, 99] \\ 97. & Brett Hayes & -14.04 (4.06) & [-21.51, -5.93] & [73, 100] \\ 98. & Gerald Laird & -14.96 (3.21) & [-21.17, -8.69] & [81, 99] \\ 99. & Josmil Pinto & -15.04 (3.27) & [-21.57, -8.78] & [82, 100] \\ 100. & Carlos Santana & -22.48 (4.63) & [-31.63, -13.26] & [93, 101] \\ 101. &Tomas Telis & -25.06 (3.85) & [-32.41, -17.27] & [98, 101] \\ \hline \end{tabular} \end{table} We see that several of the catchers from Table~\ref{tab:runsSaved} also appear in Table~\ref{tab:CAFE}. The new additions to the top ten, Christian Vazquez, Martin Maldonado, Chris Stewart, and Francisco Cervelli were ranked $13^{th}, 17^{th}, 18^{th}$ and $19^{th}$ according to the RS metric. The fact that they rose so much in the rankings when we integrated over all $\xi$ indicates that their original rankings were driven primarily by the fact that they all received considerably fewer pitches in the 2014 season than the top 10 catchers in Table~\ref{tab:runsSaved}. In particular, Vazquez received 3198 called pitches, Cervelli received 2424, Stewart received 2370, and Maldonado received only 1861. Interestingly, we see that now Hank Conger ranks ahead of Miguel Montero according to the posterior mean CAFE, indicating that the relative rankings in Table~\ref{tab:runsSaved} was driven at least partially by disparities in the pitches the two received than by differences in their framing effects. Though Conger emerges as a slightly better framer than Montero in terms of CAFE, the difference between the two is small, as evidenced by the considerable overlap in their 95\% posterior credible intervals. We find that in 95\% of the posterior samples, Conger had anywhere between the largest and $11^{th}$ largest CAFE. In contrast, we see that in 95\% of our posterior samples, Tomas Tellis's CAFE was among the bottom 3 CAFE values. Interestingly, we find much wider credible intervals for the marginal ranks among the bottom 10 catchers. Some catchers like Koyie Hill and Austin Romine appeared very infrequently in our dataset. To wit, Hill received only 409 called pitches and Romine received only 61. As we might expect, there is considerable uncertainty in our estimate about their framing impact, as indicated by the rather wide credible intervals of their marginal rank. \subsection{Year-to-year reliability of CAFE} \label{sec:safe2_reliability} We now consider how consistent CAFE is over multiple seasons. We re-fit our model using data from the 2012 to 2015 seasons. For each season, we restrict attention to those pitches within one foot of the approximate rule book strike zone from that season. We also use the log-odds from the GAM models trained on all previous seasons so that the model fit to the 2012 data uses GAM forecasts trained only on data from 2011 while the model fit to the 2015 data uses GAM forecasts trained only on data from 2011 to 2014. When computing the values of CAFE, we use the run values given in Table~\ref{tab:run_values} for each season. There were a total of 56 catchers who appeared in all four of these seasons. Table~\ref{tab:year_to_year_CAFE} shows the correlation between their CAFE values over time. \begin{table}[!h] \centering \caption{Correlation of CAFE across multiple seasons.} \label{tab:year_to_year_CAFE} \begin{tabular}{ccccc} \hline ~ & 2012 & 2013 & 2014 & 2015 \\ \hline 2012 & 1.00 & 0.70 & 0.56 & 0.41 \\ 2013 & 0.70 & 1.00 & 0.71 & 0.61 \\ 2014 & 0.56 & 0.71 & 1.00 & 0.58\\ 2015 & 0.41 & 0.61 & 0.58 & 1.00 \\ \hline \end{tabular} \end{table} In light of the non-stationarity in strike zone enforcement across seasons, it is encouraging to find moderate to high correlation between a player's CAFE in one season and the next. In terms of year-to-year reliability, the autocorrelations of 0.5 -- 0.7 place CAFE on par with slugging percentage for batters. Interestingly, the correlations between 2012 CAFE and 2013 CAFE and the correlation between 2013 CAFE and 2014 CAFE are greater than 0.7, but the correlation between 2014 CAFE and 2015 CAFE is somewhat lower, 0.58. While this could just be an artifact of noise, we do note that there was a marked uptick in awareness of framing between the 2014 and 2015 seasons, especially among fans and in the popular press. One possible reason for the drop in correlation might be umpires responding to certain catcher's reputations as elite pitch framers by calling stricter strike zones, a possibility suggested by \citet{Sullivan2016}. \section{Discussion} \label{sec:discussion} We systematically fit models of increasing complexity to estimate the effect a catcher has on an umpire's likelihood of calling a strike over and above factors like the count, pitch location, and other pitch participants. We found evidence that some catchers do exert a substantially positive or negative effect on the umpires but that the magnitude of these effects are about as large as the count effects. Using the model that best balanced fit and generalization, we were able to simulate draws from the posterior predictive distribution of the called strike probability probability of each taken pitch in 2014. For each pitch, we estimated the apparent framing effect of the catcher involved and, following a procedure similar to that of \citet{JudgePavlidisBrooks2015}, we derived an estimate of the impact framing has on the game, RS. Our RS metric is largely consistent with previously reported estimates of the impact of catcher framing, but a distinct advantage is our natural quantification of the estimation uncertainty. We find that there is considerable posterior uncertainty in this metric, making it difficult to estimate precisely the impact a particular catcher's framing had on his team's success. While the construction of RS is intuitive, we argue that it does not facilitate reasonable comparisons of catchers' framing since, by construction, the metric is confounded by the other factors in our model. We propose a new metric CAFE that integrates out the dependence of RS on factors like pitch location, count, and other pitch participants. CAFE compares catchers by computing the impact each catcher's framing would have had had he received every pitch in our dataset. Like RS, there is a considerable uncertainty in our CAFE estimates. While we are able to separate the posterior distributions of CAFE of good framers from bad framers, there is considerable overlap in the posterior distributions of CAFE within these groups, making it difficult to distinguish between the good framers or between the bad framers. Despite this, we find rather high year-to-year correlation in CAFE, though there is a marked drop-off between 2014 and 2015. This coincides with the increased attention on framing in the sports media and sabermetrics community following the 2014 season. One potential explanation for this drop-off is that umpires adjusted their strike zone enforcement when calling pitches caught by catchers with reputations as good framers. Our findings may have several implications for Major League Baseball teams. The uncertainty in both RS and CAFE make it difficult to precisely value pitch framing with any reasonable degree of certainty. For instance, the 95\% credible interval of Jonathan Lucroy's RS is [8.16, 30.49]. Using the heuristic of 10 expected runs per win and $\$7$M per win \citep{Cameron2014, Pollis2013}, our model suggests that Lucroy's framing was worth anywhere between $\$5.7$M and $\$21.34$M. In light of the non-stationarity between seasons and the recent drop-off in correlation in CAFE, it is difficult to forecast the impact that any individual catcher's framing will have into the future. The observed overlaps in the posterior distribution of CAFE means that with a single season's worth of data, we cannot discriminate between good framers with the same certainty that we can separate good framers from bad framers. As a concrete example, our model indicates that both Miguel Montero and Hank Conger were certainly better framers than Jarrod Saltalamacchia, but it cannot tell which of Montero or Conger had a larger, positive impact. \subsection{Extensions} There are several extensions of and improvements to our model that we now discuss. While we have not done so here, one may derive analogous estimates of RS and CAFE for batters and pitchers in a straightforward manner. Our model only considered the count into which a pitch was thrown but there is much more contextual information that we could have included. For instance, \cite{RosalesSpratt2015} have suggested the distance between where a catcher actually receives the pitch and where he sets up his glove before the pitch is thrown could influence an umpire's ball-strike decision making. Such glove tracking data is proprietary but should it be become publicly available, one could include this distance along with its interaction with the catcher indicator into our model. In addition, one could extend our model to include additional game-state information such as the ball park, the number of outs in the half-inning, the configuration of the base-runners, whether or not the home team is batting, and the number of pitches thrown so far in the at-bat. One may argue that umpires tend to call more strikes late in games which are virtually decided (e.g. when the home team leads by 10 runs in the top of the ninth inning) and easily include measures related to the run-differential and time remaining into our model. Expanding our model in these directions may improve the overall predictive performance slightly without dramatically increasing the computational overhead. More substantively, we have treated the umpires' calls as independent events throughout this paper. \citet{ChenMoskowitzShue2016} reported a negative correlation in consecutive calls, after adjusting for location. To account for this negative correlation in consecutive calls, we could augment our model with binary predictors encoding the results of the umpires' previous $k$ calls in the same at-bat, inning, or game. Incorporating this Markov structure to our model would almost certainly improve the overall estimation of called strike probability and may produce slightly smaller estimates of RS and CAFE. At this point, however, it is not \textit{a priori} obvious how large the differences would be or how best to pick $k.$ It is also well-known that pitchers try to throw to different locations based on the count, but we make no attempt to model or exploit this phenomenon. Understanding the effect of pitch sequencing on umpires' decision making (and vice-versa) would also be an interesting line of future research. We incorporated pitch location in a two-step procedure: we started from an already quite good generalized additive model trained with historical data and used the forecasted log-odds of a called strike as a predictor in our logistic regression model. Much more elegant would have been to fit a single semi-parametric model by placing, say, a common Gaussian process prior on the umpire-specific functions of pitch location, $f^{u}(x,z)$ in Equation~\ref{eq:general_model}. We have also not investigated any potential interactions between pitch location, player, and count effects. While we could certainly add interaction terms to the logistic models considered above, doing so vastly increases the number of parameters and may require more thoughtful prior regularization. A more elegant alternative would be to fit a Bayesian ``sum-of-trees'' model using \citet{ChipmanGeorgeMcCulloch2010}'s BART procedure. Such a model would likely result in more accurate called strike probabilities as it naturally incorporates interaction structure. We suspect that this approach might reveal certain locations and counts in which framing is most manifest. Finally, we return to the two pitches from the 2015 American League Wild Card game in Figure~\ref{fig:keuchel_tanaka}. Fitting our model to the 2015 data, we find that Eric Cooper was indeed much more likely to call the Keuchel pitch a strike than the Tanaka pitch (81.72\% vs 62.59\%). Interestingly, the forecasts from the hGAMs underpinning our model were 51.31\% and 50.29\%, respectively. Looking a bit further, had both catchers been replaced by the baseline catcher, our model estimates a called strike probability of 77.58\% for the Keuchel pitch and 61.29\% for the Tanaka pitch, indicating that Astros' catcher Jason Castro's apparent framing effect (4.14\%) was slightly larger than Yankee's catcher Brian McCann's (1.30\%). The rather large discrepancy between the apparent framing effects and the estimated called strike probabilities reveals that we cannot immediately attribute the difference in calls on these pitches solely to differences in the framing abilities of the catchers. Indeed, we note that the two pitches were thrown in different counts: Keuchel's pitch was thrown in a 1 -- 0 count and Tanaka's was thrown in a 1 -- 1 count. In 2015, umpires were much more likely to call strikes in a 1 -- 0 count than they were in a 1 -- 1 count, all else being equal. Interestingly, had the Keuchel and Tanaka pitches been thrown in the same count, our model still estimates that Cooper would be consistently more likely to call the Keuchel pitch a strike, lending some credence to disappointed Yankees' fans' claims that his strike zone enforcement favored the Astros. Ultimately, though, it is not so clear that the differences in calls on the two pitches shown in Figure~\ref{fig:keuchel_tanaka} specifically was driven by catcher framing as much as it was driven by random chance. \newpage
{ "attr-fineweb-edu": 2.548828, "attr-cc_en_topic": 0, "domain": "arxiv" }
BkiUdGA5qsFAf21dCWx_
\section{Introduction} The Bradley-Terry model \cite{BRADLEY01121952,Zermelo1929} has long been used for evaluating paired comparisons, such as games between pairs of teams in which one team or the other wins each game. The model assigns a strength parameter to each team, and the odds ratio associated with the probability of a team winning a game is equal to the ratio of the strengths. These strength parameters can be estimated based on the full set of game results and used to rank teams or make future predictions. The model has been extended by Davidson \cite{Davidson1970} to contests in which a tie or drawn contest is a possible outcome. For many years, such ties were a common occurrence in the sport of ice hockey, but recently tie-breaking methods such as an overtime period played under different rules and/or a shootout in which the teams alternate penalty shots are used to determine a winner. Results in overtime or shootouts can be evaluated differently from wins in regulation play. For instance, since 2006\cite{iihf2006} competitions organized by International Ice Hockey Federation (IIHF) have awarded three points in the standings to a team winning in regulation, two points for a win in overtime or a shootout, one point for a loss in overtime or a shootout, and no points for a loss in regulation, and many leagues have followed suit. Compared to the prior system which awarded two points for a win, one for a tie, and none for a loss, which effectively treated a tie as half a win and half a loss, the four-outcome system treats an overtime/shootout win as $2/3$ of a win and $1/3$ of a loss.\footnote{We do not consider non-zero-sum point systems such as that used in association football (soccer) which awards three points for a win and one for a draw, so that drawn matches are only worth two points total rather than three. Likewise, the National Hockey League awards all wins two points and overtime/shootout losses one point; this 2-2-1-0 system awards three total points for games which go into overtime, but only two for games decided in regulation.} One possible approach to either of these situations (games with three or four outcomes) is to use standard Bradley-Terry and assign fractional wins as appropriate to the point system (see, e.g., \cite{Whelan2019}). However, this is unsatisfying, as it provides no way to assign a probability for a future game to end in a tie or overtime or shootout result. In this paper, we instead consider a generalization of the tie model of \cite{Davidson1970} which associates one strength parameter to each team, along with a single parameter describing the tendency for games to go into overtime. The rest of this paper is organized as follows: In \sref{s:models}, we describe the three models (standard Bradley-Terry, Bradley-Terry-Davidson including ties, and a new model with four possible game outcomes including overtime/shootout wins and losses), and exhibit a generalization of the relevant formulas which describes all three cases. In \sref{s:inference} we describe methods for inferring the relevant parameters of these models given a set of game results: maximum likelihood estimation, and Bayesian inference using either a Gaussian approximation or Hamiltonian Monte Carlo. In \sref{s:demo} we demonstrate these methods using a recent set of game results: the 2020-2021 Eastern College Athletic Conference (ECAC) season. This season used the standard IIHF system with 3-2-1-0 points assigned for regulation wins, overtime/shootout wins, overtime/shootout losses, and regulation losses, respectively. For the purposes of illustration, we evaluate the ECAC results with the four-outcome model as well as with the other two models, treating in one case all wins the same, and in the other all overtime/shootout results as ties. \section{Models} \label{s:models} In the standard Bradley-Terry model \cite{BRADLEY01121952,Zermelo1929} each team has a strength $\pi_i\in(0,\infty)$, and the modelled probability that team $i$ will win a game with team $j$ is \begin{equation} \theta^{\text{W}}_{ij} = \frac{\pi_i}{\pi_i+\pi_j} \end{equation} so that the probability of a set of game outcomes $D$ in which team $i$ plays team $j$ $n_{ij}$ times and wins $n^{\text{W}}_{ij}$ of those games is\footnote{The first form explicitly includes each pair of teams only once, while the second corrects for the double-counting, taking advantage of the fact that $n^{\text{W}}_{ii}=0=n^{\text{L}}_{ii}$. If the order of the games between pairs of teams is ignored, the sampling distribution for the $\{n^{\text{W}}_{ij}\}$ is instead $p(\{n^{\text{W}}_{ij}\}|\{\pi_i\}) = \left(\prod_{i=1}^{t}\prod_{j=1}^{t} \frac{(n_{ij})!}{(n^{\text{W}}_{ij})!(n^{\text{L}}_{ij})!} (\theta^{\text{W}}_{ij})^{n^{\text{W}}_{ij}} (\theta^{\text{L}}_{ij})^{n^{\text{L}}_{ij}}\right)^{\frac{1}{2}}$.} \begin{equation} P(D|\{\pi_i\}) = \prod_{i=1}^{t}\prod_{j=i+1}^{t} (\theta^{\text{W}}_{ij})^{n^{\text{W}}_{ij}} (\theta^{\text{L}}_{ij})^{n^{\text{L}}_{ij}} = \left( \prod_{i=1}^{t}\prod_{j=1}^{t} (\theta^{\text{W}}_{ij})^{n^{\text{W}}_{ij}} (\theta^{\text{L}}_{ij})^{n^{\text{L}}_{ij}} \right)^{\frac{1}{2}} \ , \end{equation} where $t$ is the number of teams, $n^{\text{L}}_{ij}=n^{\text{W}}_{ji}=n_{ij}-n^{\text{W}}_{ij}$ and $\theta^{\text{L}}_{ij}=\theta^{\text{W}}_{ji}=1-\theta^{\text{W}}_{ij}$. Davidson \cite{Davidson1970} proposed an extension for competitions which include the probabilities of ties, in which the probabilities of the three possible outcomes of a game are \begin{subequations} \begin{align} \theta^{\text{W}}_{ij} &= \frac{\pi_i}{\pi_i + \nu\sqrt{\pi_i\pi_j} + \pi_j} \\ \theta^{\text{T}}_{ij} &= \frac{\nu\sqrt{\pi_i\pi_j}} {\pi_i + \nu\sqrt{\pi_i\pi_j} + \pi_j} \\ \theta^{\text{L}}_{ij} &= \frac{\pi_j}{\pi_i + \nu\sqrt{\pi_i\pi_j} + \pi_j} \ , \end{align} \end{subequations} where $\nu\in[0,\infty)$ is an additional parameter which describes how likely ties are to occur. (The probability of a tie in a game between evenly matched teams is $\frac{\nu}{2+\nu}$.) Evidently, $\theta^{\text{L}}_{ij}=\theta^{\text{W}}_{ji}$, $\theta^{\text{T}}_{ij}=\theta^{\text{T}}_{ji}$ and $\theta^{\text{W}}_{ij}+\theta^{\text{T}}_{ij}+\theta^{\text{L}}_{ij}=1$. The probability of a given set of game outcomes in which the $n_{ij}=n^{\text{W}}_{ij}+n^{\text{T}}_{ij}+n^{\text{L}}_{ij}$ games between teams $i$ and $j$ result in $n^{\text{W}}_{ij}$ wins, $n^{\text{T}}_{ij}$ ties and $n^{\text{L}}_{ij}$ losses for team $i$ (where $n^{\text{T}}_{ij}=n^{\text{T}}_{ji}$ and $n^{\text{L}}_{ij}=n^{\text{W}}_{ji}$) is \begin{equation} P(D|\{\pi_i\},\nu) = \left( \prod_{i=1}^{t}\prod_{j=1}^{t} (\theta^{\text{W}}_{ij})^{n^{\text{W}}_{ij}} (\theta^{\text{T}}_{ij})^{n^{\text{T}}_{ij}} (\theta^{\text{L}}_{ij})^{n^{\text{L}}_{ij}} \right)^{\frac{1}{2}} \ . \end{equation} We propose an extension appropriate for a system in which a win in overtime or a shootout is treated as $2/3$ of a win and $1/3$ of a loss, Writing the four possible game outcomes as RW for regulation win, OW for overtime/shootout win, OL for overtime/shootout loss, and RL for regulation loss, the modelled probability of each outcome would be\footnote{The exponents are chosen to correspond to the share of the points ($2/3$ and $1/3$, respectively) awarded for an overtime/shootout win or loss. This has the desirable feature that the maximum likelihood equation \eqref{e:pMLE} becomes (after multiplying by $3$) $$ \sum_{j=1}^{t} n_{ij} \left( 3\ML{\theta}^{\text{RW}}_{ij} + 2\ML{\theta}^{\text{OW}}_{ij} + \ML{\theta}^{\text{OL}}_{ij} \right) = \sum_{j=1}^{t} \left( 3n^{\text{RW}}_{ij} + 2n^{\text{OW}}_{ij} + n^{\text{OL}}_{ij} \right) \ , $$ i.e., that the expected number of points for each team equals the actual number. See also the discussion in \sref{s:conclusions} about possible alternative models, including extended models in which the exponents are not fixed, but inferred from the data.} \begin{subequations} \begin{align} \theta^{\text{RW}}_{ij} &= \frac{\pi_i}{\pi_i + \nu\pi_i^{2/3}\pi_j^{1/3} + \nu\pi_i^{1/3}\pi_j^{2/3} + \pi_j} \\ \theta^{\text{OW}}_{ij} &= \frac{\nu\pi_i^{2/3}\pi_j^{1/3}} {\pi_i + \nu\pi_i^{2/3}\pi_j^{1/3} + \nu\pi_i^{1/3}\pi_j^{2/3} + \pi_j} \\ \theta^{\text{OL}}_{ij} &= \frac{\nu\pi_i^{1/3}\pi_j^{2/3}} {\pi_i + \nu\pi_i^{2/3}\pi_j^{1/3} + \nu\pi_i^{1/3}\pi_j^{2/3} + \pi_j} \\ \theta^{\text{RL}}_{ij} &= \frac{\pi_j}{\pi_i + \nu\pi_i^{2/3}\pi_j^{1/3} + \nu\pi_i^{1/3}\pi_j^{2/3} + \pi_j} \ . \end{align} \end{subequations} The probability for a set of game outcomes will then be \begin{equation} P(D|\{\pi_i\},\nu) = \left( \prod_{i=1}^{t}\prod_{j=1}^{t} (\theta^{\text{RW}}_{ij})^{n^{\text{RW}}_{ij}} (\theta^{\text{OW}}_{ij})^{n^{\text{OW}}_{ij}} (\theta^{\text{OL}}_{ij})^{n^{\text{OL}}_{ij}} (\theta^{\text{RL}}_{ij})^{n^{\text{RL}}_{ij}} \right)^{\frac{1}{2}} \ . \end{equation} If we write $\lambda_i=\ln\pi_i\in(-\infty,\infty)$ and $\tau=\ln\nu\in(-\infty,\infty)$, we can describe all three models as special cases of a general model in which the probability of a game between teams $i$ and $j$ ending in outcome $I$ is \begin{equation} \label{e:thetaImodel} \theta^{I}_{ij} = \frac{\pi_i^{p_I} \pi_j^{1-p_I} \nu^{o_I}} {\sum_J \pi_i^{p_J} \pi_j^{1-p_J} \nu^{o_J}} = \frac{(\pi_i/\pi_j)^{p_I}\nu^{o_I}} {\sum_J (\pi_i/\pi_j)^{p_J} \nu^{o_J}} = {\boldsymbol{\sigma}}(\{p_J(\lambda_i-\lambda_j)+o_J\tau|J\})_I \ , \end{equation} where \begin{equation} {\boldsymbol{\sigma}}(\mathbf{x})_I = \frac{e^{x_I}}{\sum_J e^{x_J}} \end{equation} is a vector equivalent of the logistic function known as the softmax function.\cite{Bridle1990} The probability for a set of game outcomes is \begin{equation} P(D|\{\pi_i\},\nu) = \left( \prod_{i=1}^{t}\prod_{j=1}^{t}\prod_I (\theta^{I}_{ij})^{n^{I}_{ij}} \right)^{\frac{1}{2}} \ . \end{equation} Specifically, \begin{itemize} \item For the standard Bradley-Terry model, $p_{\text{W}}=1$, $p_{\text{L}}=0$, and $o_{\text{W}}=o_{\text{L}}=0$. \item For the Bradley-Terry-Davidson model with ties, $p_{\text{W}}=1$, $p_{\text{T}}=\frac{1}{2}$, $p_{\text{L}}=0$, $o_{\text{W}}=o_{\text{L}}=0$, and $o_{\text{T}}=1$. \item For the model introduced in this paper, $p_{\text{RW}}=1$, $p_{\text{OW}}=\frac{2}{3}$, $p_{\text{OL}}=\frac{1}{3}$, $p_{\text{RL}}=0$, $o_{\text{RW}}=o_{\text{RL}}=0$, and $o_{\text{OW}}=o_{\text{OL}}=1$. \end{itemize} All of these models satisfy $\sum_I\theta^{I}_{ij}=1$, and have ``opposite'' outcomes $I$ and $-I$ such that $\theta^{-I}_{ij}=\theta^{I}_{ji}$, $p_{-I}=1-p_{I}$, and $o_{-I}=o_{I}$. They also satisfy $0\le p_I \le 1$ and $o_I\in\{0,1\}$. We confine ourselves below to cases where these properties hold. \section{Inference of Parameters} \label{s:inference} \subsection{Maximum Likelihood} Maximum likelihood estimates (MLEs) of Bradley-Terry strength parameters \cite{Zermelo1929,Ford:1957,Davidson1970} provide a straightforward way of associating a ``rating'' to each team based on their game results, and have been proposed as a replacement for less reliable ways of evaluating a team's game results in light of the difficulty of their schedule.\cite{KRACH1993} We can consider the probability $P(D|\{\pi_i\},\nu)=P(D|\{\lambda_i\},\tau)$ as a likelihood function of the parameters $\{\lambda_i\}$ and $\tau$, with log-likelihood \begin{equation} \lnP(D|\{\lambda_i\},\tau) = \frac{1}{2} \sum_{i=1}^{t}\sum_{j=1}^{t}\sum_I n^{I}_{ij}\ln\theta^{I}_{ij} \ . \end{equation} We can use the identity \begin{equation} d \ln{\boldsymbol{\sigma}}(\mathbf{x})_I = dx_I - \sum_{J} dx_J{\boldsymbol{\sigma}}(\mathbf{x})_J \end{equation} to show that \begin{subequations} \begin{equation} \frac{\partial\ln\theta^{I}_{ij}}{\partial\tau} = o_I - \sum_J o_J \theta^{J}_{ij} \end{equation} and \begin{equation} \frac{\partial\ln\theta^{I}_{ij}}{\partial\lambda_k} = (\delta_{ik}-\delta_{jk}) \left(p_I-\sum_J p_J\theta^{J}_{ij}\right) \ , \end{equation} \end{subequations} which means that \begin{equation} \label{e:dlnPdtau} \frac{\partial\lnP(D|\{\lambda_i\},\tau)}{\partial\tau} = \frac{1}{2}\sum_{i=1}^{t}\sum_{j=1}^{t}\sum_I o_I n^{I}_{ij} - \frac{1}{2}\sum_{i=1}^{t}\sum_{j=1}^{t}n_{ij}\sum_I o_I \theta^{I}_{ij} \end{equation} and \begin{equation} \label{e:dlnPdlambda} \frac{\partial\lnP(D|\{\lambda_i\},\tau)}{\partial\lambda_k} = \sum_{i=1}^{t} \sum_I n^{I}_{ki} p_I - \sum_{i=1}^{t} n_{ki} \sum_I p_I \theta^{I}_{ki} \ . \end{equation} Using these, we can write the maximum likelihood equations as \begin{equation} n^o = \frac{1}{2}\sum_{i=1}^{t}\sum_{j=1}^{t}n_{ij} \sum_I o_I \ML{\theta}^{I}_{ij} = \frac{1}{2}\sum_{i=1}^{t}\sum_{j=1}^{t}n_{ij} \sum_I o_I \frac{(\ML{\pi}_i/\ML{\pi}_j)^{p_I} \ML{\nu}^{o_I}} {\sum_J (\ML{\pi}_i/\ML{\pi}_j)^{p_J} \ML{\nu}^{o_J}} \end{equation} and \begin{equation} \label{e:pMLE} p_k = \sum_{i=1}^{t} n_{ki} \sum_I p_I \ML{\theta}^{I}_{ki} = \sum_{i=1}^{t} n_{ki} \sum_I p_I \frac{(\ML{\pi}_k/\ML{\pi}_i)^{p_I} \ML{\nu}^{o_I}} {\sum_J (\ML{\pi}_k/\ML{\pi}_i)^{p_J} \ML{\nu}^{o_J}} \end{equation} where \begin{equation} n^o = \frac{1}{2}\sum_{i=1}^{t}\sum_{j=1}^{t}\sum_I o_I n^{I}_{ij} \end{equation} can be interpreted in the models considered as the number of games which are tied or go to overtime, respectively, and \begin{equation} p_k = \sum_{i=1}^{t} \sum_I n^{I}_{ki} p_I \end{equation} can be seen as the total number of ``points'' for team $i$. The maximum likelihood equation set each of these quantities equal to their expectation values. We can solve the maximum likelihood equations by a generalization of the iterative method in \cite{Ford:1957}. writing them \begin{equation} \ML{\nu} = n^o \left/ \left( \frac{1}{2}\sum_{i=1}^{t}\sum_{j=1}^{t}n_{ij} \frac{\sum_I o_I (\ML{\pi}_i/\ML{\pi}_j)^{p_I}} {\sum_J (\ML{\pi}_i/\ML{\pi}_j)^{p_J} \ML{\nu}^{o_J}} \right) \right. \end{equation} (where we have used the fact that the only non-zero term in the numerator has $o_I=1$) and \begin{equation} \ML{\pi}_k = p_k \left/ \left( \sum_{i=1}^{t} n_{ki} \frac{\sum_I p_I\ML{\pi}_k^{p_I-1}\ML{\pi}_i^{-p_I} \ML{\nu}^{o_I}} {\sum_J (\ML{\pi}_k/\ML{\pi}_i)^{p_J} \ML{\nu}^{o_J}} \right) \right. \ . \end{equation} As in the standard Bradley-Terry model, the overall multiplicative scale of $\ML{\pi}_k$ is undefined (because $\theta^{I}_{ij}$ can be written so that the team strengths appear only in the combination $\pi_j/\pi_i$), so it is necessary to rescale the team strengths at each iteration to preserve a property such as $\prod_{i=1}^{t} \ML{\pi}_i=1$. Beyond that, there are conditions for the maximum likelihood estimates to be finite and well-defined, which are explored in e.g., \cite{Albert1984,Santner1986,ButlerWhelan}. \subsection{Bayesian Approach} It is useful to move beyond maximum likelihood estimates, both to quantify uncertainty in the model parameters, and to make predictions about the outcome of future games. (For instance, \cite{Whelan2019} proposed simulating future games with probabilites drawn from a posterior distribution capturing the uncertainty in the strength parameters, rather than fixed probabilties generated from the MLEs of those parameters.) A convenient framework for parameter estimates including uncertainties is Bayesian inference, which defines the posterior probability density for the parameters $\{\pi_i\}$ and $\nu$, or equivalently $\{\lambda_i\}$ and $\tau$, given a set of game results $D$ and prior assumptions $I$, as \begin{equation} f(\{\lambda_i\},\tau|D,I) = \frac{P(D|\{\lambda_i\},\tau)\,f(\{\lambda_i\},\tau|I)}{P(D|I)} \propto P(D|\{\lambda_i\},\tau)\,f(\{\lambda_i\},\tau|I) \ . \end{equation} A variety of choices can be made for the multivariate prior distribution on $\{\lambda_i\}$ \cite{Whelan2017} in the Bradley-Terry model, and likewise for the tie/overtime parameter $\tau$. For simplicity, we work in this paper with the improper Haldane prior\footnote{So named because the marginal prior distribution for probabilities such as $\theta_{ij}$ will follow the Haldane prior \cite{Haldane1932,Jeffreys1939}, which is the limit of a $\text{Beta}(\alpha,\beta)$ distribution as $\alpha,\beta\rightarrow 0$.} \begin{equation} f(\{\lambda_i\},\tau|I_0) = \text{constant} \end{equation} which means that the posterior distribution is proportional to the likelihood: \begin{equation} f(\{\lambda_i\},\tau|D,I_0) \propto P(D|\{\lambda_i\},\tau) \ . \end{equation} With this choice of prior, the posterior probability density will be independent of the combination $\sum_{i=1}^{t}\lambda_i$, but otherwise will be normalizable under the same circumstances that lead to well-defined maximum likelihood estimates for the parameters. \subsubsection{Gaussian Approximation} One convenient approach is to Taylor expand the log-posterior $\lnf(\{\lambda_i\},\tau|D,I)$ about the maximum a posteriori solution (which in this case is the maximum likelihood solution $\{\ML{\lambda}_i\},\ML{\tau}$).\footnote{Note that this method does not assign special significance to the MAP estimates, but uses them as the starting point for a convenient approximation to the posterior probability distribution.} Truncating the expansion at second order gives a Gaussian approximation \begin{multline} f(\{\lambda_i\},\tau|D,I_0) \approx f(\{\ML{\lambda}_i\},\tau|D,I_0) \exp\left( -\frac{1}{2}\sum_{i=1}^{t}\sum_{j=1}^{t} H_{ij} \left(\lambda_i-\ML{\lambda}_i\right)\left(\lambda_j-\ML{\lambda}_j\right) \right. \\ \left. -\sum_{i=1}^{t} H_{i\tau} \left(\lambda_i-\ML{\lambda}_i\right)\left(\tau-\ML{\tau}\right) -\frac{1}{2} H_{\tau\tau} \left(\tau-\ML{\tau}\right)^2 \right) \ , \end{multline} where $\mathbf{H}$ is the $(t+1)\times(t+1)$ Hessian matrix \begin{equation} \mathbf{H} = \begin{pmatrix} \{H_{ij}\} & \{H_{i\tau}\} \\ \{H_{\tau j}\} & H_{\tau\tau} \\ \end{pmatrix} \end{equation} with elements\footnote{Note the similarity to the Fisher information matrix $I_{ij}(\{\lambda_k\})=\sum_D P(D|\{\lambda_k\},I)\frac{\partial^2}{\partial\lambda_i\partial\lambda_j} [-\ln P(D|\{\lambda_k\},I)]$, which differs from the Hessian in that that $H_{ij}$ depends on the observed data, while $I_{ij}$ is a function defined on parameter space.} \begin{subequations} \begin{gather} H_{ij} = -\left[ \frac{\partial^2}{\partial\lambda_i\partial\lambda_j} \ln P(D|\{\lambda_k\},\tau) \right]_{\{\lambda_k=\ML{\lambda}_k\},\tau=\ML{\tau}} \\ H_{i\tau} = H_{\tau i} = -\left[ \frac{\partial^2}{\partial\lambda_i\partial\tau} \ln P(D|\{\lambda_k\},\tau) \right]_{\{\lambda_k=\ML{\lambda}_k\},\tau=\ML{\tau}} \\ H_{\tau\tau} = -\left[ \frac{\partial^2}{\partial\tau^2} \ln P(D|\{\lambda_k\},\tau) \right]_{\{\lambda_k=\ML{\lambda}_k\},\tau=\ML{\tau}} \ . \end{gather} \end{subequations} To compute the elements of the Hessian matrix, we return to the first derivative \eqref{e:dlnPdtau} and differentiate them to get \begin{equation} - \frac{\partial^2\lnP(D|\{\lambda_i\},\tau)}{\partial\tau^2} = \frac{1}{2}\sum_{i=1}^{t}\sum_{j=1}^{t}n_{ij}\sum_I o_I \frac{\partial \theta^{I}_{ij}}{\partial\tau} = \frac{1}{2}\sum_{i=1}^{t}\sum_{j=1}^{t}n_{ij} \theta^o_{ij}(1-\theta^o_{ij}) \ , \end{equation} where \begin{equation} \theta^o_{ij} = \sum_I o_I \theta^{I}_{ij} \end{equation} is the probability of a tie or overtime game, depending on the model, and we have used the fact that $o_I^2=o_I$ since $o_I\in\{0,1\}$. Similarly, using the properties $\sum_I\theta^{I}_{ij}=1$, $\theta^{-I}_{ij}=\theta^{I}_{ji}$, $p_{-I}=1-p_{I}$, and $o_{-I}=o_{I}$, we find \begin{equation} - \frac{\partial^2\lnP(D|\{\lambda_i\},\tau)}{\partial\tau\partial\lambda_k} = \sum_{i=1}^{t}n_{ki} \sum_I o_I \theta^{I}_{ki} \left( p_I - \sum_J p_J\theta^{J}_{ki} \right) \end{equation} and, finally, differentiating \eqref{e:dlnPdlambda} gives us \begin{multline} - \frac{\partial^2\lnP(D|\{\lambda_i\},\tau)} {\partial\lambda_k\partial\lambda_\ell} = \delta_{k\ell} \sum_{i=1}^{t} n_{ki} \sum_I p_I\theta^{I}_{ki} \left(p_I-\sum_J p_J\theta^{J}_{ki}\right) \\ - n_{k\ell} \sum_I p_I\theta^{I}_{k\ell} \left(p_I-\sum_J p_J\theta^{J}_{k\ell}\right) \end{multline} so that the Hessian matrix has components \begin{subequations} \label{e:Hessian} \begin{gather} H_{\tau\tau} = \frac{1}{2}\sum_{i=1}^{t}\sum_{j=1}^{t}n_{ij} \sum_I o_I\ML{\theta}^{I}_{ij}(1-\sum_Jo_J\ML{\theta}^{J}_{ij}) \\ H_{\tau k} = H_{k\tau} = \sum_{i=1}^{t}n_{ki} \sum_I o_I \ML{\theta}^{I}_{ki} \left( p_I - \sum_J p_J\ML{\theta}^{J}_{ki} \right) \\ H_{k\ell} = \delta_{k\ell} \sum_{i=1}^{t} n_{ki} \sum_I p_I\ML{\theta}^{I}_{ki} \left(p_I-\sum_J p_J\ML{\theta}^{J}_{ki}\right) - n_{k\ell} \sum_I p_I\ML{\theta}^{I}_{k\ell} \left(p_I-\sum_J p_J\ML{\theta}^{J}_{k\ell}\right) \ . \end{gather} \end{subequations} Note that in the case of the Bradley-Terry model, where the only outcomes are win and loss, the condition $o^I=0$ simplifies the Hessian to $H_{\tau\tau}=H_{\tau k}=0$ (since the $\tau$ parameter is not actually part of the likelihood), and \begin{equation} H_{k\ell} = \delta_{k\ell} \sum_{i=1}^{t} n_{ki}\ML{\theta}^{\text{W}}_{ki} \left(1-\ML{\theta}^{\text{W}}_{ki}\right) - n_{k\ell}\ML{\theta}^{\text{W}}_{k\ell} \left(1-\ML{\theta}^{\text{W}}_{k\ell}\right) \end{equation} which is the form seen in, e.g., \cite{Whelan2019}. The Hessian matrix in \eqref{e:Hessian} is singular, since $\sum_{\ell=1}^{t} H_{\tau\ell}=0$ and $\sum_{\ell=1}^{t} H_{k\ell}=0$, which ultimately arise from the fact that the probabilities $\{\theta^{I}_{ij}\}$, and thus the likelihood, are unchanged by adding the same constant to all the $\{\lambda_i\}$. This can be handled computationally by constructing a variance-covariance matrix $\boldsymbol{\Sigma}=\mathbf{H}^+$ which is the Moore-Penrose pseudo-inverse\cite{penrose_1955}\footnote{For a real symmetric matrix with a complete eigenvalue decomposition, this operation replaces each non-zero eigenvalue with its reciprocal while leaving zero eigenvalues unchanged.} of the Hessian matrix, and approximating the posterior as a multivariate Gaussian with a mean of $\{\ML{\lambda}_i\},\ML{\tau}$ and a variance-covariance matrix $\boldsymbol{\Sigma}$. This has the effect of enforcing the constraint \begin{gather} \sum_{i=1}^{t} \lambda_i = \sum_{i=1}^{t} \ML{\lambda}_i = 0 \end{gather} on the combination of the parameters which has no influence on the model. This Gaussian approximation can be used to produce analytic estimates of quantities of interest, or used for Monte Carlo sampling, as illustrated in \sref{s:demo}. It can also be used as a starting point for importance sampling of the sort discussed in \cite{Whelan2019}. For the present work, we consider a different Monte Carlo method for sampling from the exact posterior. \subsubsection{Hamiltonian Monte Carlo} Markov-chain Monte Carlo methods provide a convenient way to draw samples from a posterior distribution. We demonstrate in this paper how to draw posterior samples for the Bradley-Terry extensions considered, using Hamiltonian Monte Carlo as implemented in the Stan library.\cite{Stan} There are a few technical considerations. Because the posterior on $\{\lambda_i\}$ and $\tau$ is improper, trying to draw from it directly will lead to chains which never converge. Any probabilities constructed from the samples will be well-behaved, since only the meaningless degree of freedom $\sum_{i=1}^{t} \lambda_i$ is unconstrained, but these apparent errors make it more difficult to detect other potential problems. It is thus useful instead to consider only variables $\gamma_{ij}=\lambda_i-\lambda_j$ (and $\tau$) which contribute to the probability model via (see \eqref{e:thetaImodel}) \begin{equation} \theta^{I}_{ij} = {\boldsymbol{\sigma}}(\{p_J\gamma_{ij}+o_J\tau|J\})_I \ . \end{equation} Of course, the full set of $\frac{t(t-1)}{2}$ values $\gamma_{ij}$ are not independent. Instead, they are determined by the $t-1$ parameters $\omega_i=\lambda_i-\lambda_{i+1}$ for $i=1,\ldots,t-1$. Given the $\{\omega_i\}$ we can construct $\gamma_{ij} = \sum_{k=i}^{j-1} \omega_k$. In \aref{s:stanmodel} we show the code of the Stan model used to perform Hamiltonian Monte Carlo simulations of all three models. \begin{table}[t!] \centering \caption{Results of the 2020-2021 Eastern College Athletic Conference (ECAC) season, showing the number of regulation wins $n^{\text{RW}}_{ij}$ and overtime/shootout wins $n^{\text{OW}}_{ij}$ for each team against each opponent. From these we can derive the total number of results of each type (RW, OW, OL and RL) for each team, which are used, for example, to generate the standings in the 3-2-1-0 point system} \begin{tabular}{l| cccc |cccc} & \multicolumn{4}{|c|}{$n^{\text{RW}}_{ij}$ ($n^{\text{OW}}_{ij}$)}& \multicolumn{4}{|c}{$n^{I}_{i}=\sum_{j=1}^{t}n^{I}_{ij}$} \\ Team $i$ & Cg & Ck & Qn & SL& RW & OW & OL & RL \\ \hline Colgate (Cg) & --- & 1(1) & 1(0) & 2(1) & 4 & 2 & 3 & 9 \\ Clarkson (Ck) & 3(1) & --- & 1(2) & 1(0) & 5 & 3 & 4 & 2 \\ Quinnipiac (Qn) & 4(1) & 1(2) & --- & 4(1) & 9 & 4 & 2 & 3 \\ St.~Lawrence (SL) & 2(1) & 0(1) & 1(0) & --- & 3 & 2 & 2 & 7 \\ \end{tabular} \label{t:ECdata} \end{table} \section{Demonstration Using Game Results} \label{s:demo} We now illustrate the application of the models described in this paper using game results from a competition which used the 3-2-1-0 point system: the 2020-2021 Eastern College Athletic Conference (ECAC) season. While the league ordinarily plays a balanced round-robin schedule in which each team plays each other team the same number of times, the season in question ended up being unbalanced due to cancellations of games arising from the COVID-19 pandemic. In \tref{t:ECdata} we show the results for the ECAC season, in the form of $n^{\text{RW}}_{ij}$ and $n^{\text{OW}}_{ij}$ for each team against each opponent, along with the total number of results of each type for each team, $n^I_{i}=\sum_{j=1}^{t}n^I_{ij}$. \subsection{ECAC: Standard Bradley-Terry Model} \label{s:ECACBT} As a first demonstration, we consider the standard Bradley-Terry model applied to the ECAC results with regulation and overtime/shootout wins being counted as simply ``wins'' and regulation and overtime/shootout losses being counted as ``losses''. I.e., we define $n^{\text{W}}_{ij}=n^{\text{RW}}_{ij}+n^{\text{OW}}_{ij}$ and $n^{\text{L}}_{ij}=n^{\text{RL}}_{ij}+n^{\text{OL}}_{ij}$. The resulting maximum-likelihood solutions $\{\ML{\lambda}_i\}$ and associated probabilities $\{\ML{\theta}^{\text{W}}_{ij}\}$ are shown in \tref{t:ECBTGauss}, along with the uncertainties and correlations encoded in the variance-covariance matrix $\{\Sigma_{ij}\}$ of the Gaussian approximation to the posterior distribution. \begin{table}[t!] \centering \caption{The maximum likelihood estimates and parameters of the the Gaussian approximation to the posterior distribution for the Bradley-Terry model applied to the 2020-2021 ECAC results, with regulation and overtime/shootout results counted the same. The maximum likelihood estimate $\ML{\lambda}_i$ for each team's log-strength has an associated one-sigma uncertainty $\sqrt{\Sigma_{ii}}$. The variance-covariance matrix $\{\Sigma_{ij}\}$ can be converted to a correlation matrix $\rho_{ij}=\Sigma_{ij}/\sqrt{\Sigma_{ii}\Sigma_{jj}}$. Note that the information included in $\Sigma_{ij}$ is also influenced by the constraint $\sum_{i=1}^{t}\lambda_i=0$, so for example the anti-correlation of the different log-strengths is somewhat artificial. We also show the maximum-likelihood estimates $\{\ML{\theta}^{\text{W}}_{ij}\}$ for the head-to-head win probabilities between pairs of teams} \begin{tabular}{l|cc|cccc|cccc} & & & \multicolumn{4}{|c}{$\ML{\theta}^{\text{W}}_{ij}$}& \multicolumn{4}{|c}{$\rho_{ij}=\Sigma_{ij}/\sqrt{\Sigma_{ii}\Sigma_{jj}}$} \\ Team & $\ML{\lambda_i}$ & $\sqrt{\Sigma_{ii}}$ & Cg & Ck & Qn & SL & Cg & Ck & Qn & SL \\ \hline Cg & $-0.55$ & $0.39$ & --- & $0.29$ & $0.22$ & $0.49$ & $1.00$ & $-0.31$ & $-0.39$ & $-0.21$ \\ Ck & $0.32$ & $0.43$ & $0.71$ & --- & $0.40$ & $0.70$ & $-0.31$ & $1.00$ & $-0.22$ & $-0.50$ \\ Qn & $0.74$ & $0.40$ & $0.78$ & $0.60$ & --- & $0.78$ & $-0.39$ & $-0.22$ & $1.00$ & $-0.35$ \\ SL & $-0.51$ & $0.45$ & $0.51$ & $0.30$ & $0.22$ & --- & $-0.21$ & $-0.50$ & $-0.35$ & $1.00$ \\ \end{tabular} \label{t:ECBTGauss} \end{table} \begin{figure}[t!] \centering \includegraphics[width=0.45\textwidth]{ECBTQnCg.pdf} \includegraphics[width=0.45\textwidth]{ECBTQnCk.pdf} \caption{Posterior probability density for difference in log-strengths $\gamma_{ij}=\lambda_i-\lambda_j$ between selected pairs of teams (left: Quinnipiac and Colgate; right: Quinnipiac and Clarkson), based on 2020-2021 ECAC game results in the standard Bradley-Terry model with regulation and overtime/shootout wins treated the same. The dotted red vertical line shows the maximum likelihod estimate $\ML{\gamma}_{ij}$. Since the Haldane prior used is uniform in the $\{\lambda_i\}$, this is also the maximum a posteriori (MAP) value. The curves show the approximate Gaussian posterior from expanding about the MAP value (solid blue line), along with density estimates from a set of Monte Carlo samples drawn from that distribution (dashed brown line), and a set of samples drawn from the exact distribution using Hamiltonian Monte Carlo (dot-dash black line). Differences between the Gaussian approximation and the samples from the exact posterior are small, but can be noticeable, especially if the maximum likelihood estimate $\ML{\gamma}_{ij}$ is far from zero. For reference, note that the ``Gaussian approx'' and ``Gaussian MC'' curves should only differ due to Monte Carlo errors in the construction of the latter} \label{f:ECBTQnCg} \end{figure} \begin{figure}[t!] \centering \includegraphics[width=0.45\textwidth]{ECBTH2HQnCg.pdf} \includegraphics[width=0.45\textwidth]{ECBTH2HQnCk.pdf} \caption{Posterior probability density for the win probability $\theta^{\text{W}}_{ij}=\logistic(\gamma_{ij})$ predicted by the Bradley-Terry model for selected pairs of teams, as in \fref{f:ECBTQnCg}. Note that, due to the transformation of the probability density, the maximum of the probability density in this parameter is not the maximum likelihood value as it was in \fref{f:ECBTQnCg}} \label{f:ECBTH2HQnCg} \end{figure} Since the log-strengths $\{\lambda_i\}$ have an arbitrary additive scale, a more meaningful understanding of the posterior distributions is obtained by considering the marginal distribution of the difference of a pair of team strengths $\gamma_{ij}=\lambda_i-\lambda_j$. In \fref{f:ECBTQnCg}, we illustrate the maximum likelihood estimate and posterior distribution of this quantity for two of the six pairs of teams: Quinnipiac-Colgate and Quinnipiac-Clarkson. We show the posterior in Gaussian approximation (for which the marginal posterior on $\gamma_{ij}$ is also a Gaussian), in a Monte Carlo drawn from the approximate multivariate Gaussian distribution, and in posterior samples from the exact posterior generated using Hamiltonian Monte Carlo with the Stan library.\cite{Stan} We can transform the posterior on a difference $\gamma_{ij}$ in log-strength into a posterior on the corresponding probability $\theta^{\text{W}}_{ij}=\logistic(\gamma_{ij})$; this is shown in \fref{f:ECBTH2HQnCg} for the two sets of posterior samples. In all cases, the exact marginal posterior, as estimated by the Hamiltonian Monte Carlo is only slightly different from the Gaussian approximation. This is similar to results found using importance sampling in \cite{Whelan2019}. \begin{table}[t!] \centering \caption{The maximum likelihood estimates for the Bradley-Terry-Davidson model applied to the 2020-2021 ECAC results, with all overtime games counted as ties. The maximum likelihood estimates $\{\ML{\lambda_i}\}$ and $\ML{\tau}$ of the log-strengths and log tie parameter are used to compute the estimated probability $\ML{\theta}^{\text{W}}_{ij}$ for a win and $\ML{\theta}^{\text{T}}_{ij}$ for a tie between each pair of teams. Note that the estimated probability of a game between evenly-matched teams to end in a tie is $\frac{e^{\ML{\tau}}}{2+e^{\ML{\tau}}}={0.39}$, and it is lower the more different the two teams' strengths are} \begin{tabular}{l|c|cccc} & & \multicolumn{4}{|c}{$\ML{\theta}^{\text{W}}_{ij}$ ($\ML{\theta}^{\text{T}}_{ij}$)} \\ Team $i$ & $\ML{\lambda_i}$ & Cg & Ck & Qn & SL \\ \hline Cg & $-0.73$ & --- & $0.13$ ($0.33$) & $0.11$ ($0.32$) & $0.33$ ($0.38$) \\ Ck & $0.70$ & $0.54$ ($0.33$) & --- & $0.28$ ($0.38$) & $0.56$ ($0.32$) \\ Qn & $0.89$ & $0.57$ ($0.32$) & $0.34$ ($0.38$) & --- & $0.59$ ($0.31$) \\ SL & $-0.85$ & $0.29$ ($0.38$) & $0.12$ ($0.32$) & $0.10$ ($0.31$) & --- \\ \hline \multicolumn{6}{c}{$\ML{\tau}=0.23$}\end{tabular} \label{t:WLTMLE} \end{table} \begin{table}[t!] \centering \caption{The parameters of the the Gaussian approximation to the posterior distribution for the Bradley-Terry-Davidson model applied to the 2020-2021 ECAC results, with all overtime games counted as ties. In addition to the log-strength parameters considered for the Bradley-Terry model in \tref{t:ECBTGauss}, there are uncertainties and correlations associated with the log-tie parameter $\tau$} \begin{tabular}{l|cc|ccccc} & & & \multicolumn{4}{|c}{$\rho_{ij}=\Sigma_{ij}/\sqrt{\Sigma_{ii}\Sigma_{jj}}$} \\ Team $i$ & $\ML{\lambda}_i$ & $\sqrt{\Sigma_{ii}}$ & Cg & Ck & Qn & SL & $\tau$ \\ \hline Cg & $-0.73$ & $0.50$ & $1.00$ & $-0.35$ & $-0.40$ & $-0.16$ & $-0.22$ \\ Ck & $0.70$ & $0.57$ & $-0.35$ & $1.00$ & $-0.16$ & $-0.53$ & $0.19$ \\ Qn & $0.89$ & $0.51$ & $-0.40$ & $-0.16$ & $1.00$ & $-0.38$ & $0.26$ \\ SL & $-0.85$ & $0.58$ & $-0.16$ & $-0.53$ & $-0.38$ & $1.00$ & $-0.22$ \\ $\tau$ & $0.23$ & $0.40$ & $-0.22$ & $0.19$ & $0.26$ & $-0.22$ & $1.00$ \\ \end{tabular} \label{t:WLTGauss} \end{table} \subsection{ECAC: Bradley-Terry-Davidson Model with Ties} \label{s:ECACWLT} Moving on to the Bradley-Terry-Davidson model with ties, we now consider inference of the log-strength parameters $\{\lambda_i\}$ along with the log-tie parameter $\tau$. We illustrate the methods by reanalyzing the 2020-2021 ECAC results, with all overtime games treated as ties, so that now $n^{\text{W}}_{ij}=n^{\text{RW}}_{ij}$, $n^{\text{T}}_{ij}=n^{\text{OW}}_{ij}+n^{\text{OL}}_{ij}$, and $n^{\text{L}}_{ij}=n^{\text{RL}}_{ij}$. The maximum likelihood solutions $\{\ML{\lambda}_i\}$ and $\ML{\tau}$ are shown in \tref{t:WLTMLE}, along with the associated probabilities $\{\ML{\theta}^{\text{W}}_{ij}\}$ for a win and $\{\ML{\theta}^{\text{T}}_{ij}\}$ for a tie in contests between pairs of teams. In \tref{t:WLTGauss}, we show the maxumum-likelihood estimates along with the uncertainties in and correlations among the log-strengths $\{\lambda_i\}$ and the log-tie parameter $\tau$, which are encoded in the variance-covariance matrix $\{\Sigma_{ij},\Sigma_{i\tau},\Sigma_{\tau\tau}\}$ of the Gaussian approximation to the posterior distribution. As with the standard Bradley-Terry model, we can show the marginal posterior distributions on the differences $\{\gamma_{ij}=\lambda_i-\lambda_j\}$ between pairs of log-strength parameters, and we do this in \fref{f:ECWLTQnCg} for the same pairs of teams as before. Once again, samples drawn from the multivariate Gaussian approximation capture the shape of that distribution well, and samples drawn from the exact posterior using Hamiltonian Monte Carlo are slightly different but similar. We cannot convert $\gamma_{ij}$ directly into a probability, however, since probabilities depend on the log-tie parameter $\tau$ as well. In \fref{f:ECWLTtau} we plot the marginal posterior on $\tau$. The parameter $\tau$ can be transformed into a probability $\frac{\nu}{2+\nu}$ where ($\nu=e^{\tau}$) for a game between evenly-matched teams to be tied, and we plot the posterior for this as well. Finally, in \fref{f:ECWLTscat} we illustrate the joint marginal posterior in $\gamma_{ij}$ and $\tau$ for our selected pairs of teams. \begin{figure}[t!] \centering \includegraphics[width=0.45\textwidth]{ECWLTQnCg.pdf} \includegraphics[width=0.45\textwidth]{ECWLTQnCk.pdf} \caption{Posterior probability density for difference in log-strengths $\lambda_i-\lambda_j$ between selected pairs of teams (left: Quinnipiac and Colgate; right: Quinnipiac and Clarkson), based on the Bradley-Terry-Davidson model applied to the 2020-2021 ECAC results, with all overtime games counted as ties. Curves are as defined in \fref{f:ECBTQnCg}} \label{f:ECWLTQnCg} \end{figure} \begin{figure}[t!] \centering \includegraphics[width=0.45\textwidth]{ECWLTtau.pdf} \includegraphics[width=0.45\textwidth]{ECWLTtieprob.pdf} \caption{Posterior probability density for the log-tie parameter $\tau$ (left) and the associated probability $\frac{e^{\tau}}{2+e^{\tau}}$ (right) of a tie game between evenly matched teams, in the Bradley-Terry-Davidson model applied to the 2020-2021 ECAC results, with all overtime games counted as ties. As in \fref{f:ECBTQnCg} and \fref{f:ECWLTQnCg}, the dashed vertical line is the maximum-likelihood estimate, the solid blue line is a Gaussian approximation to the posterior but expanding about the MAP point $\ML{\tau}$, and the dashed brown and dot-dash black lines are densty estimates, respectively constructed from a Monte Carlo sample from the approximate Gaussian distribution and from the exact distribution using Hamiltonian Monte Carlo. Note that while the MLE is the maximum of the marginal posterior on $\tau$, the transformation of the posterior probability density means $\frac{e^{\ML{\tau}}}{2+e^{\ML{\tau}}}$ is not the maximum of the posterior on $\frac{e^{\tau}}{2+e^{\tau}}$} \label{f:ECWLTtau} \end{figure} \begin{figure}[t!] \centering \includegraphics[width=0.45\textwidth]{ECWLTscatQnCg.pdf} \includegraphics[width=0.45\textwidth]{ECWLTscatQnCk.pdf} \caption{Contours of the joint posterior probability density of the log-strength differences $\gamma_{ij}=\lambda_i-\lambda_j$ shown in \fref{f:ECWLTQnCg} and the log-tie parameter $\tau$ shown in the left panel of \fref{f:ECWLTtau}. The red circle is the MLE $\ML{\gamma}_{ij},\ML{\tau}$. The solid blue curve are contours of the Gaussian approximation, and the dashed brown curves are density contours of a Monte Carlo sample drawn from that approximate distribution. The dot-dashed black curves are density contours of a sample from the exact distribution drawn using Hamiltonian Monte Carlo. As with the Bradley-Terry model, the exact and approximate posteriors are comparable, but differences are detectable beyond the level of the Monte Carlo uncertainties illustrated by the difference between the ``Gaussian approx'' and ``Gaussian MC'' contours} \label{f:ECWLTscat} \end{figure} To illustrate the posterior on the probabilities $\{\theta^{I}_{ij}|I=\text{W},\text{T},\text{L}\}$ for a pair of teams, we note that the constraint $\sum_I\theta^{I}_{ij}=1$ means that the space is actually two dimensional. The natural visualization for the behavior of three quantities which sum to one is a ternary plot, and we contour plot density estimates of the posterior and its Gaussian approximation in \fref{f:ECTern}, along with the maximum likelihood estimates $\{\ML{\theta}^{I}_{ij}\}$. \begin{figure}[t!] \centering \includegraphics[width=0.45\textwidth]{ECWLTTernCgQn-crop.pdf} \includegraphics[width=0.45\textwidth]{ECWLTTernCkQn-crop.pdf} \caption{Ternary plots illustrating the joint posterior on $\theta^{\text{W}}_{ij}$, $\theta^{\text{T}}_{ij}$, and $\theta^{\text{L}}_{ij}$, based on the Bradley-Terry-Davidson model applied to the 2020-2021 ECAC results, with all overtime games counted as ties. The horizontal gridlines correspond to lines of constant $\theta^{\text{T}}_{ij}$, with $\theta^{\text{T}}_{ij}=1$ labelled as ``Tie''; the diagonal gridlines correspond to lines of constant $\theta^{\text{W}}_{ij}$ or $\theta^{\text{L}}_{ij}$, with $\theta^{\text{W}}_{ij}=1$ labelled with the abbreviation for team $i$ (``Qn'' for Quinnipiac in both cases) and $\theta^{\text{L}}_{ij}=1$ labelled with the abbreviation for team $j$ (``Cg'' for Colgate and ``Ck'' for Clarkson). The red triangle is the maximum likelihood point $\ML{\theta}^{I}_{ij}$. Note that for a given set of game results, the maximum likelihood point for all pairs of teams will lie along a one-dimensional curve in the Ternary plot. since, for a fixed $\ML{\tau}$, the maximum-likelihood probabilities are functions of the single value $\ML{\gamma}_{ij}$. The three sets of contours are as defined in \fref{f:ECWLTscat}. Note that the MLE is no longer the maximum of the posterior probability density after tranforming parameters from $\gamma_{ij},\tau$ to $\theta^{\text{W}}_{ij}$, $\theta^{\text{T}}_{ij}$, and $\theta^{\text{L}}_{ij}=1-\theta^{\text{W}}_{ij}-\theta^{\text{T}}_{ij}$} \label{f:ECTern} \end{figure} \subsection{ECAC: Bradley-Terry-like Model with Overtime/Shootout Results} \label{s:ECAC} Having developed the mechanisms to characterize the posterior distribution for the Bradley-Terry-Davidson model with three outcomes (win, tie, and loss), we apply similar analogues for the model with four outcomes: regulation win (RW), overtime/shootout win (OW), overtime/shootout loss (OL), and regulation loss (RL), now applied to the full 2020-2021 ECAC results shown in \tref{t:ECdata}. As before, there is a log-strength parameter $\lambda_i$ for each team, and $\tau$ is now the log of a parameter associated with overtime results. We show the maximum likelihood estimates in \tref{t:ECMLE} along with the probabilities $\{\ML{\theta}^{\text{RW}}_{ij}\}$ for a regulation win $\{\ML{\theta}^{\text{OW}}_{ij}\}$ for an overtime/shootout win in contests between pairs of teams. In \tref{t:ECGauss} we show the parameters of the Gaussian approximation to the posterior. \begin{table}[t!] \centering \caption{The maximum likelihood estimates for a Bradley-Terry-like model with four game outcomes applied to the 2020-2021 ECAC results. The maximum likelihood estimates $\{\ML{\lambda_i}\}$ and $\ML{\tau}$ of the log-strengths and log overtime parameter are used to compute the estimated probability $\ML{\theta}^{\text{RW}}_{ij}$ for a regulation win and $\ML{\theta}^{\text{OW}}_{ij}$ for an overtime/shootout win between each pair of teams. Note that the estimated probability of a game between evenly-matched teams to got to overtime is $\frac{e^{\ML{\tau}}}{1+e^{\ML{\tau}}}={0.38}$, and it is lower the more different the two teams' strengths are.} \begin{tabular}{l|c|cccc} & & \multicolumn{4}{|c}{$\ML{\theta}^{\text{RW}}_{ij}$ ($\ML{\theta}^{\text{OW}}_{ij}$)} \\ Team $i$ & $\ML{\lambda_i}$ & Cg & Ck & Qn & SL \\ \hline Cg & $-0.74$ & --- & $0.14$ ($0.13$) & $0.11$ ($0.12$) & $0.32$ ($0.19$) \\ Ck & $0.60$ & $0.53$ ($0.21$) & --- & $0.26$ ($0.18$) & $0.53$ ($0.20$) \\ Qn & $0.93$ & $0.57$ ($0.20$) & $0.36$ ($0.20$) & --- & $0.58$ ($0.20$) \\ SL & $-0.79$ & $0.30$ ($0.19$) & $0.13$ ($0.13$) & $0.10$ ($0.11$) & --- \\ \hline \multicolumn{6}{c}{$\ML{\tau}=-0.49$}\end{tabular} \label{t:ECMLE} \end{table} \begin{table}[t!] \centering \caption{The parameters of the the Gaussian approximation to the posterior distribution for the Bradley-Terry-like model with four game outcomes applied to the 2020-2021 ECAC results. In addition to the log-strength parameters $\{\lambda_i\}$, there are uncertainties and correlations associated with the log-overtime parameter $\tau$.} \begin{tabular}{l|cc|ccccc} & & & \multicolumn{4}{|c}{$\rho_{ij}=\Sigma_{ij}/\sqrt{\Sigma_{ii}\Sigma_{jj}}$} \\ Team $i$ & $\ML{\lambda}_i$ & $\sqrt{\Sigma_{ii}}$ & Cg & Ck & Qn & SL & $\tau$ \\ \hline Cg & $-0.74$ & $0.48$ & $1.00$ & $-0.34$ & $-0.41$ & $-0.17$ & $-0.19$ \\ Ck & $0.60$ & $0.54$ & $-0.34$ & $1.00$ & $-0.17$ & $-0.52$ & $0.14$ \\ Qn & $0.93$ & $0.50$ & $-0.41$ & $-0.17$ & $1.00$ & $-0.38$ & $0.23$ \\ SL & $-0.79$ & $0.56$ & $-0.17$ & $-0.52$ & $-0.38$ & $1.00$ & $-0.18$ \\ $\tau$ & $-0.49$ & $0.39$ & $-0.19$ & $0.14$ & $0.23$ & $-0.18$ & $1.00$ \\ \end{tabular} \label{t:ECGauss} \end{table} As in the Bradley-Terry-Davidson model, we can plot the marginal parameters for the differences $\{\gamma_{ij}=\lambda_i-\lambda_j\}$ between pairs of log-strength parameters (\fref{f:ECQnCg}), the log-overtime parameter $\tau$ or equivalently the probability $\frac{\nu}{1+\nu}$ where ($\nu=e^{\tau}$) for a game between evenly-matched teams to go to overtime (\fref{f:ECtau}), and the joint marginal posterior in $\gamma_{ij}$ and $\tau$ for our selected pairs of teams (\fref{f:ECscat}). \begin{figure}[t!] \centering \includegraphics[width=0.45\textwidth]{ECQnCg.pdf} \includegraphics[width=0.45\textwidth]{ECQnCk.pdf} \caption{Posterior probability density for difference in log-strengths $\lambda_i-\lambda_j$ between selected pairs of teams (left: Quinnipiac and Colgate; right: Quinnipiac and Clarkson), based on the Bradley-Terry-like model with four game outcomes applied to the 2020-2021 ECAC results. Curves are as defined in \fref{f:ECBTQnCg}.} \label{f:ECQnCg} \end{figure} \begin{figure}[t!] \centering \includegraphics[width=0.45\textwidth]{ECtau.pdf} \includegraphics[width=0.45\textwidth]{ECOTprob.pdf} \caption{Posterior probability density for the log-overtime parameter $\tau$ (left) and the associated probability $\frac{e^{\tau}}{1+e^{\tau}}$ (right) of an overtime game between evenly matched teams, in the Bradley-Terry-like model with four game outcomes applied to the 2020-2021 ECAC results. Curves are as defined in \fref{f:ECWLTtau}. Note that the posterior on the overtime probability is very similar to that for the tie probability in the right panel of \fref{f:ECWLTtau}. This is not surprising since the two calculations are based on different interpretations of the same set of game results, and the ``ties'' used to generate \fref{f:ECWLTtau} are just the overtime games in the current computation. The estimates on $\tau$ appear different in the two models, but that is mostly because $\nu=e^{\tau}$ is a measure of the probability of each type of overtime result compared to each type of regulation result, and there are two overtime results in this model and only one in the Bradley-Terry-Davidson model with ties} \label{f:ECtau} \end{figure} \begin{figure}[t!] \centering \includegraphics[width=0.45\textwidth]{ECscatQnCg.pdf} \includegraphics[width=0.45\textwidth]{ECscatQnCk.pdf} \caption{Samples from the joint posterior probability density of the log-strength differences $\gamma_{ij}=\lambda_i-\lambda_j$ shown in \fref{f:ECQnCg} and the log-overtime parameter shown in the left panel of \fref{f:ECtau}. Contours are as defined in \fref{f:ECWLTscat}.} \label{f:ECscat} \end{figure} The posterior distribution on the probabilities $\{\theta^{I}_{ij}|I=\text{RW},\text{OW},\text{OL},\text{RW}\}$ is more difficult to visualize, because we have four probabilities which sum to 1, so the posterior can be thought of as defined on the interior of a tetrahedron, which is an example of an Aitchison simplex \cite{Aitchison1982}. However, since all four probabilities are determined by two parameters $\gamma_{ij}$ and $\tau$, they must lie on a (curved) two-dimensional subsurface of the simplex, defined by the constraint $\frac{\theta^{\text{OW}}_{ij}}{\theta^{\text{OL}}_{ij}} = \left(\frac{\theta^{\text{RW}}_{ij}}{\theta^{\text{RL}}_{ij}}\right)^{1/3}$ as well as $\theta^{\text{RW}}_{ij} + \theta^{\text{OW}}_{ij} + \theta^{\text{OL}}_{ij} + \theta^{\text{RL}}_{ij} = 1$. In \fref{f:ECthetascat} we illustrate one possibility for a two-dimensional plot of the marginal posterior on $\theta^{I}_{ij}$, by plotting posterior density contours in $\theta^{\text{W}}_{ij}=\theta^{\text{RW}}_{ij}+\theta^{\text{OW}}_{ij}$ (the probability of any sort of a win) and $\theta^{\text{O}}_{ij}=\theta^{\text{OW}}_{ij}+\theta^{\text{OL}}_{ij}$ (the probability of an overtime result). This has the conceptual advantage that each side of the square corresponds to an edge of the tetrahedrom, and each vertex of the square corresponds to a vertex of the tetrahedron, at which $\theta^{I}_{ij}=1$ for some result $I$. However, the conversion of a point $\theta^{\text{W}}_{ij},\theta^{\text{O}}_{ij}$ into $\theta^{I}_{ij}$ is nontrivial and cannot be written in closed form, so further investigation of methods of presenting the posterior is called for. \begin{figure}[t!] \centering \includegraphics[width=0.45\textwidth]{ECthetascatQnCg.pdf} \includegraphics[width=0.45\textwidth]{ECthetascatQnCk.pdf} \caption{Density contours from the joint posterior probability distribution of $\theta^{\text{W}}_{ij}=\theta^{\text{RW}}_{ij}+\theta^{\text{OW}}_{ij}$ $\theta^{\text{O}}_{ij}=\theta^{\text{OW}}_{ij}+\theta^{\text{OL}}_{ij}$, transformed from the joint probability on $\gamma_{ij}$ and $\tau$ shown in \fref{f:ECscat}. Each point on this plot can be converted into a set of probabilitues $\{\theta^{I}_{ij}|I\}$ using the relations $\frac{\theta^{\text{OW}}_{ij}}{\theta^{\text{OL}}_{ij}} = \left(\frac{\theta^{\text{RW}}_{ij}}{\theta^{\text{RL}}_{ij}}\right)^{1/3}$ and $\theta^{\text{RW}}_{ij} + \theta^{\text{OW}}_{ij} + \theta^{\text{OL}}_{ij} + \theta^{\text{RL}}_{ij} = 1$. The red square is the maximum-likelihood estimate $\ML{\theta}^{\text{W}}_{ij},\ML{\theta}^{\text{O}}_{ij}$. The dashed brown curves are density contours of a Monte Carlo sample drawn from the Gaussian approximation to the posterior distribution on $\{\lambda_i\}$ and $\tau$. The dot-dashed black curves are density contours of a sample from the exact distribution drawn using Hamiltonian Monte Carlo. As usual, while the MLE is the maximum a posteriori point in the parameters $\gamma_{ij}$ and $\tau$, it is not so in the parameters shown here due to the transformation of the posterior probability density.} \label{f:ECthetascat} \end{figure} \section{Discussion and Conclusions} \label{s:conclusions} We have defined a generalization of Davidson's extension to the Bradley-Terry outcome that handles the set of game outcomes currently distinguished in ice hockey: regulation wins, overtime/shootout wins, overtime/shootout losses, and regulation losses. We've explicitly computed maximum likelihood estimates, constructed a Gaussian approximation to the likelihood, and drawn posterior samples directly from the Gaussian approximation or from the exact posterior using the Hamiltonian Monte Carlo method implemented in Stan. For the data sets examined, the Gaussian approximation produced similar (but slightly different) results to the exact posterior. The differences in log-team strengths were qualitatively similar among the original Bradley-Terry model (\sref{s:ECACBT}), the Bradley-Terry-Davidson model with ties (\sref{s:ECACWLT}), and the new model including regulation and overtime/shootout results (\sref{s:ECAC}), when applied to the same set of results (albeit with overtime/shootout results interpreted differently). However, these computations are not meant to determine a ``best'' model, but to illustrate the capabilities of the algorithm. (By definition, we consider the appropriate model to be the one that corresponds to how the league actually assigns values to the results of games in the standings.) We now wish to discuss some limitations of the work to date, and possible approaches to address them: the use of an improper non-informative Haldane prior, the choice of the parameters $\{p_{I}\}$ in the probability model, and the application of the model to predict the outcomes of playoff games, which may not be played under the same conditions with overtime and shootouts. First, for simplicity, we worked with a non-informative Haldane prior which was uniform in the log-parameters $\{\lambda_i\}$ and $\tau$, so that the posterior probability distribution in those variables was proportional to the likelihood. There are a number of options for normalizable priors on the distribution of log-strengths $\{\lambda_i\}$ in the Bradley-Terry model (see \cite{Whelan2017} for a discussion), of which two promising options are a Gaussian prior or a generalized logistic prior \cite{Phelan2017,Whelan2019}, each of which has a hyperprior which can be fixed to previous seasons' data or estimated in a hierarchical model as in \cite{Phelan2017}. Similar options suggest themselves for the prior on the log-overtime parameter $\tau$, although the situation is somewhat different in that $\tau$ has a meaningful origin, so one has to consider a possible location parameter. In particular, it's not clear whether the most natural ``origin'' for $\nu=e^{\tau}$ is $1$, $2$, or something else. Second, we made something of an arbitrary choice by setting $p_{\text{OW}}=\frac{2}{3}$ and $p_{\text{OL}}=\frac{1}{3}$. In the Bradley-Terry-Davidson model with ties, the requirement that $p_{\text{T}}=p_{-\text{T}}=1-p_{\text{T}}$ means $p_{\text{T}}=\frac{1}{2}$ is the only option, as there is only one zero-point system in the three-outcome model. With four outcomes, however, $p_{\text{OW}}=\frac{2}{3}$ is a choice. This choice was of course informed by the point system used for the standings, so that the maximum likelihood equations would enforce that the expected number of points for each team equals its actual number. Other point systems are possible, however. In an earlier experiment with shootouts the Central Collegiate Hockey Association awarded 5 points for a win in regulation or overtime, 3 for a shootout win, 2 for a shootout loss, and 0 points for a loss in regulation or overtime, so analysis of that season might have used $p_{\text{SW}}=\frac{3}{5}$ and $p_{\text{SL}}=\frac{2}{5}$. Similarly, the NCAA, for tournament selection purposes, considers a win in 3-on-3 overtime worth $0.55$ of a win, and treats games decided in a shootout as a tie. Capturing this in a model would require two parameters in addition to the team-strengths: one for overtime games and one for ties, and would have parameters like $p_{\text{RW}}=1$, $p_{\text{OTW}}=0.55$, $p_{\text{SO}}=0.50$, $p_{\text{OTL}}=0.45$, and $p_{\text{RL}}=0$. One avenue for future investigation would be to define an extended model in which the unconstrained values of $\{p_{I}\}$ are treated as additional parameters to be estimated from the data. For instance, in the four-outcome model, $p_{\text{OW}}$ could be treated as a parameter with prior support on the interval $\frac{1}{2}<p_{\text{OW}}<1$. Finally, the model has assumed all games are played under the same conditions, with 3-on-3 overtimes and shootouts. However, in a number of hockey leagues, playoffs and other postseason games are played to conclusion with overtimes played under the same set of rules with a full squad on the ice, and shootouts are not possible, To produce probabilities for such a game, one would have to decide what probability to assign to a win or a loss. The natural model is probably to use $\theta^{POW}_{ij}=\frac{\pi_i}{\pi_i+\pi_j}$, i.e., the conditional probability of winning a game given that it's not decided in (3-on-3) overtime or a shootout. Likewise if any playoff games are included in the results used for inference, their contribution to the likelihood would need to be adjusted.
{ "attr-fineweb-edu": 2.302734, "attr-cc_en_topic": 0, "domain": "arxiv" }
BkiUfKM4uzliCsUqrD9t
\section{Introduction} \label{sec:introduction} Consider two passing plays during the game between the Los Angeles Rams and visiting Indianapolis Colts in the first week of the 2017 National Football League (NFL) season. The first passing play was a short pass in the first quarter from Colts quarterback Scott Tolzien intended for T.Y. Hilton which was intercepted by Trumaine Johnson and returned for a Rams touchdown. The second passing play was a long pass from Rams quarterback Jared Goff to Cooper Kupp, resulting in a Rams touchdown. In this work, we consider the question: which play had the better route(s)? From one perspective, one could argue that Kupp's route was better than Hilton's; after all it resulted in the offense scoring while the first play resulted in a turnover and a defensive score. However, ``resulting'', or evaluating a decision based only on its outcome is not always appropriate or productive. Two recent examples come to mind: Pete Carroll's decision to pass the ball from the 1 yard line in Super Bowl XLIX and the ``Philly Special'' in Super Bowl LII. Had the results of these two plays been reversed, Pete Carroll might have been celebrated and Doug Pederson criticized. If evaluating plays solely by their outcomes is inadequate, on what basis should we compare routes? One very attractive option is to use tracking data. In the NBA, tracking data has been available since 2013, allowing researchers to quantify actions with spatial statistics. For instance, the XY Research group has produced several papers outlining how to organize and evaluate possessions and player ability using tracking data. \citet{Cervone1} and \citet{Cervone2} introduced expected possession value (EPV), a framework for using player tracking data to estimate the expected number of points scored by the end of an offensive possession. \citet{Franks1} used tracking data to quantify player ability and \citet{MillerBornn} introduced Possessions Sketches, a machine learning method that decomposes player movement into a small number of interpretable actions. Outside of professional basketball, there has been comparatively little work done using tracking data in the public domain. Recently, and perhaps most similar to the method presented below, is \citet{Burke}, who develops a deep learning method to quantify quarterback decision making in the NFL. In this paper, we will focus on completion probability and will discuss how one might adapt the methodology developed in the sequel to other measures of play success in Section~\ref{sec:discussion}. Intuitively, we might tend to prefer routes which maximize the receiver's chance of catching the pass. To this end, if we let $y$ be a binary indicator of whether a pass was caught and let $\mathbf{x}$ be a collection of covariates summarizing information about the pass, we can consider a logistic regression model of completion probability: \begin{equation} \label{eq:completion_probability_model} \log{\left(\frac{\mathbb{P}(y = 1 | \mathbf{x})}{\mathbb{P}(y = 0 | \mathbf{x})}\right)} = f(\mathbf{x}), \end{equation} or equivalently $\mathbb{P}(y = 1 | \mathbf{x}) = \left[1 + \text{e}^{-f(\mathbf{x})}\right]^{-1},$ for some unknown function $f.$ If we knew the true function $f,$ assessing a route is easy: we simply plug in the relevant covariates $\mathbf{x}$ and compute the forecasted completion probability. Regardless of whether the receiver caught the actual pass, if this forecasted probability exceeded 50\%, we could conclude that the route was run and the pass was thrown in a way that gave the receiver a better chance than not of catching the pass. We could moreover directly compare the forecasted completion probability of the two plays mentioned above; if it turned out that the Tolzien interception had a higher completion probability than the Kupp touchdown, that play would not seem as bad, despite the much worse outcome. While such a comparison is a good first step towards evaluating routes, it is not completely satisfactory -- there are often multiple receivers running routes on a play and this comparison focuses only on a single player's chances of successfully catching a specific pass thrown to a single location along his route. The comparison does not, in particular, answer the very natural follow-up question: was there another location along a possible \emph{different} receiver's route where the completion probability was higher? If so, one could argue that the quarterback ought to have thrown to ball to that spot. At first glance, determining the completion probability at an arbitrary location along a different receiver's route seems impossible: even if we know the true function $f,$ we are essentially trying to deduce what might have happened in a \emph{counterfactual} world where the quarterback had thrown the ball to a different player at a different time, with the defense reacting differently. For the attempted passes that we actually observe, we are able to directly measure all possible information about the pass including, for instance, the receiver's speed and separation from the nearest defender at (i) the time that the pass was thrown and (ii) the time that the pass arrives and he attempts to catch the ball. In contrast, on a counterfactual pass, we can only potentially observe this information up to the the time the counterfactual pass is thrown. The fundamental challenge is that we cannot observe any covariates measured at the time the counterfactual pass arrives; see Figure~\ref{fig:visual_route_manuscript}. Before proceeding, we pause for a moment to distinguish between our use of the term ``counterfactual'' and its use in causal inference. The general causal framework of counterfactuals supposes that we change some treatment or exposure variable and asks what happens to downstream outcomes. In contrast, in this work, we considering changing a midstream variable, the location of the intended receiver when the ball arrives, and then impute both upstream and downstream variables like the time of the pass and the receiver separation at the time the ball arrives. In this work, we use ``counterfactual'' interchangeably with ``hypothetical'' and hope our more liberal usage is not a source of further confusion below\footnote{Author's note: We use the word ``counterfactual'' interchangeably with ``hypothetical'' because while an unobserved pass is hypothetical, the intended receiver of that pass is not.}. \begin{figure}[H] \includegraphics[width = 1.0\textwidth]{visual_route_manuscript.png} \caption{Schematic of what we directly observe on an actual pass (left panel) from our dataset and what we cannot observe for a hypothetical pass (right panel). In both passes, there are two receivers running routes.The targeted receiver is denoted with a circle and the defender closest to the receiver is denoted with an X. Unobservables are colored red while observables are colored blue. For the hypothetical pass, we are unable to measure the pairwise distances between the targeted receiver, his closest defender, and the ball when the pass arrives. Intuitively, all of these factors are very predictive of completion probability.} \label{fig:visual_route_manuscript} \end{figure} The difficulty in determining counterfactual completion probabilities is compounded by the fact that we do not know the true regression function $f$ and must, therefore, estimate it from observational data. In the process, estimation uncertainty about $f$ propagates to the uncertainty about the hypothetical completion probabilities. We argue that an objective assessment of routes based on a completion probability must address the inherent uncertainty in the hypothetical inputs as well as uncertainty stemming from estimating the completion probability model. In this work, we aim to overcome these challenges. Using tracking, play and game data from the first 6 weeks of the 2017 NFL season, we developed such an assessment, which we call Expected Hypothetical Completion Probability (EHCP). At a high-level, our framework consists of two steps. First, we estimate the log-odds of a catch as a function of several characteristics of each observed pass in our data. Then, we simulate the characteristics of the hypothetical pass that we do not directly observe and compute the average completion probability of the hypothetical pass. The rest of this paper is organized as follows. In Section~\ref{sec:ehcp_framework}, we outline the EHCP framework and describe the data used to develop it. We describe our Bayesian procedure for fitting the catch probability model in Equation~\eqref{eq:completion_probability_model} in Section~\ref{sec:completion_probability}. Section~\ref{sec:illustration} illustrates the EHCP framework on several routes for the both the Tolzein interception and Kupp touchdown mentioned above. We conclude with a discussion of potential methodological improvements and refinements and potential uses of our EHCP framework. \section{The EHCP Framework} \label{sec:ehcp_framework} The EHCP framework consists of three parts (i) a completion probability model, which is trained using the observational data provided by the NFL (described in Section~\ref{sec:data}), that takes as input observed features of passes and returns the estimated completion probability, (ii) an \emph{imputation} method for predicting the variables that are unobservable for hypothetical passes, and (iii) a strategy for combining these two parts and propagating uncertainty in a coherent fashion. In this section, we describe these three parts and also the data used. \subsection{The NFL Big Data Bowl Dataset} \label{sec:data} The NFL Big Data Bowl Dataset contains tracking, play, and game data from all 91 games in the the first 6 weeks of the 2017 NFL season. The tracking data is at the granularity of every tenth of a second per play. For each player on the field (and the ball) for a given play, the data contains: time stamp of play (time, yyyy-mm-dd, hh:mm:ss), player position along the long axis of the field (0 - 120 yards), player position along the short axis of the field (0 - 53.3 yards), speed in yards/second, distance traveled from prior time point (in yards), angle of player motion (0 - 360 degrees), tagged play details, (including moment of ball snap, pass release, pass catch, tackle, etc), player identification number (unique across players), player name, jersey number of player, team (away or home) of corresponding player, frame identifier for each play (starting at 1), unique game identifier, and play identifier (not unique across games). From the tracking data we are able to calculate eadch player's distance to the ball and to other players as well as their cumulative distance run in the play and in the game. The play data contains (not an exhaustive list): game quarter, time on game clock at the start of the play (counting down from 15:00, MM:SS), down, distance needed for a first down, yard line at line-of-scrimmage, home team score prior to the play, visiting team score prior to the play, home team points at the end of the play, visiting team points at the end of the play, indicator for penalty called on play, indicator for special teams play, pass length (in yards), result of pass play (caught, incomplete, intercepted, run, sack), result of play in yards, and a description of the play. The play data allows us to calculate the difference in score and incorporate the timing of the play in our models. The game data contains game specific information like final score, temperature, humidity, and wind. We did not use any of this information in our models. More detailed information on the data can be found at the Big Data Bowl GitHub page\footnote{https://github.com/nfl-football-ops/Big-Data-Bowl}. \subsection{Estimating Completion Probability} \label{sec:completion_probability} The first step of the EHCP framework is to estimate completion probability. In order to do so, for each passing play in the BDB dataset, we extract or compute several covariates which we think are predictive of completion probability. These covariates can broadly be divided into three categories: those we can observe at the time the pass is thrown, those that are observed when the pass arrives and the receiver attempts to catch the ball, and situation variables describing the context in which the pass was thrown. We include the following variables measured at the time the pass was thrown into our model: (i) the receiver's speed and direction, (ii) the pairwise Euclidean, horizontal, and vertical distances between the receiver, his nearest defender, and the ball, and (iii) the total Euclidean distance the receiver has run up to that point in the game. We also include measurements of these same variables at the time when the receiver attempts to catch the ball as well as the changes in the receiver's speed, separation, direction, and total distance travelled while the ball is in the air. Finally, we include the time between the snap and the pass, the amount of time that the ball is in the air, the total time from snap to catch attempt, the number of seconds left in the half, down, distance, yards to go to reach a first down, whether the offensive team is leading, and a categorical variable summarizing by how many scores the offensive team is leading or trailing (9+ points, 1 -- 8 points, 0 points) \footnote{This discretization of the score differential was suggested by Mike Lopez \url{https://twitter.com/StatsbyLopez/status/1082287615485886464}}. For each of the N = 4,913 passes in our dataset, we let $y_{i}$ be a binary indicator of whether the pass was caught and we let $\mathbf{x}_{i}$ be a vector concatenating all of the $p = $ covariates for that pass. Perhaps the simplest completion probability model we can build is a convention logistic regression: $$ \log{\left(\frac{\mathbb{P}(y_{i} = 1 \mid \mathbf{x}_{i})}{\mathbb{P}(y_{i} = 0 \mid \mathbf{x}_{i})}\right)} = \mathbf{x}_{i}^{\top}\theta $$ where $\theta \in \mathbb{R}^{p}$ is some unknown vector of each covariates effect. We take a Bayesian approach to estimating $\theta$ by specifying a prior $\pi(\theta),$ that captures all of our initial uncertainty about each covariates effect, and we update it to form the posterior distribution $\pi(\theta \mid \mathbf{y})$ using Bayes' theorem: $\pi(\theta \mid \mathbf{y}) \propto \pi(\theta)p(\mathbf{y} \mid \theta)$ where $p(\mathbf{y} \mid \theta)$ is the \textit{likelihood} implied by the logistic model in Equation~\eqref{eq:completion_probability_model}. Since the posterior distribution is not analytically tractable, we use a Markov Chain Monte Carlo (MCMC) simulation to generate draws $\theta^{(1)}, \ldots, \theta^{(N)}$ from the posterior. Specifically, upon re-scaling all of the continuous covariate to have mean zero and standard deviation 0.5 and re-centering all binary covariates to have mean zero, as recommended by \citet{Gelman2008}, we place independent $N(0,1)$ priors on each element of $\theta.$ We fit this model in Stan \citep{Carpenter2017} through the interface provided by the ``rstan'' \texttt{R} package \citep{rstan}. While straightforward to fit, a major drawback to this simple Bayesian logistic regression model is its assumption that none of the covariates interact with each other in ways that meaningfully impact the log-odds of completing a pass. On its face, this assumption is tenuous at best, motivating us to consider estimating the unknown log-odds function $f$ with regression trees, which naturally incorporate interactions by design. Specifically, we use \citet{Chipman2010}'s Bayesian Additive Regression Trees (BART) to express $f$ as a sum of several regression trees. Since it's introduction, BART has been used across a wide variety of domains and has demonstrated excellent predictive performance. Similar to the simple linear-logistic model above, to use BART we start by specifying a prior $\pi(f)$ mean to reflect all of our initial uncertainty about the unknown function $f.$ In the case of BART, rather than specifying this prior directly, we instead specify a prior over the space of regression trees used to approximate $f..$ We then update this prior to compute a posterior over the space of regression trees, which induces a posterior over $f.$ For a review of Bayesian tree-based methods, please see \citet{Linero2017} for further details about the BART prior and Gibbs sampler, please see \citet{Chipman2010}. We fit the BART model using the \texttt{lbart()} function available in the ``BART'' \texttt{R} package \citep{BART}. In order to facilitate variable selection, we actually used a slightly modified BART prior due to \citet{Linero2018}. Operationally, this was done by running \texttt{lbart()} with the option \texttt{sparse = TRUE}. Our code is available at \url{https://www.github.com/skdeshpande91/ehcp} For the purposes of constructing EHCP, we must prioritize accurate predictions of the completion probability. To pick between the the simple Bayesian logistic regression model and the logistic BART model, we first run a validation experiment in which we generate 10 75\%/25\% training/testing splits of our data. For each training dataset, we fit both models and then for each pass $i$ in the testing dataset, we compute the posterior predictive mean completion probability $\hat{p}_{i}$ for each method. To assess the predictive performance, we computed for each pass in the testing set (i) the squared error $(y_{i} - \hat{p}_{i})^{2},$ (ii) the log-loss $-(y_{i}\log{\hat{p}_{i}} + (1 - y_{i})\log{(1 - \hat{p}_{i})}),$ and the mis-classification error $\mathbf{1}(y_{i} \neq \mathbf{1}(\hat{p}_{i} \geq 0.5)).$ Table~\ref{tab:validation} shows the mean square error, log-loss, and mis-classification rate averaged over each training/testing split. \begin{table}[H] \centering \caption{Predictive performance of the Bayesian logistic regression model and the BART-based model averaged across 10 training/testing splits of the data. Standard deviations are shown in parentheses} \label{tab:validation} \begin{tabular}{lrrr} \hline ~ & Mean Square Error & Misclassification & Log-Loss \\ \hline Bayesian Logistic & 0.099 (0.004) & 0.138 (0.008) & 0.332 (0.018) \\ BART & 0.086 (0.004) & 0.113 (0.005) & 0.289 (0.011) \\ \hline \end{tabular} \end{table} Across each performance measure, we see that the BART-based model has better predictive performance than the simpler Bayesian logistic regression model. For that reason, we will use it in our construction of EHCP. We note, however, that despite its superior performance, our BART-based completion probability model is much more opaque than the simpler model. In particular, as with any tree-based procedure, it is not immediately clear which variables are the most predictive of catch probability. In the context of EHCP, understanding variable importance is critical: if none of the variables measured when the pass arrives at the receiver's location impacted the completion probability, we could arguably avoid the imputation of missing covariates altogether. \citet{Linero2018} introduced a modification of BART that allows for variable selection, which we used to fit our model. Whereas \citet{Chipman2010}'s original decision tree prior uniformly sampled the variable on which to split at each internal decision node, \citet{Linero2018} gives each variable its own ``splitting probability'' and places a Dirichlet prior over the collection of splitting probabilities. As a result, when we fit our model with this modified prior, in addition to approximate samples from the posterior predictive completion probabilities, we also obtain draws from the posterior distribution of each variables splitting probability. It turns out that the variables with the largest posterior mean splitting probabilities were the receiver's speed when the catch was attempted (20.74\%), the Euclidean distance between the receiver and the ball when the catch was attempted (17.29\%), the total Euclidean distance the receiver travelled between the snap and the catch attempt (10.92\%), the Euclidean distance between the receiver and the ball when the pass was thrown (7.35\%), and the separation at the time the catch was attempted (6.74\%). In other words, if we were to draw a decision tree from the posterior distribution, we would split observations along the receiver's speed while attempting to catch the ball just over 20\% of the time but would split observations based on the receiver's separation just under 7\% of the time. As suggested by an anonymous referee, another way to assess the relative importance of each of the covariates is to fix the values of all but one covariate and see how the completion probability changes as we vary that one covariate. Figure~\ref{fig:kupp_completion_prob_profiling} illustrates how the posterior mean and 95\% credible intervals of the estimated completion probability on the Kupp touchdown mentioned earlier changes as we vary several of the covariates. We see, for instance, that Kupp's estimated completion probability is increasing as a function of his speed at the time of the catch up until about eight yards per second, after which point the completion probability levels off. Similarly, we see that his completion probability increases as a function of his separation at the time of the catch. We also see that his completion probability decreases as the more he has to run between the snap and the catch. Interestingly, however, the completion probability does not seem to be depend at all on his speed, separation, and how far he ran between the the time the ball was snapped and the pass was thrown. This apparent lack of these three variables is further evidenced by their very low posterior splitting probabilities (0.27\%, 0.08\%, and 0.01\%). \begin{figure}[H] \centering \includegraphics[width = \textwidth]{kupp_completion_prob_profiling} \caption{The posterior mean completion probability on the Kupp touchdown as a function of a single covariate, keeping all other fixed. The dashed lines show the upper and lower bounds of the 95\% posterior credible interval. The actual observed value of each covariate is indicated with a red dot. Notice that the completion probability is most sensitive to variables measured at the time of the catch but not at the time of the pass.} \label{fig:kupp_completion_prob_profiling} \end{figure} \subsection{Simulating Unobserved Covariates} \label{sec:simulating_x_miss} As alluded to in Section~\ref{sec:introduction} and Figure~\ref{fig:visual_route_manuscript}, when we consider hypothetical passes, we must account for the uncertainty in the covariates that summarize what happens after the pass was thrown. This necessity is driven home by the fact that the variables most predictive of completion probability are in fact ones measured \textit{after} the catch and are therefore not measured on hypothetical passes. For each counterfactual pass, we first divide the covariates into two groups: those which we directly observe and those we cannot observe and about which we are uncertain. The variables in this second group include: (i) the receiver's speed and direction at the time of the catch attempt, (ii) the pairwise Euclidean, horizontal, and vertical distances between the receiver, his nearest defender, and the ball when the catch was attempted, (iii) the total Euclidean distance the receiver travelled between the snap and the catch attempt, (iv) the total time the ball was in the air, and (v) changes in the receiver's speed, separation, direction, and total distance travelled while the ball was in the air. Formally, let $\mathbf{x}^{\star} = (\mathbf{x}^{\star}_{\text{obs}}, \mathbf{x}^{\star}_{\text{miss}})$ be the partition of the counterfactual covariates into the observed and missing data. We propose to sample the values in $\mathbf{x}^{\star}_{\text{miss}}$ from the empirical distribution of observed covariates. For instance, since we cannot observe the vector from the receiver to the ball when the hypothetical pass arrives, we randomly sample this vector from the collection of all such vectors we actually observe in the dataset. So if we knew the true value of $f$, the log-odds of completion function, we could approximate \begin{equation} \label{eq:ehcp} \text{EHCP}(\mathbf{x}^{\star}_{\text{obs}}) = \mathbb{E}_{\mathbf{x}^{\star}_{\text{miss}}}[F(\mathbf{x}^{\star}_{\text{obs}}, \mathbf{x}^{\star}_{\text{miss}})] \approx \frac{1}{M}\sum_{m = 1}^{M}{F(\mathbf{x}^{\star}_{\text{obs}}, \mathbf{x}^{\star(m)}_{\text{miss}})}, \end{equation} where $\mathbf{x}_{\text{miss}}^{\star(1)}, \ldots, \mathbf{x}_{\text{miss}}^{\star(M)}$ are the draws of $\mathbf{x}_{\text{miss}}$ from the empirical distribution, $F(\cdot) = \left[1 + e^{-f(\cdot)}\right]^{-1}$ is the forecasted completion probability function, and the expectation is taken over the empirical distribution of $\mathbf{x}^{\star}_{\text{miss}}.$ Rather than setting the value of $\mathbf{x}^{\star}_{\text{miss}}$ at some arbitrary fixed quantity, EHCP averages over the uncertainty in the unknown (and unobservable) values of $\mathbf{x}^{\star}_{\text{miss}}.$ Importantly, since we are sampling the values of $\mathbf{x}^{\star}_{\text{miss}}$ from the set of values actually observed, EHCP is constructed using realistic values of the missing covariates. Since we do not know $f$ exactly but instead have only our MCMC samples, we can approximate EHCP for each posterior draw of $f$, thereby simulating draws from the posterior distribution of $\text{EHCP}(\mathbf{x}^{\star}_{\text{obs}}).$ We can then report the posterior mean as a point estimate of the true EHCP on the hypothetical pass and also report the 95\% interval, containing likely values of the EHCP. We can further consider all of the routes run on a given play and track these two quantities as the play develops to see which receiver-route combinations have the highest chance of pass completion. \section{Illustration} \label{sec:illustration} To illustrate our proposed framework, we return to the two plays from the introduction, the Kupp touchdown and the Tolzien interception. \subsection{Completion Probability Model} Figure~\ref{fig:completion_posterior_hist} shows the histogram of the posterior draws of the forecasted completion probability $F$ for the Kupp touchdown (blue) and the Tolzien interception (red). We see that there is substantial overlap in the bulk of these posterior distributions but the posterior for the Kupp touchdown is shifted slightly to the right of posterior for the Tolzien interception. Interestingly, on both of the these throws the receiver had less than 50\% chance of catching the ball, with the posterior mean completion probability on the Kupp touchdown approximately 10 percentage points higher than the probability for the Tolzien interception (47\% vs 37.1\%). \begin{figure}[H] \centering \includegraphics[width = 0.45\textwidth]{completion_posterior_hist.png} \caption{Histogram of posterior draws of completion probabilities for the Kupp touchdown (blue) and the Tolzien interception (red)} \label{fig:completion_posterior_hist} \end{figure} \subsection{How EHCP Evolves Over A Route} Figure~\ref{fig:ehcp_posterior_hist} shows the histogram of the posterior EHCP draws for Kupp and Hilton (the intended target on the Tolzien interception) at the times that the two passes actually arrived. As before, the posterior for the Kupp touchdown is shifted slightly to the right of the Tolzien interception. We find that the posterior mean EHCP for the Kupp touchdown is just around six percentage points higher than the posterior mean EHCP for the Tolzien interception (65.1\% vs 59.0\%). That the EHCP and forecasted completion probabilities are somewhat different is not surprising, as they measuring two different quantities: the forecasted completion probability model uses the exact information about what actually happened after the ball was thrown while EHCP averages over the uncertainty in what might have happened after the ball was thrown. We also note that often EHCP posteriors seem to have less variance than the posterior completion probability. This is also not surprising; EHCP represents an \textit{average} probability over several possible realizations of the pass while the forecasted completion probability considers only a single pass. In a certain sense, because EHCP averages over many passes, it somewhat mitigates uncertainty introduced in our estimation of $f.$ \begin{figure}[H] \centering \includegraphics[width = 0.45\textwidth]{ehcp_posterior_hist.png} \caption{Histogram of posterior draws of EHCP for Kupp touchdown (blue) and the Tolzien interception (red)} \label{fig:ehcp_posterior_hist} \end{figure} While comparing the EHCP for the two receivers actually targeted in the two plays at the times that the actual passes arrived is interesting, the real power of EHCP lines in projecting what might have happened had the ball been delivered to other receivers earlier in the play. Figures~\ref{fig:kupp_ehcp_trajectory} and~\ref{fig:tolzien_ehcp_trajectory} show the posterior mean of the EHCP for each receiver at various points in his route for the Kupp touchdown and Tolzien interception. We see that Kupp's posterior mean EHCP at the time the actual pass arrived (location A in the figure) was 65.1\%. Almost two seconds earlier, however, his posterior mean EHCP was 85.1\% (location B in the figure). Looking at the full posterior distributions of the EHCP at these two locations, we find that the 95\% intervals are nearly disjoint. So we may conclude with reasonable certainty that Kupp's EHCP would have been higher had the pass been delivered earlier along his route. Even more interesting, we find that of all of the receivers during this play, Sammy Watkins actually had the highest posterior mean EHCP 1.5 seconds after the snap (92.2\% at location C). At that time, Kupp's posterior mean EHCP was 91.9\% and his 95\% interval was (85.5\%, 96.8\%), virtually identical to Watkins'. Our analysis suggests that while the actual play resulted in a touchdown, there were times earlier in the play where the receivers would have had substantially larger expected completion probabilities. That being said, there are many reasons that the pass was not actually thrown to Watkins at location C. We will return to this point in Section~\ref{sec:discussion}. \begin{figure}[H] \centering \includegraphics[width = 0.8\textwidth]{play_1_V4.png} \caption{Posterior mean EHCP for each receiver on the Kupp touchdown. 95\% posterior intervals are shown in parentheses. $t$ lists the time in seconds after the snap} \label{fig:kupp_ehcp_trajectory} \end{figure} Turning our attention to the the Tolzien interception, we find that T.Y. Hilton, the targeted receiver, had an EHCP of 59.0\% at the time the actual pass arrived (location A in the figure). Similar to the Kupp touchdown, almost two seconds earlier, his EHCP was substantially higher (89\% at location B). Further, Donte Moncrief had the highest EHCP of all receivers at location C, 2.4 seconds after the snap. The substantial overlap in the 95\% intervals for Hilton ([81.4\%, 95.1\%])and Moncrief ([89.9\%, 99.0\%]) at this time means that we cannot tell with much certainty which of the two receivers had the higher EHCP. \begin{figure}[H] \centering \includegraphics[width = 0.8\textwidth]{play_2_V4.png} \caption{Posterior mean EHCP for each receiver on the Tolzien touchdown. 95\% posterior intervals are shown in parentheses. $t$ lists the time in seconds after the snap} \label{fig:tolzien_ehcp_trajectory} \end{figure} We do note, however, that they are very close to one another on the field, which could partially explain the similarity in EHCP at that point in time. It is interesting to note that the posterior mean EHCPs at the time the pass actually arrived to Hilton (4.3 seconds after the snap) hovered between 40\% and 60\% for all receivers on the field. \subsection{Player Comparisons Using EHCP} A natural use case of EHCP is to compare players. For example, we can examine how often a quarterback targeted the receiver with the highest or lowest EHCP on a particular play. Such an analysis can begin to disentangle whether a declining quarterback is making bad decisions (i.e. throwing to receivers with low EHCP) or bad throws. Table~\ref{tab:QBcomparisons} shows the results of such an EHCP-enabled quarterback analysis for the 91 games in the first six weeks of the 2017 NFL season. In the first six weeks, Jameis Winston threw the receiver with the highest EHCP 26.8\% of the time while targeting the receiver with the lowest EHCP just 16.8\% of the time. In contrast, Carson Wentz targeted the receiver with highest EHCP on just 15.3\% of his passes, instead throwing to the receiver with the lowest EHCP on over a quarter of all of his passes. \begin{table}[H] \begin{centering} \caption{Best and worst quarterbacks at throwing to the most and least open receivers (based on EHCP). Percentages reflect the percent of times that quarterback threw to the most or least open receiver (required at least 100 passes).} \label{tab:QBcomparisons} \begin{tabular}{llrr} \hline & Quarterback & Most & Least \\ \hline \multirow{3}{*}{Best} & Jameis Winston & 26.8\% & 16.8\%\\ & Trevor Siemian & 26.2\% & 16.9\% \\ & Jared Goff & 24.8\% & 19.2\% \\ \hline \multirow{3}{*}{Worst} & Russell Wilson & 13.2\% & 23.5\%\tabularnewline & Derek Carr & 15.1\% & 27.5\%\tabularnewline & Carson Wentz & 15.3\% & 27.5\%\tabularnewline \hline \end{tabular} \par\end{centering} \end{table} Beyond looking at quarterbacks, we can also evaluate receivers using EHCP. Compare the EHCP to the fitted completion probability from our BART model helps begin to quantify how much credit (resp. blame) the receiver ought to receiver for making (resp. failing to) make a catch. Indeed, if a pass has a very high EHCP but very low estimated completion probability, we can infer that some combination of the receiver's actions and the defense's reactions \textit{after} the pass was thrown has reduced his chances of catching the ball. On the other hand, if a pass has a very low EHCP but very high estimated completion probability, we can conclude that the receiver's actions and defense's reactions while the ball was in the air have improved the receiver's chances of catching the pass. In this way, a comparison of EHCP and the estimated completion probability provides a quantitative bound on how much credit or blame to assign to the receiver's actions after the ball was thrown. Table~\ref{tab:RECcomparisons} shows the receivers with the highest (resp. lower) average differences between their EHCP and estimated completion probability over the six weeks in our dataset. We find that on average Golden Tate's EHCP was about 11.8 percentage points lower than the estimated completion probability, suggesting that his actions and the defenses' corresponding reactions generally improved his chances of catching these balls. On the other hand, whatever DeAndre Hopkins and the defenders covering him did while the ball was in the air generally decreased his chances of completing the pass, on average. It is important to stress, however, that the analyses contained in Tables~\ref{tab:QBcomparisons} and~\ref{tab:RECcomparisons} are for illustrative purposes only and represent only six games worth of data for each player. As such, we do not recommend extrapolating much from these results. \begin{table}[H] \begin{centering} \caption{The receivers with the highest and lowest average differences between EHCP and estimated catch probability (required at least 40 targets).} \label{tab:RECcomparisons} \begin{tabular}{lrrr} \hline Receiver & EHCP & Observed Catch Probability & Difference\tabularnewline \hline Golden Tate & 64.9\% & 76.7\% & 11.8\%\tabularnewline Christian McCaffrey & 65.1\% & 75.0\% & 9.9\%\tabularnewline Antonio Brown & 60.2\% & 68.7\% & 8.5\%\tabularnewline \hline Dez Bryant & 60.9\% & 42.5\% & -18.4\%\tabularnewline DeAndre Hopkins & 60.5\% & 42.6\% & -17.8\%\tabularnewline Keenan Allen & 64.8\% & 54.2\% & -10.6\%\tabularnewline \hline \end{tabular} \par\end{centering} \end{table} \section{Discussion} \label{sec:discussion} As presented here, EHCP provides an objective way to evaluate offensive plays retrospectively. Specifically, we can track how the completion probability evolves for each receiver over the course of a play in a way that accounts for the uncertainty about missing covariates. The EHCP framework can also be used prospectively. A defensive coordinator might, for instance, ask how best to cover a particular set of routes being run. She may fix some of the unobserved covariates like the defender's position relative to targeted receiver and then average over the uncertainty in the remaining covariates to derive the EHCP for that particular combination of receiver-defender positioning. Repeating this for various defender locations would enable her to construct optimal defender trajectories that minimize the intended receiver's EHCP. Our completion probability model and the EHCP framework can also be used to provide more nuanced broadcast commentary. In particular, if there was a play where the forecasted completion probability and EHCP were high and the receiver failed to the catch the ball, one may reasonably assign some amount of blame to the receiver for not catching the ball; after all, the route was run and the ball was delivered to give him a high probability of catching it. On the other hand, if the receiver catches a ball with very low forecasted completion probability and EHCP, it would be worthwhile to point out that receiver is succeeding despite the route design and pass delivery. Finally, one could aggregate the discrepancy between outcome and EHCP over all of a receiver's targeted routes to measure how the receiver is executing his assigned routes. We note that the NFL's Next Gen Stats include a Completion Probability metric that is similar to our forecasted completion probability but uses different input variables than us. Notably, Completion Probability includes a number of quarterback-centric features such as speed of and distance to the nearest pass rusher at the time of the throw \citet{NGS2018}. Since quarterback pressure affects where the pass ends up (e.g. if it is over- or under-thrown), EHCP accounts for it rather indirectly in averaging over the uncertainty in the ball's position relative to the receiver. That said, incorporating variables about the delivery of observed passes directly into the completion probability model is straightforward as is simulating the unobserved values of these variables for counterfactual passes in the EHCP calculation. Doing so would result in an EHCP that better accounts for why balls were thrown when they were and would enable more nuanced assessment of the hypothetical passes. We hope that our method, and our transparency about how we developed it, will facilitate further iterations that combine information about the quarterback and all receivers. There are several potential areas of methodological and modeling improvement. It is quite straightforward to include more covariates about the individual players involved in the pass completion model such as their historic completion percentage or how many times they have been targeted in the game so far. We could also include more situational variables like the expected number of points of win probability estimated from models available in the \texttt{R} package ``nflscrapR'' \citep{nflscrapR}. While we have focused on completion probability as the metric by which to assess routes, it is possible to derive analogous measures for different route metrics. For instance, instead of modeling whether the pass was caught or not, we could model whether the play resulted in a first down and derive the expected hypothetical first down probability. We could also model a continuous outcome like change in win probability or change in expected points scored. Operationally, it is very straightforward to alter our code to handle continuous outcomes. These other measures can more directly address the question of assigning expected values to particular play designs and route combinations. More substantively, a more sophisticated imputation model of $\mathbf{x}^{\star}_{\text{miss}}$ could lead to more accurate EHCP estiamtes. In the present paper, we have taken by far the simplest approach and sampled $\mathbf{x}^{\star}_{\text{miss}}$ from the observed distribution from \textit{all passes} in our dataset. It would be interesting to construct predictive models of $\mathbf{x}^{\star}_{\text{miss}}$ using the observed covariates $\mathbf{x}^{\star}_{\text{obs}}$ and to feed forecasts from these models into the EHCP calculation in Equation~\eqref{eq:ehcp}. Doing so requires careful modeling of the joint distribution of several continuous and categorical variables, which is much beyond the current scope.
{ "attr-fineweb-edu": 2.332031, "attr-cc_en_topic": 0, "domain": "arxiv" }
BkiUbxfxaL3SuglaztKY
\section{Introduction}\label{sec1} The use of Artificial Intelligence (AI) technology in sports has been on the rise recently. During the COVID-19 pandemic, for instance, major tennis tournaments replaced human line judges with Hawk-Eye Live technology in an effort to reduce staff. Also, more than a decade ago, football began using Goal-line technology to assess when the ball has completely crossed the goal line. These are examples of mechanical AI systems requiring the assistance of electronic devices to determine the precise location of balls impartially and fairly, thus minimizing, if not eliminating, any controversy. \begin{figure} \centering \begin{tikzpicture} \begin{axis}[width=13cm, height=8cm, xlabel=World Chess Championship, ylabel=Draw Percentage, x tick label style={ /pgf/number format/1000 sep=}, enlargelimits=0.06, xmin=1880, xmax=2020, ymin=0, ymax=100, grid, legend pos= south east, grid style=dashed] \addplot table [x=x, y=y, col sep=comma]{WCH.csv}; \end{axis} \end{tikzpicture} \caption{Percentage of draws in World Chess Championship Matches 1886--2021. Source: chessgames.com} \label{fig:wch} \end{figure} A major question now is whether AI could move beyond such rudimentary tasks in sports. A case in point and a perfect application ground is chess for two complementary reasons. On the one hand, advanced AI systems, including Stockfish, AlphaZero, and MuZero have already been implemented in chess \cite{stockfish,silver2018,schrittwieser2020}; further, the superiority of top chess engines has been widely acknowledged ever since IBM's Deep Blue defeated former world chess champion Garry Kasparov in 1997 \cite{campbell2002}. On the other hand, despite its current popularity all around the world, chess is very much hampered by the growing incidence of draws, especially in the world championships, as Fig. \ref{fig:wch} illustrates. In response to this draw problem, elite chess tournaments---like other sports competitions \cite{anbarci2021,apesteguia2010,brams2018,cohen2018}---resorted to the tiebreakers. The most common final tiebreaker is the so-called Armageddon game, where White has more time (e.g., five minutes) to think on the clock than Black (e.g., four minutes), but Black wins in the event of a draw. However, it sparks controversy among elite players and chess aficionados alike: \begin{quote} ``Armageddon is a chess penalty shoot-out, a controversial format intended to prevent draws and to stimulate interesting play. It can also lead to chaotic scrambles where pieces fall off the board, players bang down their moves and hammer the clocks, and fractions of a second decide the result'' (Leonard Barden, \textit{The Guardian} \cite{barden2019}). \end{quote} \begin{quote} ``Logic could hardly ever be found in the Armageddon games. But this, in turn, has its own logic'' (Grand Master (GM) Ian Nepomniachtchi on the final tiebreaker in the 2022 World Fischer Random Chess Championship \cite{nepomniachtchi2022}). \end{quote} In this paper, we propose that AI systems serve as a judge in the event of a tie in games such as chess. In chess, in particular, we introduce a novel and practicable AI-based method and show that it essentially eliminates the draw problem. In a nutshell, for each position in the game, our method measures the difference between the evaluations of a player's ``actual move'' and the ``best move'' as deemed by a powerful chess engine. In case of a tie, the player with the higher ``quality'' measure wins the tiebreak. Most importantly, we prove that our method is immune to strategic manipulation, whereas the current fast chess tiebreakers, as illustrated in Table~\ref{table:time-control}, are not. To give an example, in the last game of the 2018 World Chess Championship, Magnus Carlsen---in a significantly advantageous position---offered a draw to Fabiano Caruana, which Caruana accepted. The reason behind Carlsen's offer was to proceed to fast chess tiebreaks in which he had even better odds of winning the championship. In contrast, Carlsen would not possibly benefit from such an offer under our method, so he most likely would not offer a draw in the same situation (see section~\ref{sec:Carlsen-Caruana} for details). \begin{table}\[ \arraycolsep=1.1pt\def2{1.5} \begin{array}{ r|c|c|} \multicolumn{1}{r}{} & \multicolumn{1}{c}{\text{Time-control}} & \multicolumn{1}{c}{\text{Time per player}}\\ \cline{2-3} & \text{Classical} & \text{90 min.} \\ \cline{2-3} & \text{Rapid} & \text{15 min.} \\ \cline{2-3} & \text{Blitz} & \text{5 min.} \\ \cline{2-3} & \text{Armageddon} & \text{White: 5 min., Black: 4 min. (and draw odds)} \\ \cline{2-3} \end{array} \] \caption{An example of classical vs. fast chess time-controls} \label{table:time-control} \end{table} We generalize our method to all competitive sports and games in which AI's superiority is---or can be---established. More specifically, we introduce a family of AI-based scoring mechanisms and the concept of ``tiebreak strategyproofness'' in $n$-person zero-sum games. A mechanism is called tiebreak strategyproof (TSP) if a player cannot improve their tiebreak score by playing a sub-optimal action according to a given AI system. Moreover, we show that our chess tiebreak method is TSP. We anticipate that our method will be the first of many applications of AI-based TSP mechanisms to break ties in sports and games. TSP is related to the notion of strategyproofness that is often used in social choice, voting systems, mechanism design, and sports and competitive games; though, formal definition of strategyproofness varies depending on the context. (For a selective literature, see, e.g., \cite{elkind2005,faliszewski2010,pauly2013,li2017,aziz2018,brams2018c,dagaev2018,csato2019}) Informally, a social choice rule or a mechanism is said to be strategyproof if being truthful is a weakly dominant strategy for every player. In sports and competitive games, a mechanism is said to be strategyproof if the mechanism is immune to strategic manipulation, i.e., no agent can benefit from strategizing \subsection{The draw problem in chess} The ``draw problem'' in chess has a long history. Neither chess aficionados nor elite players appear to enjoy the increasing number of draws in chess tournaments. The current world champion, Magnus Carlsen, who recently announced that he will not defend his title in the 2023 cycle, appears to be dissatisfied as well. ``Personally, I'm hoping that this time there will be fewer draws than there have been in the last few times, because basically I have not led a world championship match in classical chess since 2014'' \cite{carlsen2021}. The 2018 world championship tournament, for instance, ended with 12 consecutive draws. The world champion was then determined by a series of ``rapid'' games, whereby players compete under significantly shorter time-control than the classical games (see Table~\ref{table:time-control}). If the games in the tiebreaks did not determine the winner, then a final game called Armageddon would have been played to determine the winner. Clearly, compared to classical chess games, there is no doubt that the fast-paced rapid, blitz and Armageddon games lower the quality of chess played; the latter also raises questions about its fairness because it treats players asymmetrically. \subsection{An AI-based scoring mechanism} In the event of a win, it is straightforward to deduce that the winner played a higher quality chess than the loser. In the event of a tie, however, it is more difficult to assert that the two players' performances were comparable, let alone identical. With the advancements in chess AIs, their differences in quality can now be quantified. Average centipawn loss is a known metric for evaluating the quality of a player's moves in a game where the unit of measurement is 1/100th of a pawn. We instead propose a more intuitive modification of that metric, which we term the ``total pawn loss,'' because (i) even chess enthusiasts do not seem to find the average centipawn loss straightforward, and (ii) it can be manipulated by intentionally extending the game in e.g. a theoretically drawn position. We define total pawn loss as follows. First, at each position in the game, the difference between the evaluations of a player's actual move and the ``best move'' as deemed by a chess engine is calculated. Then, the total pawn loss value (TPLV) for each player is simply the equivalent of the total number of ``pawn-units'' the player has lost during a chess game as a result of errors. If the TPLV is equal to zero, then it indicates that every move was perfect according to a chess engine. Along the above lines, we propose the following AI scoring rule. In the event of a win, the winner receives 2 points and the loser 0 points, and the player with the lower TPLV receives an additional 1 point. If the chess engine is ``strong,'' then the winner should not have a higher TPLV unless the opponent runs out of time. If the player who lost on time has a lower TPLV, then they receive 1 point instead of 0 points. This is to disincentivize players playing quick moves to ``flag'' their opponent's clock. In the event of a draw, the player with the lower TPLV receives 2 points and the other receives 1 point. Each player receives 1.5 points when both players have the same TPLV, or their TPLVs are within a threshold determined by tournament organizers or FIDE, the International Chess Federation. For uniformity against slight inaccuracies in chess engine evaluations, we suggest using a certain threshold, e.g. 5\%, within which the TPLVs can be considered equivalent. We next give examples of TPLV calculations in several real-world situations. \section{Real-world examples} \begin{quote} ``Everybody could see that I wasn't really necessarily going for the maximum. I just wanted a position that was completely safe and where I could put some pressure. If a draw hadn't been a satisfactory result, obviously I would have approached it differently'' (Magnus Carlsen \cite{carlsen2018} on game 12 in the 2018 World chess championship). \end{quote} In this section, as suggested by the above quote, we will illustrate that fast-control (i.e., rapid, blitz, and Armageddon) tiebreaks are not TSP with a counter-example in an actual game. If the tiebreaks are decided with faster time-control games, then the better fast chess player---measured by having a greater Elo rating \cite{elo1978} under rapid/blitz time-control---might be incentivized to make a weaker move under classical time-control to draw the game. \subsection{World chess championship 2018: Carlsen vs. Caruana} \label{sec:Carlsen-Caruana} As highlighted before, Magnus Carlsen offered a draw in a better position against Fabiano Caruana in the last classical game in their world championship match in 2018. This was because Carlsen was a much better player in rapid/blitz time-control than his opponent. Indeed, he won the rapid tiebreaks convincingly with a score of 3-0. Note that Carlsen made the best decision given the championship match, but due to the tiebreak system his decision was not the best (i.e., manipulation-proof) in the particular game. As we will show later, our AI-based tiebreak format better aligns these incentives. Note also that the situation would be very different if Carlsen would guarantee winning the championship with a draw in the last game. In that case, the incentive compatibility issue is not created by the tiebreak mechanism but by (i) the scoring system that gives a strictly positive point to a draw, and (ii) the fact that the value of world championship is much greater than the value of winning a game. We do not think that it is desirable and practicable to avoid such scenarios in part because the value of winning the world championship title is huge. That being said, offering extra cash prizes to the winner of each game can help incentivize players to win the games as well. For these reasons, our tiebreaking mechanisms intentionally do not rule out the aforementioned scenarios. Carlsen, of course, knew what he was doing when he offered a draw. During the post-game interview, he said ``My approach was not to unbalance the position at that point'' \cite{carlsen2018}. Indeed, in our opinion Carlsen would not have offered a draw in their last game under TPLV-based scoring system because he was already doing better in terms of having a lower TPLV than Caruana. \begin{table}\[ \arraycolsep=1.1pt\def2{1.5} \begin{array}{ r|c|c|} \multicolumn{1}{r}{} & \multicolumn{1}{c}{} & \multicolumn{1}{c}{\text{GM Magnus Carlsen (B)}}\\ \cline{2-3} & \text{TPLV before draw offer} & 5.2 \\ \cline{2-3} & \text{Evaluation of best move} & -1.0 \\ \cline{2-3} & \text{Evaluation of draw offer} & 0.0 \\ \cline{2-3} & \text{Pawn loss of draw offer} & 1~=~-(-1-0) \\ \cline{2-3} & \text{TPLV after draw offer} & 6.2~=~5.2+1 \\ \cline{2-3} \end{array} \] \caption{Calculation of TPLV after Carlsen's draw offer to Caruana. Negative values imply that the chess engine deems Carlsen's position better as he has the black pieces.} \label{table:Carlsen} \end{table} \begin{table} \[ \begin{array}{ r|c|c|c|c|} \multicolumn{1}{r}{} & \multicolumn{1}{c}{\text{Player}} & \multicolumn{1}{c}{TPLV} & \multicolumn{1}{c}{\text{Score}} & \multicolumn{1}{c}{\text{AI Score}} \\ \cline{2-5} & \text{GM Fabiano Caruana (W)} & 5.9 & 0.5 & 2 \\ \cline{2-5} & \text{GM Magnus Carlsen (B)} & 6.2 & 0.5 & 1 \\ \cline{2-5} \end{array} \] \caption{Game 12 in the 2018 world chess championship. TPLVs include the draw offer and its acceptance.} \label{table:Carlsen-Caruana} \end{table} Carlsen's TPLV was 5.2 in the position before his draw offer, whereas Caruana's was 5.9. When Carlsen offered a draw, the evaluation of the position was about $-1.0$---i.e., Black is a pawn-unit better---according to Sesse, which is a strong computer running Stockfish. If Carlsen played the best move, then his evaluation would be about $-1.0$, which means he would be ahead about a pawn-unit. After the draw offer was accepted, the game ended in a draw and the evaluation of the position is obviously 0. As a result, Carlsen lost $$1~=~-(-1-0)$$ pawn-unit with his offer, as calculated in Table~\ref{table:Carlsen}. Thus, his final TPLV is $6.2=5.2+1$ as Table~\ref{table:Carlsen-Caruana} shows. A draw offer in that position would make Caruana the winner of the tiebreak under our method. \subsection{Armageddon as a game tiebreaker: An innovation of Norway Chess} \begin{table}[h!] \[ \begin{array}{ r|c|c|c|c|c|} \multicolumn{1}{r}{} & \multicolumn{1}{c}{\text{Player}} & \multicolumn{1}{c}{TPLV} & \multicolumn{1}{c}{\text{Score}} & \multicolumn{1}{c}{\text{Armageddon Score}} & \multicolumn{1}{c}{\text{AI Score}} \\ \cline{2-6} & \text{GM Veselin Topalov (W)} & 3.15 & 0.5 & 1 & 2 \\ \cline{2-6} & \text{GM Magnus Carlsen (B)} & 3.4 & 0.5 & 1.5 & 1 \\ \cline{2-6} \end{array} \] \caption{TPLV scoring system vs Armageddon scoring} \label{table:TPLV} \end{table} In another example, Table~\ref{table:TPLV} illustrates the outcome of the game played by Veselin Topalov and Magnus Carlsen in the 2022 Norway Chess Tournament. Their classical game ended in a draw. Then, they played an Armageddon tiebreak game, which Topalov drew with white pieces against Magnus Carlsen, hence losing the tiebreak. However, notice that Topalov's TPLV is lower, so according to our TPLV scoring system he would have won the tiebreak. \subsection{Armageddon as a tournament tiebreaker} Many elite tournaments, including the world chess championship as mentioned earlier, uses Armageddon as a final tiebreaker. Most recently, Armageddon tiebreaker was used in the 2022 US Women Chess Championship when Jennifer Yu and Irina Krush both tied for the first place, scoring each 9 points out of 13. Both players made big blunders in the Armageddon game; Irina Krush made an illegal move under time pressure and eventually lost the game and the championship. Fig.~\ref{fig:Krush-Yu} and Table~\ref{table:Krush-Yu-TPLV} illustrate the TPLVs of Irina Krush and Jennifer Yu in the 2022 US Women Chess Championship and the cumulative TPLVs of each player, respectively. According to our TPLV-based tiebreak method, Irina Krush would have been the US champion because she played a significantly better chess in the tournament according to Stockfish: Irina Krush's games were about two pawn-units better on average than Jennifer Yu's. \begin{figure} \centering \begin{tikzpicture} \begin{axis} [width=13cm, height=8cm, xlabel={Round}, ylabel={TPLV}, enlargelimits=0.06, xmin=1, xmax=13, ymin=0, ymax=25, xtick={1,2,3,4,5,6,7,8,9,10,11,12,13}, ytick={0,5,10,15,20,25}, grid, legend pos= south east, grid style=dashed] \addplot coordinates {(1, 15.96) (2,5.52) (3,21.46) (4,9) (5,20.3) (6,20.79) (7,19.22) (8,6.66) (9,6.16) (10,20.9) (11,8.45) (12,20.52) (13,13.68)}; \addplot coordinates {(1, 15.05) (2,23.1) (3,6.24) (4,7.5) (5,11.84) (6,13.53) (7,14.28) (8,7.52) (9,16.08) (10,5.27) (11,15.6) (12,11.02) (13,12.18)}; \addlegendentry{Jennifer Yu} \addlegendentry{Irina Krush} \end{axis} \end{tikzpicture} \caption{TPLVs of Irina Krush and Jennifer Yu in the 2022 US Women Chess Championship games. Lower TPLV implies better play.} \label{fig:Krush-Yu} \end{figure} \begin{table}[h] \label{table:Krush-Yu-TPLV} \[ \begin{array}{ r|c|c|c|} \multicolumn{1}{r}{} & \multicolumn{1}{c}{\text{Player}} & \multicolumn{1}{c}{\text{Cumulative TPLV}} & \multicolumn{1}{c}{\text{Average TPLV}} \\ \cline{2-4} & \text{GM Irina Krush} & 159.21 & 12.24 \\ \cline{2-4} & \text{GM Jennifer Yu} & 188.62 & 14.50 \\ \cline{2-4} \end{array} \] \caption{TPLV vs Armageddon tiebreakers in the 2022 US Women Chess Championship. Irina Krush would have been the champion because she had significantly lower cumulative TPLV in the tournament. } \end{table} \section{Tiebreak strategyproof mechanisms in chess} \label{sec:chess} This section employs basic notation and focuses solely on chess for the sake of clarity. For a formal definition of extensive-form games and a generalization of the notions mentioned, see Appendix~\ref{sec:Appendix}. \begin{table}[h!] \[ \arraycolsep=1.3pt\def2{2} \begin{array}{ r|c|c|c|} \multicolumn{1}{r}{} & \multicolumn{1}{c}{\text{Game Theory}} & \multicolumn{1}{c}{\text{Notation}} & \multicolumn{1}{c}{\text{Chess}}\\ \cline{2-4} & \text{a game} & G & \text{the game of chess} \\ \cline{2-4} & \text{a player} &i\in \{1,2\} & \text{White or Black} \\ \cline{2-4} & \text{an action} & a_i\in A_i & \text{a move or a draw offer/acceptance} \\ \cline{2-4} & \text{a play} & \bar{a}\in \bar{A} & \text{a single chess game} \\ \cline{2-4} & \text{a node} & x_j\in X & \text{a position} \\ \cline{2-4} & \text{a tournament} & T(G) & \text{a tournament} \\ \cline{2-4} & \text{AI} & v_i: X\rightarrow \mathbb{R} & \text{a chess engine} \\ \cline{2-4} & \text{AI best-response} & a^*_i(x)\in \arg\max_{a_i(x)\in A_i(x)} v_i(a_i(x)) & \text{best move} \\ \cline{2-4} \end{array} \] \caption{The terminology in game theory and chess} \label{table:terminology} \end{table} Let $G$ denote the extensive-form game of chess under the standard International Chess Federation (FIDE) rules. Table~\ref{table:terminology} summarizes the relationship between the terminologies used in game theory and chess. In the chess terminology, a \textit{chess game} is an alternating sequence of actions taken by White and Black from the beginning to the end of the game. In game theory, we call a chess game a \textit{play} and denote it by $\bar{a}\in \bar{A}$, which is the finite set of all plays. A chess \textit{position} describes in detail the history of a chess game up to a certain point in the game. Formally, in game theory, a position is a \textit{node}, denoted by $x\in X$, in the extensive-form game $G$. A chess \textit{move} is a choice of a player in a given position. A \textit{draw offer} is a proposal of a player that leads to a draw if agreed by the opponent. If the opponent does not accept the offer, then the game continues as usual. Formally, an \textit{action} of a player $i\in \{1,2\}$ at a node $x\in X$ is denoted by $a_i(x)\in A_i(x)$, which is the set of all available actions of player $i$ at node $x$. An action can be a move, a draw offer, or the acceptance or rejection of the offer. A \textit{chess tournament}, denoted by $T(G)$, specifies the rules of a chess competition, e.g., the world chess championship where two players play a series of chess games, and Swiss tournament where a player plays against a subset of other competitors. We define \textit{AI} as a profile $v$ of functions where for each player $i$, $v_i: X\rightarrow \mathbb{R}$. In words, an AI yields an evaluation for every player and every position in a chess game. A \textit{chess engine} is an AI which inputs a position and outputs the evaluation of the position for each player. Let $a^*_i\in A_i$ be an action at a node $x$. The action is called an \textit{AI best-response} if $a^*_i(x)\in \arg\max_{a_i(x)\in A_i(x)} v_i(a_i(x))$. In words, an AI best-response action at a position is the best move according to the chess engine $v$. We now introduce our metric of ``total pawn loss.'' \begin{definition}[Pawn loss] Let $v_i(a^*_i(x_j))$ be a chess engine's evaluation of the best move for player $i$ at position $x_j$ and $v_i(a^j_i(x_j))$ be chess engine's evaluation of $i$'s actual move. Then, the \textit{pawn loss} of move $a^j_i(x_j)$ is defined as $v_i(a^*_i(x_j))-v_i(a^j_i(x_j))$. \end{definition} \begin{definition}[Total pawn loss value] Let $\bar{a}\in \bar{A}$ be a chess game (i.e., a play) and $a^j_i$ be player $i$'s action at position $x_j$ in chess game $\bar{a}$, where $\bar{a}_i=(a^1_i,a^2_i,...,a^{l_i}_i)$ for some $l_i$. Then, player $i$'s \textit{total pawn loss value} (TPLV) is defined as, \[TPLV_i(\bar{a})=\sum_{j=1}^{l_i} [v_i(a^*_i(x_j))-v_i(a^j_i(x_j))]. \] Let $\bar{a}^1, \bar{a}^2,..., \bar{a}^K$ where $\bar{a}^k\in \bar{A}$ be a sequence of chess games in each of which $i$ is a player. Player $i$'s \textit{cumulative} TPLV is defined as \[ \sum_{k=1}^{K} TPLV_i(\bar{a}^k). \] \end{definition} In words, at every position the difference between the evaluations of a player's actual move and the best move is calculated. A player's TPLV is simply the total number of pawn-units the player loses during a chess game. Let $V$ be the set of all AIs. An \textit{AI chess scoring mechanism} in a game is a function $f:V\times \bar{A}\rightarrow \mathbb{R}^2$, which inputs an AI, $v$, and a chess game, $\bar{a}$, and outputs a score for each player. We next introduce a family of TPLV-based scoring mechanisms. \begin{definition}[TPLV-based AI chess scoring mechanisms] \label{def:AI_scoring_mechanism} We define a family of AI scoring mechanisms based on the type of competition. \begin{enumerate} \item Games: The player with the lowest $TPLV$ receives an additional point or points, on top of their score based on the outcome of the game (i.e., win, draw, or a loss). \item Tournament: In case of ties in a chess tournament, the ties are broken in favor of the player(s) with the lowest cumulative TPLV whom are ranked the first, the player(s) with the second lowest TPLV rank the second, and so on. \end{enumerate} \end{definition} We next define a specific and practicable AI scoring mechanism for chess games, which we call the AI scoring rule. \begin{definition}[TPLV-based AI scoring rule for chess] \label{def:AI_scoring_rule} Let $\bar{a}$ be a chess game, $s_i$ the score of player $i$, and $TPLV_i<TPLV_j$ be player $i$'s TPLV in $\bar{a}$. If player $i$ wins the chess game, $\bar{a}$, then $i$ receives 2 points and player $j\neq i$ receives 0 points: $s_i=2$ and $s_j=0$. If chess game $\bar{a}$ is drawn then each player receives 1 point. Under any case, if $TPLV_i<TPLV_j$, then the player $i$ receives an \textit{additional} 1 point, and player $j$ does not receive any additional point: $s_i=2$ and $s_j=1$. If $TPLV_i=TPLV_j$, then each player receives an additional $0.5$ points. \end{definition} In simple words, we propose that the winner of a chess game receives 3 points (if they have a lower TPLV) and the loser 0 points, and in the event of a draw, the player with the lower TPLV receives 2 points and the other receives 1 point. This $(3,2,1)$ scoring system is akin to the scoring system used in volleyball when the match proceeds to the tiebreak, which is a shorter fifth set. Norway Chess also experimented with the $(3,2,1)$ scoring system, but now uses $(3,1.5,1)$ system perhaps to further incentivize winning a game. To our knowledge, Norway Chess was the first to use Armageddon to break ties at the game level rather than at the tournament level. There are several ways one could use TPLV to break ties. Definition~\ref{def:AI_scoring_rule} provides a specific scoring rule in case of a tie in a chess game. For example, AI scoring mechanism can also be used with the $(3,1.5,1)$ scoring system: The winner of a game receives 3 points regardless of the TPLVs, the winner of the tiebreak receives 1.5 points and the loser of the tiebreaker receives 1 point. In short, based on the needs and specific aims of tournaments, the organizers could use different TPLV-based scoring systems. Regardless of which scoring rule is used to break ties in specific games, Definition~\ref{def:AI_scoring_mechanism} provides a tiebreaking rule based on cumulative TPLV in chess tournaments. In the unlikely event that cumulative TPLVs of two players are equal in a chess tournament, then average centipawn loss of the players could be used as a second tiebreaker; if these are also equal, then there is a strong indication that the tie should not be broken. But if the tie has to be broken in a tournament such as the world championship, then we suggest that players play two games---one as White and one as Black---until the tie is broken by the AI scoring rule. In the extremely unlikely event that the tie is not broken after a series of two-game matches, one could, e.g., argue that the reigning world champion should keep their title. We next define tiebreak strategyproofness in chess. We refer the interested reader to the Appendix \ref{sec:Appendix}, for the definition of tiebreak strategyproofness in more other games. \begin{definition}[TSP] \label{def:AI_strategyproof} A play $\bar{a}\in \bar{A}$ is called \textit{tiebreak strategyproof} (TSP) if for every player $i$ and every action $a^k_i$ in sequence $\bar{a}$, \[ \sum_{j=1, j\neq k}^{l_i} [v_i(a^*_i(x_j))-v_i(a^j_i(x_j))]\leq \sum_{j=1}^{l_i} [v_i(a^*_i(x_j))-v_i(a^j_i(x_j))]. \] An AI scoring mechanism $f$ is called TSP if every play $\bar{a}\in \bar{A}$ under the mechanism is TSP. \end{definition} In words, given a play $\bar{a}\in \bar{A}$, fix the total pawn-losses excluding a node $x$ on the path of $\bar{a}$. If the play is TSP, then it is in the best interest of the active player at $x$ to choose an AI best-response action. A straightforward extension of TSP mechanisms could be to define tiebreak strategyproofness with respect to a more general function of the errors made in a game rather than with respect to the total errors as in TPLV. We keep the current definition for its simplicity. We next show that our tiebreaking rule based on TPLV is indeed TSP. \begin{theorem}[TSP mechanisms] \label{thm:strategyproof} AI scoring mechanisms given by Definition~\ref{def:AI_scoring_mechanism} are TSP. \end{theorem} The proof of the theorem is in the Appendix~\ref{sec:proof}. To explain TSP in games in plain words, suppose, to reach a contradiction, that the AI scoring rule given in Definition~\ref{def:AI_scoring_rule} is not TSP in a chess game. This implies that there is some player $i$, a position in a chess game, and there are two moves (move 1 and move 2) such that move 1 is the best move according to an engine and its evaluation is strictly greater than engine evaluation of move 2. Notice that choosing move 1 would decrease player $i$'s TPLV, which implies that player $i$ would be better off with choosing move 1 instead of move 2. Thus, AI scoring rule for chess is indeed TSP. The fast chess tiebreakers are not TSP as we illustrated in section~\ref{sec:Carlsen-Caruana}. \section{Concluding remarks: Potential concerns/benefits, and future directions} \label{sec:conclusions} \subsection{Logistics} Both the AI system (software) and the hardware play a role in calculating TPLVs in a game. Thus, both of these should be made public knowledge in advance of a tournament. The engine settings should be kept fixed across all games, unless the tournament director has a reasonable doubt that the AI's assessment of a particular position in a game was flawed in a way that might affect the result. In that case, the tournament director may seek a re-evaluation of the position/game. Today, several of the best chess engines, including Stockfish and AlphaZero, are widely acknowledged to be clearly much better than humans. Thus, either of these chess engines could be employed for the AI scoring rule. (In tournaments with a large number of participants, however, one could use computationally less expensive engine settings to calculate the TPLVs.) \subsection{Computer-like play} A reasonable concern could be that our proposal will make players to play more ``computer-like.'' Nevertheless, we believe that top chess players now already play more like engines than they did in the past. Expert players try to learn as much as they can from engines, including openings and end-game strategies, in order to gain a competitive edge. As an example, Carlsen \cite{carlsen2022} recently explained how he gained a huge amount of knowledge and benefited from neural network-based engines such as AlphaZero. He also said that some players have not used these AIs in the correct way, and hence have not benefited from them. (For a further discussion, see Gonz{\'a}lez-D{\'\i}az and Palacios-Huerta \cite{gonzalez2022}.) In summary, there is little, if anything, that the players can do to play more computer-like and take advantage of the AI scoring mechanism on top of what they normally do. To put it slightly differently, if there is any ``computer-like'' chess concept that a player can learn and improve their AI score, then they would learn this concept to gain a competitive edge anyway---even if AI scoring mechanism is not used to break ties. That being said, it is up to the tournament organizers to decide which chess engine to use for tiebreaking, and some engines are more ``human-like'' than the others \cite{mcilroy2020}. \subsection{Playing strength} It is simpler to play (and win) a game against a weaker opponent than a stronger opponent, and a player is less likely to make mistakes when playing against a weaker opponent. Is it then unfair to compare the quality of the moves of different players? We do not think so. First, in most strong tournaments, including the world championship and the candidates tournament, every player plays against everyone else. Second, in Swiss tournaments, players who face each other at any round are in general of comparable strength due the format of this tournament. While it is impossible to guarantee that each tied player plays against the same opponents in a Swiss tournament, we believe that AI scoring mechanisms are preferable to other mechanisms because they are impartial, tiebreak strategyproof, and based on the quality of the moves played by the player themself as opposed to other tiebreak mechanisms that are based on e.g. the performance of the player's opponents. (For a review of ranking systems used in Swiss tournaments, see Csat{\'o} \cite{csato2017}.) \subsection{Playing style} The playing style---positional vs tactical, or conservative vs aggressive---of a player may make them more (or less) susceptible to making mistakes against a player with a different playing style. A valid concern is whether our AI mechanisms favor one style over another. The answer depends on the chess engine (software) and the hardware that are used to break ties. A top player may have a better ``tactical awareneness'' than a relatively weak chess engine or a strong engine that runs on a weak hardware. Using such an engine to break ties would then obviously be unfair to the player. However, there is little doubt that the latest version of Stockfish running on a strong hardware is a better tactical and/or positional player than a human player. As an analogy, suppose that a world chess champion evaluates a move in a game played by amateur players. While the world champion may be biased, like any other player, there is little doubt that their evaluation would be more reliable than the evaluation of an amateur. In addition, in our opinion, the scoring system, as opposed to the tiebreak mechanism, is the primary determinant of which playing style (aggressive vs conservative) would be preferred by the players. For example, the standard (1,0.5) chess scoring system---where the winner of a game receives 1 point and in case of a draw players each receive 0.5 points---does not discourage conservative playing style. By contrast, the (3,1.5,1) scoring system, most prominently used by Norway Chess, discourages conservative play because drawing two games does not give the same number of points as winning one unless one wins the tiebreak in both drawn games (for details, see section~\ref{sec:chess}). In summary, players adjust their playing style according to the scoring system. \subsection{How will the incentives of the players change?} Apart from boosting the quality of matches by naturally giving more incentives to players to find the best moves, our quality-based tiebreaking rule provides two additional benefits. First, observe that it is very likely to discourage ``prematurely'' agreed draws, as there is no assurance that each player will have the same TPLV when a draw is agreed upon during a game; thus, at least the player who senses having the worse (i.e., higher) TPLV up to that point will be less likely to offer or agree to a draw. Second, this new mechanism is also likely to reduce the incentive for players to play quick moves to ``flag'' their opponent's clock---so that the opponent loses on time---because in case of a draw by insufficient material, for instance, the player with the lower TPLV would gain an extra point. \subsection{What is the ``best strategy'' under our tiebreak mechanism?} Another valid question is whether playing solid moves, e.g., the top engine moves from beginning to the end, is the ``best strategy'' under any of our tiebreak mechanisms. The answer is that the best strategy in a human vs human competition is \textit{not} to always pick the top engine moves! This is because the opponent might memorize the line that is best response to the top engine moves in which case the outcome of the game would most likely be a draw. Our TSP mechanism says that one cannot improve their \textit{tiebreak score} by playing a sub-optimal move, and hence the only time a player should deviate from the optimal move must be when one does \textit{not} want the game to go to the tiebreak---i.e., when one wants to win the game. And, winning the game is more valuable than winning a tiebreak. Therefore, playing a sub-optimal computer move might be the ``human-optimal'' move to win the game. Notice that this seemingly paradoxical conclusion does not contradict with the tiebreak strategyproofness of our mechanism in part because we intentionally apply our mechanism in case of a tie (unless a player runs out of time). \subsection{Conclusion} In contrast to the current tiebreak system of rapid, blitz, and Armageddon games, the winner of the tiebreak under a quality-based tiebreak strategyproof AI mechanism is determined by an objective, state-of-the-art chess engine with an Elo rating of about 3600. Under the new mechanism, players' TPLV is highly likely to be different in the event of a draw by mutual agreement, draw by insufficient material, or any other `regular' draw. Thus, nearly every game will result in a winner, making games more exciting to watch and thereby increasing fan viewership. A valid question for future research direction is whether and to what extent our proposal could be applied to other games and sports. Note that we have defined AI scoring mechanisms and tiebreak strategyproofness for a general class of $n$-person zero-sum games. Thus, our TSP scoring mechanisms are applicable to all games in this class, including chess, Go, poker, backgammon, football (soccer), and tennis. However, one must be cautious when using an AI scoring mechanism in a game/sport where AI's superiority is not commonly recognised, particularly by the best players in that game. Only after is it established that AI is capable of judging the quality of the game---which is currently the case only in a handful of games including Go, backgammon, and poker \cite{brown2019}---do we recommend using our TSP scoring mechanisms.
{ "attr-fineweb-edu": 2.255859, "attr-cc_en_topic": 0, "domain": "arxiv" }
BkiUfvI241xiMs_hTaQ4
\section{Introduction} \blfootnote{ * denotes equal contributions. The order was determined using Python's random.shuffle()} Creating sports highlight videos often involves human efforts to manually edit the original untrimmed videos. The most popular sports videos often comprise short clips of a few seconds, while for machines to understand the video and spot key events precisely is very challenging. In this tech report, we present a two-stage paradigm (see Figure.\ref{fig:pipeline}) to solve two problems in understanding soccer videos and detecting target events: action spotting and replay grounding, which are defined in SoccerNet-v2 challenge \cite{Delige2020SoccerNetv2A}. The action spotting task aims at spotting actions such as goal, shots-on target, shots-off target, yellow card, red card, etc, in a complete video of soccer game. The replay grounding task is to ground the timestamps of the actions represented in a specific replay. In our approach, the same first stage is shared in both two tasks, which uses fine-tuned action recognition models to extract semantic features, and the second stage consists of a temporal detection module tailored for each task. \begin{figure} \centering \includegraphics[width=6.8in]{SoccerNetv2_pipelinea.png} \caption{Our two-stage paradigm for action spotting and replay grounding in soccer videos} \label{fig:pipeline} \end{figure} \subsection{Related work} In sports analytics, many computer vision technologies are developed to understand sports broadcasts \cite{THOMAS20173}. Specifically in soccer, researchers propose algorithms to detect players on field in real time \cite{Cioppa_2020_CVPR_Workshops}, analyze pass feasibility using player's body orientation \cite{Sangesa2020}, incorporate both audio and video streams to detect events \cite{Vanderplaetse2020}, recognize group activities on the field using broadcast stream and trajectory data \cite{Sanford2020}, aggregate deep frame features to spot major game events \cite{Giancola_2018_CVPR_Workshops}, and leverage the temporal context information around the actions to handle the intrinsic temporal patterns representing these actions \cite{Cioppa2020Context,Giancola_2021_CVPR_Workshops}. \subsection{Contributions} In this tech report, our main contributions can be summarized as the following: $\bullet$ Taking advantage of multiple recent action recognition models pretrained on large-scale video datasets, we extract semantic features of soccer videos by fine-tuning each model as the feature extractor, on an auxiliary snippet dataset which is derived from the original SoccerNet-v2 dataset. We concatenate and normalize the features obtained from each model to generate Baidu soccer embeddings. The proposed feature combination significantly improves both action spotting and replay grounding performance. $\bullet$ We propose a novel transformer based temporal detection module which achieves the state-of-the-art performance in both action spotting task and replay grounding task in the SoccerNet-v2 Challenge, under 2021 CVPR ActivityNet Workshop. \section{Feature Extraction} \label{FeatExtract} Both the previous competitive method NetVLAD++ \cite{Giancola_2021_CVPR_Workshops} for action spotting and the baseline method $CALF\_more\_negative$ (Cmn) \cite{Delige2020SoccerNetv2A} for replay grounding use per-frame features extracted by ResNet~\cite{He2016DeepRL} pretrained on ImageNet~\cite{krizhevsky2012imagenet}. However, we believe that features that are tailored for the soccer broadcast videos can improve the performance of the spotting module. We fine-tune multiple action recognition models on snippets of SoccerNet-v2 videos, and in the test stage we also extract features from videos (clips of frames), rather than on a per-frame basis. We fine-tune multiple action recognition models on the task of action classification. The models we use include TPN \cite{Yang2020TemporalPN}, GTA \cite{he2020gta}, VTN \cite{neimark2021video}, irCSN \cite{Tran2019}, and I3D-Slow \cite{Feichtenhofer2019SlowFastNF}. In order to perform such fine-tuning, we construct an 18-class snippet dataset by extracting snippets, each with 5 seconds long, from all the videos. Each snippet centers at one of the 17 classes of events or randomly samples from background (non-event). We apply each fine-tuned action recognition model on the temporal sliding windows of videos, and concatenate output features along the feature dimension. Here we briefly introduce the five pretrained action recognition models we choose to fine-tune on soccer data. The temporal pyramid network (TPN) \cite{Yang2020TemporalPN} efficiently incorporate visual information of different speeds at feature level, and can be integrated into 2D or 3D backbone networks. It can achieve $78.9\%$ top-1 accuracy on Kinetics-400 dataset with a ResNet-101 backbone network. The global temporal attention (GTA) mechanism for video action classification proposed in \cite{he2020gta} models global spatial and temporal interactions in a decoupled manner. It captures temporal relationships on both pixels and semantically similar regions. On Kinetics-400 dataset, GTA achieves $79.8\%$ top-1 accuracy when applied on SlowFast-R101 network. The video transformer network (VTN) \cite{neimark2021video} adopts transformer based network structure for video understanding. It trains faster than 3D CNN networks and can achieve $78.6\%$ top-1 accuracy on Kinetics-400 dataset. The interaction-reduced channel-separated convolutional networks (irCSN) introduced in \cite{Tran2019} factorizes 3D convolutions by separating channel interactions and spatio-temporal interactions. In this way, a regularization effect is observed when training the network. The authors also pretrained this network on a large-scale dataset, IG-65M \cite{Ghadiyaram2019LargeScaleWP}, before fine-tuning on Kinetics-400 where achieved $82.6\%$ top-1 accuracy. The I3D-Slow network preserves the slow pathway, which operates at low frame rate and captures spatial semantics in the SlowFast framework \cite{Feichtenhofer2019SlowFastNF}. It is pretrained with OmniSource \cite{duan2020omni} data and can reach $80.4\%$ top-1 accuracy on Kinetics-400. \section{Temporal Detection} \begin{figure} \centering \includegraphics[width=5.8in]{Transformer.pdf} \caption{Our transformer based models for (a) action spotting and (b) replay grounding} \label{fig:transformer} \end{figure} In this section, we present the temporal detection module in our two-stage paradigm for soccer video understanding. Specifically, 1) NetVLAD++ and transformer for action spotting task, and 2) Cmn and transformer for replay grounding task. \subsection{Action Spotting} Given the combined features described in Section \ref{FeatExtract} as input, a NetVLAD++ \cite{Giancola_2021_CVPR_Workshops} model can yield much higher performances than the original ResNet features. We also implemented other methods including 1D-ResNet \cite{He2016DeepRL} and Transformer \cite{vaswani2017attention}, and they can achieve similar results. Since the Transformer obtains the best performance on the challenge set, we describe its implementation details as follows. For the transformer model, as in \cite{vaswani2017attention}, we use sine, cosine positional encoding. We only use the encoder part of the model to get an output of dimension 18 to represent the 18-class probabilities. As shown in Figure \ref{fig:transformer}(a), we create three transformer encoding layers after the positional encoding. We choose 4 heads and hidden dimension of 64 for the encoding layers. In training, we adopt mix-up augmentation \cite{zhang2017mixup} to reduce over-fitting. To further improve the performance, we make the following adjustments: (a) train the video recognition models for feature extraction on an enlarged dataset, which is the aggregation of train, valid, and test sets in the snippet dataset we mentioned in Section \ref{FeatExtract}, together with snippets extracted from extra 77 games we collected in Spain Laliga 2019-2021 seasons; (b) train the spotting module on the aggregation of train, valid, and test sets; (c) change hyper parameters including feature dimension size, batch size, learning rate, chunk size and NMS window size. Details will be elaborated in the experiment section. \subsection{Replay Grounding} In this section, we first analyze replay annotations of 500 games in the SoccerNet-v2 dataset, and then discuss the baseline grounding module Cmn and our transformer based grounding module. \subsubsection{Replay pattern analysis} To better choose hyperparameters for our model and improve grounding results, we analyze all replay annotations. Figure \ref{fig:replay}(a) shows the distributions of time intervals between the end timestamp of a replay and the original event's timestamp. We found that $92.83\%$ of the time intervals fall in the range of $0\sim 120$ seconds. Therefore, for efficient training and inference, we design the transformer based grounding module and design filtering strategies in post processing for Cmn module to focus in this range. Figure \ref{fig:replay}(b) shows the number of different types of events in ground-truth replay annotations. We found that the top 3 events in terms of total counts are foul, goal, and shots-off target respectively. This observation helps us design fusion strategies in post processing which will be described in the experiments session. \begin{figure} \centering \includegraphics[width=6.8in]{replay_analysis.pdf} \caption{Replay pattern analysis. (a) Time intervals between the end timestamp of a replay and the original event's timestamp. (b) Replay events.} \label{fig:replay} \end{figure} \subsubsection{Transformer based grounding module} \label{replay:transformer} To capture the relationship between the replay clip and candidate clip, we apply a transformer encoder. Following the configurations in \cite{vaswani2017attention}, we choose sine, cosine positional encoding. As shown in Figure \ref{fig:transformer}(b), the input consists of semantic features of a candidate clip and a replay clip. We stack up 4 encoding layers and get an output with 2 dimensions (replay probability and positional offset) which align with the spotting output dimensions in the baseline approach in Cmn from \cite{Delige2020SoccerNetv2A}. Unlike the baseline grounding module Cmn, we disabled segmentation loss. We apply the binary cross-entropy loss (BCELoss) to train the replay probability and L2 loss to train the positional offset. Since our fine-tuned features work better with shorter clips (feature extractors trained on 5-second snippets) and to prevent the module from over-fitting, we adjust the video chunk size to $30$ seconds. We also only fine-tune the grounding module on video chunks extracted from at most $120$ seconds before the start of replays since the feature extractors are already trained on all event snippets from full match videos, and most replays happen within the $120$ seconds after the original events according to our replay pattern analysis. In the $120$ seconds clip, $4$ positive and $4$ negative samples are given to the transformer such that we have sufficient data to better learn the grounding component of the output. \section{Experiments} \begin{table}[t] \vspace{-0.1in} \small \centering \caption{Video action recognition models for extracting semantic features} \begin{tabular}{@{}c|c|c|c@{}} \toprule Arch & Backbone & Dim & Pretrain\\ \midrule TPN \cite{Yang2020TemporalPN} & ResNet50/101 & 2048 & K400\\ GTA \cite{he2020gta} & ResNet50 & 2048 & K400\\ VTN \cite{neimark2021video} & ViT-Base & 384 & K400 \\ irCSN \cite{Tran2019} & ResNet152 & 2048 & IG65M + K400\\ I3D-Slow \cite{Feichtenhofer2019SlowFastNF} & ResNet101 & 2048 & OmniSource\\ \bottomrule \end{tabular} \label{table:1} \vspace{-0.1in} \end{table} \subsection{Dataset and Evaluation}\label{dataeval} The SoccerNet-v2 dataset contains broadcast videos of 550 soccer games. We mainly use the LQ version of these videos at 25fps with a resolution of $398\times 224$. In addition, we collect broadcast videos of 77 extra games in Spain Laliga 2019-2021 seasons. We check the extra videos and guarantee that they do not contain any game from the SoccerNet-v2 challenge set. We annotate these videos in the similar protocol as SoccerNet-v2, and convert videos into LQ in order to fine-tune feature extractors. We report the performance of our methods using the Average-mAP metric introduced by SoccerNet-v2. \subsection{Implementation Details} For the feature extraction stage, Table \ref{table:1} shows all the action recognition models we use with their main configurations. These models are pretrained from various large-scale datasets, including Kinetics-400 (K400)\cite{46330}, IG65M \cite{Ghadiyaram2019LargeScaleWP}, and Omnisource\cite{duan2020omni}. All models are fine-tuned on SoccerNet-v2 snippets to reach a reasonable top-1 classification accuracy between $78\%$ and $82\%$ (1-view test) on the test set. At test time, all models slide on the videos and produce features at 1fps. To boost the performance on the challenge set, we also fine-tune the feature extractors (action recognition models) on an enlarged snippet dataset, which contains snippets from the train, valid, test videos of SoccerNet-v2 and videos of 77 extra games. We denote the produced features as mega features if the extractors are fine-tuned on the enlarged snippet dataset. In our experiments, the spotting or grounding module is trained in two modes, regular and ultra. In the regular mode, train, valid, and test set each serves its own purpose following the same setting as the reference NetVLAD++ \cite{Giancola_2021_CVPR_Workshops} method for action spotting or Cmn \cite{Delige2020SoccerNetv2A} for replay grounding. In the ultra mode, we make the spotting/grounding module to learn from as much data as possible, thus we use all features from train, valid, and test sets to train the spotting/grounding module, for a fixed amount of epochs. For the action spotting task, we use a learning rate of $10^{-4}$ for the NetVLAD++ model. The ultra mode training stops at $40$ epochs. For the tranformer model, we use a learning rate of $5\times10^{-4}$ and stop at $50$ epochs. For the replay grounding task, a learning rate of $2\times10^{-4}$ is adopted using the transformer model and training stops at $40$ epochs in ultra mode. \subsection{Results and Analysis} \subsubsection{Action Spotting} Table \ref{table:2} shows the performance of our methods with different configurations. When using only one (ordinary) feature from TPN-r50 and perform spotting using NetVLAD++ in the regular mode, as shown in the first row of Table \ref{table:2}, we achieve an Average-mAP of $62.96\%$ and $62.35\%$ on the test and challenge set, respectively, which is about $9\%\sim10\%$ gain over than the reference NetVLAD++'s $53.30\%$ and $52.54\%$ on test and challenge sets, respectively. This result shows the superior advantage of using a recent action recognition model fine-tuned on SoccerNet-v2 as a feature extractor. When using 4 features or 5 features combined as the input of the spotting module, as shown in row 3 and 6 in Table \ref{table:2}, we obtain about $5\% \sim 9\%$ gain on both the test and the challenge sets over the 1 feature case. Such comparison shows feature combination also significantly improves the performance. Training the spotting module in the ultra mode with 4 mega features results in a challenge Averge-mAP of $68.68\%$ (row 5 in Table \ref{table:2}), compared to the regular mode with the ordinary features at $67.51\%$ (row 3 in Table \ref{table:2}) and the regular mode with the mega features at $67.57\%$ (row 4 in Table \ref{table:2}). This comparison indicates that it improves the generalization power only if both stages use more data for training. Comparing row 6 and 8 in Table \ref{table:2}, we can see that adjusting chunk size/NMS window size from $15/30$ to $7/20$ leads to additional $1.5\% \sim 2\%$ gain on the test and challenge sets. Our transformer based spotting module, trained in the ultra mode with mega features plus adjusted chunk size/NMS window size, achieves the best challenge Average-mAP at $74.84\%$ as shown in row 10 in Table \ref{table:2}. While the NetVLAD++ based module, trained in the same setting, achieves similar performance: $74.63\%$ on the challenge set. \subsubsection{Replay grounding} We summarize our experimental results for replay grounding task in Table \ref{table:results}. As we can see from the table, taking the fine-tuned features as input significantly improved the grounding results compared to the baseline average-AP in Row 1. In addition, based on the same grounding module, combining more features extracted with different action recognition models leads to further improvements. We also observed superior performance by using a large batch size of $128$. To further improve the performance, we also investigated several post-processing techniques to refine the grounding module outputs, based on our analysis of the replay pattern: $\bullet$ Filtering: For Cmn based grounding module, we eliminate all spotting results 1) after the end timestamp of the replay, 2) more than 100 seconds (threshold) prior to the end timestamp of the replay. Note the filtering threshold in Row 9 was set to 120 seconds. $\bullet$ Fusion: Taking advantage of the action spotting results, we incorporate the action classification information into the replay grounding task. For each queried replay clip with start time $T$, we adopt the following procedures. First, we filter spotting predictions with top-3 most frequent labels of replay actions (\emph{i.e}\onedot} \def\Ie{\emph{I.e}\onedot, 'Foul', 'Goal', 'Shots-off target') and with the predicted scores higher than $S$. Second, the first and second nearest spotting predictions to the replay clip start time $T$ are selected, and satisfying the constraint that each prediction falls into the temporal window range $[T-W, T]$, because the actual action should happen before the relay action. Third, we use the spotting confidence score as the replay grounding confidence score, and the score of the nearest prediction is multiplied with a factor $\beta_1$ and the second-nearest prediction is multiplied with $\beta_2$. Through experiments, we find that $W=42, S=0.02, \beta_1=1.25, \beta_2=0.8$ achieves the best performance. $\bullet$ NMS: We combine the grounding results from Cmn model and transformer model, normalize all event scores to $[0.0, 1.0$, and apply an NMS (Non Maximum Suppression) to reduce positive spots within a window size of $25$. The combined post processing techniques achieved a decent performance improvement, around $12\%$ comparing Row 5 and Row 8. However, our best result is achieved using the transformer based grounding module described in Section \ref{replay:transformer} and trained in ultra mode, which is $71.9\%$ as shown in Row 9. Specifically, we trained the transformer in $40$ epochs and it took about 3 hours on a TitanX GPU, which is significantly faster than training the Siamese neural network in the baseline approach. \section{Conclusion} We present the two-stage paradigm for the action spotting and replay grounding tasks, and fine-tune action recognition models on soccer videos to extract semantic features and using Transformer in the spotting and grounding modules. We achieve the state-of-the-art results on the challenge set of SoccerNet-v2. Developing the proposed action spotting and replay grounding pipeline in soccer videos is the first step for machines to fully understand sports videos, which can dramatically benefit sports media, broadcasters, YouTubers or other short video creators. We believe the presented methodology can be extended to detect and locate events in other sports videos. We released our semantic features to support further soccer video understanding research. \begin{table}[t] \small \centering \caption{Experimental results using different features, models, and window sizes. In the features column, each number stands for total types of features used: 1 for TPN-r50 only; 4 for TPN-r50, TPN-r101, GTA, and irCSN; 5 for TPN-r101, VTN, GTA, irCSN, and I3D-Slow; 6 for TPN-r50, TPN-r101, VTN, GTA, irCSN, and I3D-Slow. In the spotting column, NV stands for NetVLAD++, and TF stands for Transformer. In the test column, ``-" means the result is not meaningful, due to the reason that test set is used in fine-tuning. In the challenge column, ``N/A" means it is not evaluated due to limited availability.} \begin{tabular}{@{}c|c|c|c|c|c@{}} \toprule Row & Features & Spotting & Chunk/NMS & Test & Challenge\\ \midrule 1 & ResNet & NV & 15/30 & 53.30 & 52.54 \\ \midrule 2 & 1 & NV & 15/30 & 62.96 & 62.35 \\ 3 & 4 & NV & 15/30 & 67.97 & 67.51 \\ 4 & 4 mega & NV & 15/30 & - & 67.57 \\ 5 & 4 mega & NV ultra & 15/30 & - & 68.68 \\ 6 & 5 & NV & 15/30 & 72.64 & 71.08 \\ 7 & 5 & TF & 7/20. & 73.77 & N/A \\ 8 & 5 & NV & 7/20 & 74.05 & 73.19 \\ 9 & 6 mega & NV ultra & 7/20 & - & 74.63 \\ 10 & 6 mega & TF ultra & 7/20 & - & 74.84 \\ \bottomrule \end{tabular} \vspace{-0.1in} \label{table:2} \end{table} \iffalse \begin{table}[t] \centering \begin{tabular}{@{}c|c|c|c@{}} \toprule Arch & Backbone & Dim & Pretrain\\ \midrule TPN \cite{Yang2020TemporalPN} & ResNet50/101 & 2048 & K400\\ GTA \cite{he2020gta} & ResNet50 & 2048 & K400\\ VTN \cite{neimark2021video} & ViT-Base & 384 & K400 \\ irCSN \cite{Tran2019} & ResNet152 & 2048 & IG65M + K400\\ I3D-Slow \cite{Feichtenhofer2019SlowFastNF} & ResNet101 & 2048 & Omni + K400\\ \bottomrule \end{tabular} \caption{Models for extracting features} \label{table:1} \end{table} \fi \iffalse \begin{table}[t] \small \centering \caption{Experimental results using different number of features, grounding models, batch size (BS) and post-processing (PP) techniques including filtering (FT), fusion (FS), and NMS} \begin{tabular}{@{}c|c|c|c|c|c|c@{}} \toprule Row & Features & Grounding & PP & BS & Test & Challenge\\ \midrule 1 & ResNet & CALF & N/A & 32 & 40.35 & 40.75 \\ \midrule 2 & 2 & CALF & N/A & 32 & 54.69 & 52.11 \\ 3 & 3 & CALF & N/A & 64 & 57.2 & 55.79 \\ 4 & 1 & TF & N/A & 128 & 69.2 & 58.62 \\ 5 & 5 & CALF & N/A & 128 & 63.25. & 59.26 \\ 6 & 5 & CALF & FT &128 & 66.11 & 62.19 \\ 7 & 5 & CALF & FT,FS & 128 & ?? & 64.01 \\ 8 & 5 & CALF,TF ultra & FT,FS,NMS& 128 & - & 71.59 \\ 9 & 5 & TF ultra & N/A & 128 & 75.6 & 71.9 \\ \bottomrule \end{tabular} \label{table:results} \end{table} \fi \begin{table}[t] \small \centering \caption{Experimental results using different number of features; grounding models, including CALF-more-negative (Cmn), Transformer (TF); batch size (BS); and post-processing (PP) techniques, including filtering (FT), fusion (FS), and NMS. For features, 1 for VTN; 2 for TPN-r101 and irCSN; 3 for TPN-r101, irCSN and GTA; 5 for TPN-r101, VTN, GTA, irCSN, and I3D-Slow. TF-u denotes training the transformer in ultra mode. The evaluation metric is average-AP as defined in \cite{Delige2020SoccerNetv2A}} \begin{tabular}{@{}c|c|c|c|c|c|c@{}} \toprule Row & Features & Grounding & PP & BS & Challenge\\ \midrule 1 & ResNet & Cmn & N/A & 32 & 40.75 \\ \midrule 2 & 2 & Cmn & N/A & 32 & 52.11 \\ 3 & 3 & Cmn & N/A & 64 & 55.79 \\ 4 & 1 & TF & N/A & 128 & 58.62 \\ 5 & 5 & Cmn & N/A & 128 & 59.26 \\ 6 & 5 & Cmn & FT &128 & 62.19 \\ 7 & 5 & Cmn & FT,FS & 128 & 64.01 \\ 8 & 5 & Cmn,TF-u & FT,FS,NMS& 128 & 71.59 \\ 9 & 5 & TF-u & FT & 128 & 71.9 \\ \bottomrule \end{tabular} \label{table:results} \end{table} {\small \bibliographystyle{ieee_fullname}
{ "attr-fineweb-edu": 2.466797, "attr-cc_en_topic": 0, "domain": "arxiv" }
BkiUeVrxK1UJ-2RwXO2c
\section{Introduction} Fencing has a long history, and it is even one of the five activities featured in modern Olympics. However, participation in fencing has been relatively low. This phenomenon can be partly explained by the relatively high requirements for equipment and space, and more importantly, the difficulty of learning the game. In particular, fencing is difficult to learn mainly because the game is not easy to understand. Fencing is more abstract than other sports, such as table tennis and tennis. Let us take table tennis as an example. A table tennis match consists of several games. Each game is divided into multiple rounds, and each round is divided into multiple strokes. Different strokes can depict the different stroke techniques. By contrast, in a fencing bout, each phrase cannot be clearly described. In addition, fencing is a sport that relies heavily on the use of tactics, and the adoption of reasonable game strategies depending on the opponent. However, little research has focused on the data analysis of fencing. Some of the literature focused on statistical models\cite{tarrago2016complementary}, but these methods are unsuitable for discovering unknown tactic patterns. There also exists visual methods designed to help fully describe a fencing competition\cite{dentsu2016yuki}, but the technical and tactical characteristics of the game need to be clearly understood to assist in the training and the tactical arrangement of the competition. Many visualization methods are used to analyze sports competition data, but these approaches are largely unsuitable for fencing data. Most of the methods for sports data visualization are targeted toward multiplayer sports, such as soccer\cite{perin2013soccerstories,sacha2017dynamic,stein2018bring} and basketball\cite{goldsberry2012courtvision,chen2016gameflow}, in which data show completely different characteristics with those in fencing. Meanwhile, for table tennis, tennis, and most other single-player sports, the players' alternating actions generate data with hierarchical structure, which cannot be easily derived in fencing game. Thus, the data visualization methods for single-player sports cannot be used for fencing data. The competition data generated for fencing competitions represent two related time-series data, and thus, visual methods to compare different sets of time series, such as those designed for bicycle sports, are suitable\cite{beck2016matrix}. Despite the similarities, the timing sequences of fencing data are not as simple as those in bicycle races. The time-series data of fencing competitions contain features of tactics, and these features cannot be extracted explicitly as those in table tennis and tennis, and they cannot be extracted automatically from time-series data, such as those in bicycle races. Due to the nature of fencing, none of the above methods can be used directly for fencing data analysis. To fill the research gap, we cooperate with experts to analyze the technical and tactical characteristics that are regarded as either clear or fuzzy in fencing competitions, and subsequently summarize the requirements to explore the fuzzy problems through visual analysis. To meet these requirements, we design and implement an interactive visualization system for fencing competition data, which we call FencingVis. We first analyze the action sequences of fencers in a bout. Then, we extract the different sets of tactical behavior and construct a corresponding graph model to express tactical combinations. We design a tactical flow graph to show the constructed model and provide multiple interactive ways to explore it. We also provide a number of well-coordinated views to supplement the tactical flow graph. The viewing mechanisms can display the generated information of the fencing competition from different perspectives and organically integrate with the tactical flow graph through consistent visual style and view coordination. We demonstrate the usability and effectiveness of the proposed system with three case studies. According to experts, FencingVis can help them find not only the tactical patterns hidden in the fencing bout, but also the technical and tactical characteristics of the fencers. Apart from analyzing a professional competition, FencingVis can be used to teach fencing beginners and demonstrate tactics to fencing enthusiasts. The main contributions of the present study are as follows: \begin{enumerate} \item In-depth understanding of fencing data and requirement analysis, and from these aspects, a model for two-level data to represent the tactical and technical information of fencing; \item A novel visual design representing information at both tactical and technical levels, as well as multiple interactions and view associations to explore embedded patterns; and \item Case studies using the data of a real competition to confirm the usefulness of the results of FencingVis to analysts. \end{enumerate} \section{Related Work} Our work is mainly related to the analysis of fencing data and the visualization and visual analysis of sports data. Thus, we initially introduce the related work in these two areas. \subsection{Analysis and Visualization for Fencing} The existing analytical methods for fencing data are mainly at the technical level, and these efforts often analyze athletes from a biomechanical point of view. For example, by comparing the differences between excellent fencers and beginners, we can determine the factors with the greatest influence to provide guidance for the training of fencers\cite{chen2017biomechanics}. However, these methods are applied only at the technical level, which means that the use of tactics are generally not considered. Previous studies have used statistical methods for the time series analysis of fencing competition data\cite{tarrago2016complementary}, but the existing empirical model for data collection and game process analysis are summarized as a combination of several known patterns\cite{tarrago2015analisis}. We use this description of the bout for reference but selected to record the most primitive data, such as feet and blade movements. This type of recording level can reduce cost and information loss caused by introducing domain knowledge into the data acquisition process. The data abstraction work in the subsequent analysis process is considered to generate various benefits. First, the different disciplines of fencing are shown to have different behavior patterns, but all the actions can be recorded as the movement of blades and feet at the most basic level. As such, we can use a unified format to record data and apply logic in subsequent processing. In addition, if empirical models may change in the future, then we can modify the system logic without having to recapture data. Little literature has focused on the visualization of fencing data. Dentsu Lab Tokyo has conducted a fencing visualization project\cite{dentsu2016yuki}, but the visualization relies on a large number of sensors installed on fencers for data collection, which is not possible in real-life fencing matches. The main purpose of their visualization is to make fencing easy to understand and improve the aesthetics of the game. The former is one of our design goals, but our more important task is to provide the ability to explore fencing data interactively, not simply to show the collected data. \subsection{Sports Visualization and Visual Analytics} The visualization and visual analysis of sports data has developed vigorously in the past two decades, albeit with many challenges and opportunities. Basole et. al. attempted to summarize the two major difficulties of visualizing sports data\cite{basole2016sports}. In addition to data complexity, the main issue of sports data visualization is the wide range of users whose information needs vary greatly. Previous work has often targeted the needs of a particular class of users, such as general sports enthusiasts, professional athletes and coaches, or psychological and physiological researchers. The design of FencingVis is oriented toward professional and non-professional groups and aimed to meet their information needs at different levels. The visualization of sports data can be divided into two categories from the perspective of content analysis. The first category represents the full tournament or league season, in which data either show the points and rankings of each team during the season\cite{perin2016using} or provide support for game prediction\cite{vuillemot2016sports}. The second category is meant to analyze a single game, in which the situational dynamics of the game and the game information of two competing teams are presented. Some of the work is aimed at multi-player games, such as soccer and basketball, that focus on the spatial position of athletes and mine tactical information by analyzing spatial distributions and athlete tracks\cite{sacha2014feature,perin2013soccerstories}, whereas others focus on showing and analyzing the use of tactics or the characteristic abilities of individual athletes\cite{polk2014tennivis,wu2018ittvis}. The present work falls into the category of single-player analysis, in which the two above mentioned working orientations are similarly covered in our scenario. TenniVis\cite{polk2014tennivis} uses scores and offers data to analyze amateur tennis matches. iTTVis\cite{wu2018ittvis} uses higher-level specialized data, such as placement and stroke techniques, to professionally analyze table tennis data. However, the above methods for content analysis are not applicable to fencing data because fencing has characteristics that differ from tennis and table tennis. First, in tennis or table tennis, every round ends with one player scoring. However, the case is different in fencing because some phrases (like round in tennis) can end with none of the fencers scoring (in sabre or foil) while some other phrases can end with both of the fencers scoring (in epee). Thus, fencing requires a different visual design. Second, the priority rules of fencing competitions are regarded extremely important and requires professional knowledge for judging. Non-enthusiasts may not understand why one fencer has scored with knowing the current priority. Therefore, the demonstration of priority is highly important in the visualization of fencing competitions. Finally, unlike sequential games, such as tennis and table tennis, fencing is a type of a simultaneous game with a different competition structure. \subsection{Other Relevant Visualization and Visual Analysis Methods} The data we analyzed are fencing competition data. Besides the unique characteristics of fencing, these data also have some more generalized data characteristics. Therefore, we have also referred to some relevant visualization methods in our design process. We designed the phrase list view to show the details of each phrase in the form of a list. For the purpose of data analysis, users can choose different sorting methods. And when the sorting method changes, a smooth animation transition can help users to better maintain their mental map. In order to achieve this effect, we draw lessons from the work of Gratzl et. al.\cite{gratzl2013lineup}. But their work is mainly to show the ranking, and we have more details, so we have added more elements to our list design, which needs to adjust the layout and animation design. In the design of tactical flow graph view, we also draw lessons from passing networks and transition network\cite{gudmundsson2017spatio}, but our nodes have different meanings, and more detailed designs have been added to the nodes and edges. Our design is aimed to better explore and analyze fencing data, but the domain experts may not familiar with visualization and visual analysis. To facilitate the user, our design also includes many considerations of storytelling.\cite{figueiras2014narrative} For example, we use animation playback in piste view to connect the user with the scene quickly. And in the design of view-coordination, we provide navigation of time view to connect the user with time and tactics.\cite{figueiras2014narrative} \section{Background and System Overview} In this section, we present an overview of fencing, including the required data and the target of analysis. We also briefly describe the main components of the system. \subsection{Background} Fencing is one of the representative activities of the Olympic Games, and it evolved from the swordsmanship techniques used for the military combats and duels of the Cold War era. Fencing comprises the three disciplines of epee, foil, and sabre, in which scores are earned by hitting the active body parts of an opponent. The basic techniques of fencing are divided into offensive and defensive techniques. Offensive techniques include attack, riposte, feint, lunge, beat attack, disengage, compound attack, continuation/renewal of attack, and flick. Defensive techniques include parry, circle parry, counter attack, and point-in-line. These techniques are learned through limited combinations of blade and feet movements. There are two types of fencing competitions: individual and group. In an individual match, the fencer who first scores 15 points wins the game. After a fencer scores 8 points in a sabre match, the two sides take a minute off before the game is continued. A fencing match is called a \textbf{bout}, which consists of several \textbf{phrases} (i.e., a set of related actions and reactions) . At the beginning of each phrase, two fencers stand behind the two on-guard lines at both sides of the \textbf{piste} (game field of fencing) and perform their actions after the referee gives the signal. Each phrase can be ended with one fencer scoring, both fencers scoring (epee), or neither of the two fencers scoring (sabre or foil). Unlike other sports, fencing has a special priority rule\cite{roi2008science}. This rule is applied to sabre and foil. The fencer who initiated the attack first gains the priority, and each attack will lead to the exchange of the priority. When two fencers hit each other at the same time, the one with the priority scored. If it is not possible to accurately judge the priority, both fencer will not score point in this clash. Judging the priority is not trivial, so showing the current priority is also one of the important considerations in our system design. \subsection{Data Description} Owing to the fast-pace characteristic of fencing, the detailed real-time recording of the match is difficult to conduct. Moreover, to avoid interfering with competitors, it is not convenient to install sensor devices. The existing method of analyzing fencing competitions is achieved by taking videos of a match from which sports data are extracted. In general, the accuracy of a game video is 30 frames per second (fps), which means that data are video-recorded frame by frame at a time accuracy of 1/30 seconds. For each data frame, the listing attributes are recorded (see \autoref{tab:data}). In the process of data-marking the game videos, continuous footwork does not necessarily mean effective segmentation. Thus, after consulting domain experts, we use the start time of the next action as the segmentation point of two continuous actions. Specifically, for the continuous forward movement, we use each front foot off the ground as the segmentation point. For the continuous backward movement, we use each the rear foot off the ground as the segmentation point. \subsection{Requirement Analysis} On the basis of extensive discussions with field experts, the characteristics of fencing that need to be considered in the visual design are as follows: \begin{itemize} \item Fencing is not as easy to understand as tennis and table tennis and similar games, and non-enthusiasts often find it difficult to readily understand fencing bouts. The visual design should therefore be able to contribute to the enhanced understanding of specific data users. \item The current information generated in most sports competitions are generally clear, but the case of fencing is different because the most important information, such as priority, should be considered. Viewers and data users with different experiences may have different understanding of gaming decisions, and this scenario implies that a common visualization is seldom achieved. \item Most sports are bound to end each round with one side scoring, but this is not the case in fencing. Fencers may either score both or neither in a phrase, and this scenario needs to be reflected in the system designs. \item The use of tactics is more important in fencing compared with other sports that place more emphasis on adaptability. Furthermore, fencing tactics are often planned in advance before each phrase, and thus, it is more valuable to show the impact of this strategy on the bout. \end{itemize} \begin{table}[] \centering \caption{Data Description} \label{tab:data} \begin{tabular}{|p{1.5cm}|p{6.5cm}|} \cline{1-2} Bout ID & The ID of the bout to which this event belongs.\\ \cline{1-2} Phrase ID & The ID of the phrase to which this event belongs.\\ \cline{1-2} Frame & The frame at which this event occurs.\\ \cline{1-2} Footwork1 & Beginning or finishing of forward, backward, or lunge of fencer 1\\ \cline{1-2} Footwork2 & Beginning or finishing of forward, backward, or lunge of fencer 2\\ \cline{1-2} Bladework1 & Beginning or finishing of attack, parry, riposte, or counter attack of fencer 1\\ \cline{1-2} Bladework2 & Beginning or finishing of attack, parry, riposte, or counter attack of fencer 2\\ \cline{1-2} Attack1 & Attacked position of fencer 1\\ \cline{1-2} Attack2 & Attacked position of fencer 2\\ \cline{1-2} Parry1 & Parried position of fencer 1\\ \cline{1-2} Parry2 & Parried position of fencer 2\\ \cline{1-2} Confrontation& Confrontation position of the two fencers on the strip.\\ \cline{1-2} Result & Result of this phrase, which is given by the referee.\\ \cline{1-2} Score & Record which fencer scored or none.\\ \cline{1-2} \end{tabular} \end{table} Based on these discussions, we summarized the requirements of our applications as follows: \begin{itemize} \item (R1) Show how the bout changes over time. \begin{itemize} \item (R1a) Show changes in scores. \item (R1b) Show the length of each phrase. \item (R1c) Show the changing of priority ownership. \end{itemize} \item (R2) Show a detailed comparison of phrases at both tactic and technical levels. \begin{itemize} \item (R2a) Show the applied tactics of both fencers in different phrases. \item (R2b) Show the technical details of the selected tactics. \end{itemize} \item (R3) Show how the tactics of both fencers are used in the entire bout. \begin{itemize} \item (R3a) Provide a summarized view of the tactics use during the bout. \item (R3b) Map the summarized view with the listed details of each phrase. \end{itemize} \item (R4) Conduct exploratory pattern discovery and result communication. \begin{itemize} \item (R4a) Arrange information according to user interaction to aid in pattern discovery. \item (R4b) Represent clearly the discovered pattern to aid in the communication of users. \end{itemize} \end{itemize} \begin{figure*}[tb] \centering \includegraphics[width=\linewidth]{MotionView3} \caption{The phrase list view presents the details of each phrase, and the two modes of this view separately represent two different abstract levels.} \label{fig:listview} \end{figure*} \subsection{System Overview} Our system consists of four views and a control panel to support analyzing the data on a match from four different perspectives, as shown in \autoref{fig:main}. \textbf{Bout View} shows how the bout changes over time (\textbf{R1}). \textbf{Phrase List View} shows the details of every phrase form both tactical and technical level in the form of a list (\textbf{R2}). \textbf{Tactical Flow Graph View} shows the overall statistics of the use of tactics of the fencers (\textbf{R3}). \textbf{Piste View} show the details of selected phrase in the form of an animation on the piste. \textbf{Control Panel} provides set of controls to support the interactive exploration (\textbf{R4}). \section{Visualization Design} We use consistent visual styles in the whole application: \begin{itemize} \item \textbf{Color:} We use red and blue as the representative colors of fencer 1 (left side) and fencer 2 (right side). This principle is embodied in all the views. Furthermore, in the glyphs expressing the tactical combination of two fencers, the proportion of their respective colors reflect who having the priority (\textbf{R1c}). \item \textbf{Layout:} For all visual elements used to compare two fencers, we try to arrange them horizontally, and keep the information related to fencer 1 on the left side and information related to fencer 2 on the right side, which is consistent with the actual positions of the fencers. If it really needs to be arranged up and down, information related to fencer 1 is always on the top. \end{itemize} \subsection{Bout View} Most game data naturally have time attributes, and both tactical and technical analysis are needed to consider the impact of time. The influence of time is reflected in two aspects. First, the different stages of the game and the psychological and physical changes of athletes significantly affect the game results. Second, the use of tactics has time-dependent characteristics. A fencer executes tactics based on the previous ones that he or she and his or her opponent have used over a certain period. The choice of tactic repetition or conversion therefore needs to be determined on the basis of the characteristics of the previous phrase and the tactics used by the opponent. Finally, analysts need to know how the competition has changed over time. The bout view (\autoref{fig:main}D) is therefore designed to show this information. The bout view mainly shows three elements: time, score change, and phrase duration. We use a tailored step-chart to show the variations in scoring according to time (\textbf{R1a}). In the chart, the x-axis mapping represents game time whereas the y-axis mapping represents scores. The red and blue rectangles represent the scores of both fencers, and the two scores naturally overlap (purple rectangle) when equal. To visually compare the duration of each turn (\textbf{R1b}), we add a horizontal rectangle below the x-axis to show each phrase. The color of the rectangle indicates the fencer who scores in the phrase, and a gray rectangle indicates that neither fencer has scored. The upper and lower views are designed to correspond to each other to help users visually observe the relationship of the three attributes. Considering that the break called in between the first and second half of the match often has a great impact on the course of the game, we use a vertical line to emphasize this split moment. To depict the selection of a phrases, we use a gray background to reflect the selected phrases. \textbf{Description:} The game time in our viewing scheme does not exactly correspond to actual time. Considering that the time required by fencing phrases accounts for a small proportion of actual time, the view becomes very sparse if mapped directly. As such, we map the time of the phrases directly, and the combined time of two adjacent rounds represent an interval of 1 second. \subsection{Phrase List View} \begin{figure*}[tb] \centering \includegraphics[width=\linewidth]{glyph} \caption{The pose of the fencers on the piste is abstracted and designated as four glyphs} \label{fig:glyph} \end{figure*} The phrase list view (\autoref{fig:main}A) presents the details of each phrase (\textbf{R2}), and the two modes of this viewing scheme separately represent two different abstract levels. In the \textbf{motion mode}, the motions of two fencers in each phrase are displayed as a function of time from the beginning of the phrase (\textbf{R2b}). Each listed item is subdivided into two lines (up and down) to describe the actions of the two fencers, with the upper and lower parts for the left and the right fencers, respectively. Bar charts are used to describe the motions of fencers in each phrase, in which the horizontal axis corresponds to time duration, as described by the number of frames. We use the data from 30 fps video, and thus, the scale is the same.The corresponding actual timespan can be viewed in the bout view when this is needed. The tall bars in the bar charts represent feet movements, whereas the short bars embedded in the tall bars represent bladeworks. Various colors are assigned for different type of actions, as illustrated in \autoref{fig:listview}E. In the \textbf{tactic mode}, the abstracted tactic nodes are displayed (\textbf{R2a}). Specifically, the motions of two fencers in a phrase is abstracted as a sequence of tactic nodes. The motions can be described as follows: \begin{enumerate} \item Start (\textit{S}): The state when the referee issues a start command, and both fencers behind the en-garde lines (or the position where the last phrase interrupted). The start state is the first state of each tactic sequences, and the source node of the tactical flow graph described in the next section. \item Forward-Forward (\textit{FF}): The state when both fencers attack forward at the same time. The game enters the \textit{FF} state in either of the two scenarios: the standard \textit{FF} when both fencers initiate forward movements simultaneously or the scenario in which both fencers make backward movements and switch to \textit{FF} states simultaneously. \item Backward-Backward (\textit{BB}): The state of simultaneous retreat. Considering that both fencers will likely step forward at the start of a fencing phrase, our decision to include the \textit{BB} state is not literally based on actual backward movements, but whether the \textit{BB} state has been planned before the phrase. The \textit{BB} state usually occurs when both fencers move a step or two forward, then switch to \textit{BB} to back away. When a fencer pauses as he or she moves forward and subsequently decides to further move forward, we also designate this phenomenon as a \textit{BB} state, i.e., the movement was a fast-changing forward. \item Backward-Forward (\textit{BF}): The state when the left fencer chooses a backward movement, whereas the right fencer chooses the forward movement. \item Forward-Backward (\textit{FB}): The state when the left fencer chooses a forward movement, whereas the right fencer chooses a backward movement. \item Fencer 1 score (\textit{1}): The state when the left fencer scores. \item Fencer 2 score (\textit{2}): The state when the right fencer scores. \item Simultaneous (\textit{=}): The state when both fencers hit each other simultaneously but no scores are given. \end{enumerate} The glyphs of the tactic nodes are designed to allow users to easily understand their meanings. The white rectangle represents the \textit{S} node, whereas the red, blue, and gray rectangles separately represent nodes \textit{1}, \textit{2}, and =, respectively. The color scheme is consistent with our basic design principle wherein red and blue represent that dominance of the left and the right fencers, respectively. In accordance with this deign principle, the glyphs of the four tactic nodes are designed with red and blue to denote two parts in different areas. However, the tactic nodes only show tactical information. Thus, the nodes are designed to represent additional details. In the \textit{S} node, one dot or two dots are separately used to show one step or two steps forward performed by both fencers. In the \textit{FF} nodes, two small rectangles depict the attack positions of both fencers (as shown in \autoref{fig:listview}A and B). In both modes, the labels to the left of each phrase box represent the index of the current phrase, which can be quickly retrieved. Meanwhile, the labels on the right show the outcome of each phrase. We use the first letter of the word to depict the call of the referee (i.e., A for attack, R for riposte, and S for simultaneous). The phrases are arranged as a series of rows from top to bottom in the order of the game, in which each row describes the information from one phrase. To support the interactive exploration of data, we provide different sorting approaches, as shown in \autoref{fig:listview}B. By sorting the phrases according to different rules, such as the order in which they occurred or different priorities of the tactical combinations, users can easily find the different features that represent the same tactic nodes and sequences (\textbf{R4a}). To highlight the score of a phrase, we use the color of the two fencers (red and blue) to render the border and text of the phrases. The non-coloring of fills is intended to avoid interference with the display information of internal details, i.e., if the internal details are set to translucent, then the color inconsistencies can lead to confusion. The items shown in motion view are affected by the filter setting, including those for the score and duration filters in the control panel as shown in \autoref{fig:listview}D. \begin{figure*}[tb] \centering \includegraphics[width=\linewidth]{case1} \caption{Tactical flow graph view of Men's Sabre Individual Golden Match of 2017 World Fencing Championship. Each glyph in the graph indicates a state in the competition. The colors in the glyphs indicates the advantages and disadvantages of both fencers in this state. The greater the color ratio of a fencer, the greater the advantage of the fencer. In the first row, the white 'S' indicates the beginning state, at which the situation between the two sides is equal. Glyph of 'BB' state is white in the middle and has a narrow red and blue color on both sides, indicating that both fencers retreat at the same time. In the second row, the red-blue equivalent 'FF' indicates that both fencers are advancing at the same time. Correspondingly, 'BF' and 'FB' indicate that one fencer is advancing and the other is retreating respectively. The three glyphs in the third row indicate blue side scored, on one scored and red side scored respectively. } \label{fig:case1} \end{figure*} \subsection{Tactical Flow Graph View} The bout view and the phrase list view are designed to allow users to understand the game more clearly with no much statistical summaries. Thus, to allow analysts to have a relatively deeper understanding, we designed a tactical flow graph to show summary of the tactics used by the fencers in the bout (\textbf{R3a}). In analyzing the tactical information of a fencing competition, we initially converted the collected time series to a tactical graph model. We consulted professional fencing coaches and athletes to devise a series of conversion rules. The modeling process needs to introduce some domain knowledge that cannot be directly retrieved at the level of data conversion. For example, to check whether the fencer has chosen forward or backward tactics, we need to look at the behavior after the first two or more steps. The use of strategy itself is a process of deception and anti-deception\cite{roi2008science}. A fencer usually uses two-step lunges, but one-step lunges to bring about a sudden attack is also used. Apart from the one-step lunge, a backward movement can also be performed after a two-step forward. The design of the tactical flow graph view (\autoref{fig:main}B) is based on the graph model we built upon the game data. After the in-depth discussions with experts, we arranged the 8 nodes according to the following criteria: \begin{itemize} \item The designed view shows the progress of the game from top to bottom. The nodes are naturally arranged in three layers as follows: The first layer, which contains nodes \textit{S} and \textit{BB}, represents the start of each phrase. The second layer, which contains nodes \textit{FB}, \textit{FF}, and \textit{BF}, represents the middle stage of the phrase. The third layer, which contains nodes \textit{1}, = , and \textit{2}, represent the end of the phrase. All data entries in the graph flow only from the upper layer to the lower layer or between the same layer. \item The node layout of the viewing scheme in the horizontal direction denotes the advantages gained by the fencers. The nodes are arranged in three columns as follows: The left column indicates that the right fencer dominates in terms of scoring or priority (the left fencer is retreating). The right column indicates that the left fencer dominates given the same conditions. The middle column represents the balance of power between the two fencers. \item Although the design principle is implemented in its entirety, tradeoffs are considered to ensure that the view schemes are clean and tidy. For instance, nodes \textit{S} and \textit{BB}, which should have been arranged up and down if the above rules are followed strictly. However, this would bring in substantial overlaps. We therefore arranged the two nodes as left and right, but close to one another in the same region, and this design is consistent with the above rules at the regional level. This approach also makes the flow \textit{S-BB} to be in a focused position (center on upper side) to show their relative importance to analysts (i.e., see case study discussion). \end{itemize} \subsubsection{Orthogonal Layout} The designing of our tactical flow graph fully considers the actual physical scene, so that experts can more intuitively understand the information it expresses. But the experts also need to compare the tactical flow graph of the two halves or of different matches. For the comparison between the first and second half, as there are just two items to be compared, we can directly use two translucent color to show them, as shown in \autoref{fig:case1}A. But when more than two graphs are compared, this superposition can cause confusion. We also tried to lay the flows side-by-side, but because of the intersection of current designs, this may also cause visual confusion. To better illustrate the comparison of multiple tactical flow graphs, we provide an alternative orthogonal layout, as shown in \autoref{fig:comparison}. We arrange all the nodes in an orthogonal grid, with the \textit{S} node at the center as the origin, and all the flow flows from the center to the periphery. Because each node of orthogonal layout can only be adjacent to four nodes, it is not enough to show all the inflow and outflow, so as a cost, we introduce redundant nodes. Since this layout is only intended to show the comparison of tactical flows and does not focus on the overall path, the introduction of redundant nodes does not impose a burden of understanding. But we make sure that the flow direction is from inside to outside, and that all flows occur only once. \subsubsection{Operations} The above view shows the information at the highest level. Users can find some patterns with higher abstraction level from this view, and further exploration is often needed for the content of interest. To do this, we designed a series of interactions to show more detailed information and related statistics. When the mouse moves to a node on the tactical flow graph, the flow through that node is highlighted. This is to help users better observe the relationship between each node and the flow. Similarly, when the mouse moves over a segment of data flow, the associated data flow is highlighted to help the user quickly observe the source and flow of the segment, as shown in \autoref{fig:case1}B. When the mouse moves to the \textit{FF} node, the upper left corner and the lower left corner of the view respectively displays the matrix of attack positions and the matrix of forward steps, which can help the user to get the patterns of the technical details of the fencers. \subsection{Piste View} Although the phrase list view shows the details of each phrase, it has a time-dimensional presentation with certain limitations, and one of these is the inability to reflect the position information on the piste, which is not intuitive for domain experts. To resolve this deficiency, we design the piste view to show the information of each phrase in the form of animation. Our animation design mainly reflects two aspects of information: the position on the field and the postures of both fencers. These two relevant aspects of information can be disassembled. We use a more flexible design to animate the two layers and subsequently determine the changes in the two types of information, which are stacked together to show the information of a phrase. Animating the position is relatively simple to implement, i.e., the position of the glyph is driven by the position of recorded data. In animating the pose, we abstract the pose of a fencer on the piste and designate them as four glyphs on the basis of the observed games and the suggestion of domain experts. The four glyphs (shown in \autoref{fig:glyph}) are as follows: \begin{itemize} \item \textbf{En-garde glyph} is used to represent the en-garde posture. \item \textbf{Lunge glyph} is used to represent the lunge posture. \item \textbf{Parry glyph} is used to represent the parry posture. \item \textbf{Riposte glyph} is used to represent both the riposte posture and the counter attack posture. \end{itemize} \begin{figure*}[tb] \centering \includegraphics[width=\linewidth]{ScorePhrases} \caption{Bout View of Men's Sabre Individual Golden Match of 2017 World Fencing Championship with highlighted scoring phrase of each fencers.} \label{fig:boutview} \end{figure*} \subsection{Control Panel} To meet the basic requirements of interactive data exploration, we provide some control components that are mainly used to filter and change the display mode of the viewed data, as shown in \autoref{fig:main}E. First, we provide a drop-down menu to select the game intended for analysis. We also provide filters for the phrases, which correspond to results and time dimensions. The resulting filters are used to select the combinations of the scoring phrase of the fencer 1, the scoring phrase of the fencer 2, and the no-score phrases. The time filter selects a time threshold through a time bar, and the phrases with duration turns that are longer than the thresholds are filtered out, thus leaving only short phrases. The effects of the two filters can be superimposed, and the results are updated synchronously on the bout view and the phrase list view. To support keeping the number of filtered items in mind, the filtering threshold and the number of filtered results are displayed simultaneously. The display mode control for the data flow can be selected by users in three modes: display the entire game, compare the first and second halves of the game, or exchange the position of fencers (i.e., a scenario for comparing the different games of the same fencer). If a fencer is simply positioned at different sides of the game field, then a comparison of the data flow graph will not be as intuitive as expected; switching the view in the same direction is an easier option. The background of the data stream can also be selected in viewing or unviewed mode. For instance, users who simply want to experience using the system without having to analyze the game can select the background to quickly view the different data flow nodes. \subsection{Cross-view Analysis} The interactive exploration of data is mainly realized through the association of views. The main view associations include the following: \begin{itemize} \item After the user modifies the filter settings, the phrase list view and the bout view are both updated synchronously. The former displays the filtered results only, whereas the latter highlights the filtered results with a gray background. \item When the mouse hover on the items in the phrase list view or in the bout view, the corresponding phrase in the other view highlights the border for selection. The data flow of this phrase is also displayed synchronously in the data flow graph view. \item User can click the items in the action view or bout view to trigger the animation of the corresponding phrase displayed in the piste view. \end{itemize} \section{Case Studies} We demonstrate the usability and effectiveness of the proposed system with three case studies. The three cases are based on the semi-finals and finals of the Men's Sabre Individual Golden Match of 2017 World Fencing Championship. We explore from three perspectives of single match analysis, comparison of multiple matches and comparison of different matches of the same fencer. In this process, domain experts participated in the whole process and put forward assumptions and guidance for our analysis in real time. \subsection{Men's Sabre Individual Golden Match of 2017 World Fencing Championship} \begin{figure*}[tb] \centering \includegraphics[width=\linewidth]{threeBout} \caption{Comparison of tactical flow graphs in the semi-finals and final of fencing world championships in 2017} \label{fig:sample} \end{figure*} We analyze the final match of Szatmari and Gu in the Men's Sabre Individual Golden Match of the 2017 World Fencing Championship. A quick look of the bout view shows Gu leading in the first half, then Szatmari reversed the game play in the second half. The wins and losses of the game are related to changes in strategies in the first and second halves of the game. Users can switch the tactical flow graph to the half-court view, as shown in \autoref{fig:case1}A. Most of Gu's scores in the first half were enabled by nodes \textit{FF} and \textit{BF} (shown by blue flow), but the scores were significantly reduced in the second half (shown by orange flow). At the same time, the sources of Szatmari's second-half main scoring are from nodes \textit{FB} and \textit{BF}. By summarizing the four obvious flow changes, we present the following preliminary conclusions: \begin{enumerate} \item Gu's scoring from both sides choosing direct attack were reduced in the second half (\autoref{fig:case1}A-1). \item Szatmari increased his scores in the situation that he moving forward while his opponent moving backward in the second half (\autoref{fig:case1}A-2). \item Gu's forward movement with Szatmari's backward movement mainly contributed to Gu's scores in the first half, but the scenario shifted to Szatmari's favor in the second half (\autoref{fig:case1}A-3). \end{enumerate} Based on the above observation results, together with domain experts, we are trying to find out the deeper reasons. Thus, we shift our attention from the lower half to the upper half of the tactical flow graph view. We can intuitively see that the \textit{S-FB} flow increased in the second half (\autoref{fig:case1}A-4) while the \textit{S-FF} flow decreased in the second half (\autoref{fig:case1}A-5). Together, they reflect Szatmari's forward tactic usage increased and Gu's backward tactic usage decreased in the second half of the bout. Another obvious change is the \textit{S-BF} flow, which increased significantly in the second half. To further analyze the case of the \textit{BF} node, we switch back to the sum flow mode of the tactical flow graph view and select the \textit{S-BF} segment, as shown in \autoref{fig:case1}B. At this point, most of the \textit{S-BF} flows end at node \textit{1}, which correspond to Szatmari's scoring, and most of which occurred in the second half. We can make the following assumptions about the course of the game: \begin{enumerate} \item Gu led the game by earning points in the first half by relying on his strong offensive ability. Szatmari failed to handle Gu's attacks regardless whether the forward or backward tactic was applied. \item After the game break, Szatmari adapted and started to retreat to counter Gu's attacks. Szatmari was successful and he scored many times, as depicted by the \textit{BF-1} segment. \item As a consequence of being countered many times, Gu began to hesitate on his attacks and opted for more backward movements. This scenario is reflected in the decrease in simultaneous-attacks and the increase in Szatmari's forward movement and Gu's retreat. \item Collectively, the above factors led to Gu's defeat. \end{enumerate} To confirm the above assumptions, we locate the start phrase of the second half in the game view and check the details of a few succeeding phrases in the motion mode of the phrase list view (\autoref{fig:listview}C). In the phrase list view, we see that Szatmari consecutively earned offensive points in the phrases at the beginning of the second half. This finding differs with our previous assumptions. At the same time, Szatmari's backward scorings were earned at the end of the game. As such, we redefine our understanding of the game as follows: \begin{enumerate} \item Gu led the game by earning points in the first half by relying on his strong offensive ability. \item However, although Gu's attacks were sharp, his physical energy was greatly consumed in the first half. Thus, at the start of the second half, Gu's attacks began to falter, which opened opportunities for riposting from the opponent. \item Gu changed his strategy, and his retreats increases. However, Gu's ability to retreat was insufficient, and his scoring was eventually surpassed by his opponent. \item Gu had no choice but continued attacking, but his opponent had detected the decline in his speed, and continuously retreated to counter Gu's attack. As a result, Szatmari finally won the game. \end{enumerate} We summarized this match. Gu's offensive ability was very strong, but this affected his physical bearing, and this condition led to his unsustainable attacks in the second half of the game. To sustain his gaming advantage, Gu should have focused on his physical strength to ensure that his offensive ability does not decline. Other gaming aspects, such focusing on the short board, might have also been an effective method for Gu to overcome his declining attack ability. Szatmari's ability can be regarded as relatively average, but his timely discovery of the changed state of his opponent in the second half proved to be a reasonable adjustment strategy. Szatmari eventually won the game. The scoring phrases of both fencers were also analyzed in the bout view. All of the long-duration phrases contributed to Gu's scoring (\autoref{fig:boutview}A), whereas Szatmari's scoring were all in the short-duration phrases (\autoref{fig:boutview}B). This finding also confirms that the technical ability of Gu is better than that of Szatmari, but the latter won the game because of his reasonable use of tactics. \begin{figure*}[tb] \centering \includegraphics[width=\linewidth]{Comparison} \caption{Our system also supports the analysis of the characteristics of a fencer by loading the fencer's matches together for comparison} \label{fig:comparison} \end{figure*} \subsection{Comparison of Three Bouts} On the basis of the first case, we compare the tactical flow graphs of the two semifinal and final matches of the Men's Sabre Individual Golden Match of the 2017 Fencing World Championship. For ease in comparison, we switch the position of Gu and Iburagimov for the semi-finals, such that the two fencers (Szatmari and Gu) are in the same position on the graph in their respective two games. The thickest flow always immediately catches the attention of viewers. The \textit{S-FF} flow is the thickest in all three graphs, which is consistent with the dominant position of attacks in sabre match. In addition, the \textit{S-BB} flow in the finals is significantly thicker than the corresponding flows in the two semi-finals, which indicate that the fencers played more conservatively and chose to retreat more frequently in the finals. By comparing the flow at the end of the bouts, we can see that the number of forward points lost by the winners of all matches is relatively small, for example, FB-2 in A and B and BF-1 in C are relatively thin. In addition, the FF-1 flow in C is also relatively thin, which shows that Gu has stronger ability to score when both fencers advancing. In addition, the flow of FB-BF and BF-FB in relatively thin in all the three views, which shows that the main winning method in the sabre competition is to attack directly or attack folled a fake retreat, and the situation of attack-defense conversion is relatively small and it rarely shows obvious advantages. This is obvious especially in high-level competitions, basic skills of fencers are generally equal. The FencingVis system can easily compare different games given the fencing scenarios mentioned above, and such ease in comparison is not available in previous work \cite{wu2018ittvis,polk2014tennivis}. \subsection{Comparison of Different Matches of The Same Fencer} Our system also supports the analysis of the characteristics of a fencer by loading the fencer's matches together for comparison, as shown in \autoref{fig:comparison}. Gu's latest three bouts were shown in \autoref{fig:comparison}A. For the sake of comparison, we have changed Gu's position in the three match all to the left. The blue flow represents his only win of the three games. We can find some interesting patterns in the view. First of all, in the match Gu wins, it is not occurred that he chooses forward tactic with his opponent retreat and scored, though which occurred several times in the other two matches (shown in \autoref{fig:comparison}A-2). According to this phenomenon, we can judge the attacking been riposted is Gu's weakness, and if the opponent will have a greater chance of winning if he caught this. In addition, we also found Gu get significant more with both sides choosing forward tactics than the other two matches (\autoref{fig:comparison}A-3). This is consistent with our previous analysis, Gu's offensive ability is powerful, and his opponent will suffer if he chooses to play a hard ball. But if the opponent chooses to retreat to riposte, his probability of scoring will be higher. In addition, we found that there was no \textit{BF-FB} and \textit{BB-FB} transition in Gu's winning game (\autoref{fig:comparison}A-1). The two flows both means the fencer gaining priority. They were supposed to be the embodiment of the superiority tactics, but has played the opposite effect for Gu. We have concluded that Gu's ability to turn forward after retreating is not good, and often easy to fall into the trap set by the opponent. We can see this in \autoref{fig:comparison}A-2, he is supposed to lose point after gain priority in this situation. Based on the above analysis, we can get a very clear conclusion. Gu's offensive ability is strong, retreat ability is weak, and if we encountered this opponent, must not play hardball, pull back is a better way than direct attack. Then let's take a look at Szatmari's two matches, both of which he wins, as shown in \autoref{fig:comparison}B. One of the most obvious characteristics is the \textit{BF-FB} flow's absence (\autoref{fig:comparison}B-1). This flow reflects the fencer's succefully handling the opponent's attack and gaining priority. As we can see, this is Szatmari's short board. Once the opponent gets the priority, he is hard to take it back. But to make up, he always can avoid falling into long-range offensive or defensive at the beginning of each phrase by using appropriate tactics. The proper use of tactics made up for his lack of ability in this area, making it difficult to find a very effective way to beat him, so he won the championship at last. \section{Discussion} In the design and development of FencingVis, we refer to the developmental process model proposed by Sedlmair et al.\cite{sedlmair2012design}. However, substantial problems need to be clarified. First, fencing data comprise two related time-series data. We initially planned to adopt the time-series data analytical method, an approach widely accepted for fencing data. However, upon further understanding of the problem, we found that the time-series data of fencing involve a hierarchical structure. The motion-series data at different stages of a phrase also involve different technical and tactical information. For example, two fencers always choose to move forward at the beginning of the phrase, and only one-step or two-steps forward is observed in this stage according to our statistics. The one-step and two-step movement in the initial stage of a phrase is closely related to the technical characteristics of fencing and the tactics selected by fencers, and both greatly influence succeeding competitions. By contrast, after entering the long-distance attack and defense, the number of forward and backward steps is often unimportant, but the depth of the lunge will be more important than the timing. As such, we conduct a two-level analysis. We first represent the time-series data of a phrase as a sequence of tactical-combination behavior corresponding to higher-level abstraction, which reflects the tactical application of both fencers in a specific phrase. Then, for each node of the sequence, we analyze the technical ability, such as reaction time and attack position, of both fencers. Previous studies mainly focused on analyzing the technical capabilities of a sequence in its entirety. However, according to our research, additional detailed patterns and features can be found by analyzing the data using a multi-layer framework. On the basis of this hierarchical structure, the data presentation and its interactive features are designed as the two abstract levels of tactical information and technical information, respectively. However, our focus is still on the tactical level. Substantial work has been conducted to analyze the technical characteristics of fencing. By proposing FencingVis, we hope to offer users with a new perspective of analysis based on the abovementioned tactical framework so that the technical characteristics of fencers can be easily understood. In the process of system design and development, we cooperate deeply with three experts in fencing field. They were all elite fencers before, and teach in universities and professional teams now. Many of their professional suggestions give us a lot of inspiration. They also suggest that our system, apart from analyzing professional competitions, can also be used to teach fencing beginners or demonstrate tactics to fencing enthusiasts. \section{Conclusion} We design and implement a visual analysis system called FencingVis for visualization and visual analysis of fencing data. We use multiple views to present the data from different perspectives and provide exploratory analysis methods to domain experts through a series of interactive filters and view coordination. By using three case studies as basis, we prove that FencingVis can help domain experts find the patterns that were previously difficult to discover. The experts have also shared substantial positive feedback for the system. At present, FencingVis is mainly aimed toward the individual matches of sabre. Given that the rules for epee and foil slightly differ, the system needs to be further improved to accommodate the other two principles. Similarly, we also plan to improve our system to support the team events. Team events involve more fencers, the ordering of the fencers adds a new dimension to the analysis, which is a problem to be solved in our future work. \acknowledgments{ This research is partially supported by National Natural Science Foundation of China (Grant Nos. 61572274, 61672307, 61272225) and the National Key R\&D Program of China (Grant No. 2017YFB1304301). } \bibliographystyle{abbrv-doi}
{ "attr-fineweb-edu": 2.9375, "attr-cc_en_topic": 0, "domain": "arxiv" }
BkiUgdvxK7IABDHEaD6K
\section{Introduction}\label{S:intro} In his 1981 \textit{Baseball Abstract} \cite{james}, Bill James posed the following problem: suppose two teams $A$ and $B$ have winning percentages $a$ and $b$ respectively, having played equally strong schedules in a game such as baseball where there are no ties. If $A$ and $B$ play each other, what is the probability $p(a,b)$ that $A$ wins? This question is perhaps more relevant to other sports, because in baseball the outcome is particularly sensitive to the pitching matchup. (In 1972, the Philadelphia Phillies won 29 of the 41 games started by Steve Carlton, and 30 of the 115 games started by their other pitchers.) The answer is quite interesting, even if its applicability is somewhat limited by the tacit assumption of uniformity. For $0<a<1$ and $c>0$, define $q_{c}(a)$ by \begin{equation}\label{E:talent} a=\frac{q_{c}(a)}{q_{c}(a)+c}\text{.} \end{equation} James calls $q_{\frac12}(a)$ the log5 of $a$, and does not consider any other values of $c$. Under the assumption of uniformity, he claims that $p(a,b)$ would be given by the function \begin{equation}\label{E:log5} P(a,b)=\frac{q_{\frac12}(a)}{q_{\frac12}(a)+q_{\frac12}(b)}\text{.} \end{equation} In this context, we take uniformity to mean that a team's likelihood of defeating another team is determined only by their winning percentages. For example, this assumption ignores the impact of the starting pitchers and precludes the situation where one team has a tendency to do particularly well or particularly poorly against another team. This technique is sometimes called the log5 method of calculating $p(a,b)$, although we will avoid using this name as there is nothing obviously logarithmic about it. It is easy to see from (\ref{E:talent}) that \begin{equation*} q_{c}(a)=\frac{ca}{1-a}\text{.} \end{equation*} Substituting this expression into (\ref{E:log5}), we see that \begin{equation}\label{jamesfunction} P(a,b)=\frac{a(1-b)}{a(1-b)+b(1-a)}\text{,} \end{equation} not only for $c=\frac{1}{2}$ but for any positive $c$. The explicit form of $P(a,b)$ was first given by Dallas Adams \cite{james}, who also christened it the \textit{James function}. It makes sense to extend the James function to values of $a$ and $b$ in the set $\{0,1\}$, except when $a=b=0$ or $a=b=1$. In these two cases, we would not have expected to be able to make predictions based on winning percentages alone. Moreover, both cases would be impossible if the two teams had previously competed against each other. \bigskip James's procedure can be interpreted as a twofold application of the general method known as the \textit{Bradley--Terry model} (or sometimes the \textit{Bradley--Terry--Luce model}). If $A$ and $B$ have worths $w(A)$ and $w(B)$ respectively, the probability that $A$ is considered superior to $B$ is \[ \pi(A,B)=\frac{w(A)}{w(A)+w(B)}\text{.} \] Despite the attribution of this model to Bradley and Terry \cite{bradleyterry} and to Luce \cite{luce}, the basic idea dates back to Zermelo \cite{zermelo}. The question, of course, is how to assign the ``right" measure for the worth of $A$ in a particular setting. In chess, for instance, it is common to express the worth of a player as $10^{R_{A}/400}$, where $R_{A}$ denotes the player's Elo rating (see \cite{glickmanjones}). (The rating of chess players is the question in which Zermelo was originally interested. Good \cite{good}, who also considered this problem, seems to have been the first to call attention to Zermelo's paper.) Another example is James's so-called Pythagorean model (introduced in \cite[p.\ 104]{james0} and discussed further in \cite{miller1}) for estimating a team's seasonal winning percentage, based on the number $R$ of runs it scores and the number $S$ of runs it allows. In this case, the worth of the team is $R^{2}$ and the worth of its opposition is $S^{2}$. In the construction of the James function, we can view the measure of a team's worth as being obtained from the Bradley--Terry model itself. We begin by assigning an arbitrary worth $c>0$ (taken by James to be $\frac{1}{2}$) to a team with winning percentage $\frac{1}{2}$. Equation (\ref{E:talent}) can be construed as an application of the Bradley--Terry model, where the worth of a team is determined by the assumption that its overall winning percentage is equal to its probability of defeating a team with winning percentage $\frac{1}{2}$. Equation (\ref{E:log5}) represents a second application of the Bradley--Terry model, where each team has an arbitrary winning percentage and the measure of its worth comes from the previous application of the model. This area of study, which is usually called the theory of paired comparisons, has focused from the outset on the question of inferring worth from an incomplete set of outcomes \cite{zermelo}. (See \cite{david} for a thorough treatment, as well as \cite{glickman} and \cite{stob} for additional context.) James, on the other hand, takes the worths to be known and uses them to determine the probability of the outcomes. We will adopt a similar point of view, emphasizing a set of axiomatic principles rather than a specific model. \bigskip James's justification \cite{james} for his method does not invoke the Bradley--Terry model, but rather the fact that the resulting function $P(a,b)$ satisfies six self-evident conditions: \begin{enumerate} \item $P(a,a)=\frac12$. \item $P(a,\frac12)=a$. \item If $a>b$ then $P(a,b)>\frac12$, and if $a<b$ then $P(a,b)<\frac12$. \item If $b<\frac12$ then $P(a,b)>a$, and if $b>\frac12$ then $P(a,b)<a$. \item $0\le P(a,b)\le 1$, and if $0<a<1$ then $P(a,0)=1$ and $P(a,1)=0$. \item $P(a,b)+P(b,a)=1$. \end{enumerate} Condition (1) pertains to the situation where two different teams have the same winning percentage (as opposed to a single team competing against itself). To avoid contradicting (5), condition (4) should exclude the cases where $a=0$ and $a=1$. We will call this set, with this slight correction, the \textit{proto-James conditions}. (James originally referred to them as ``conditions of logic.") In addition to presenting some empirical evidence for (\ref{E:log5}), James makes the following assertion. \begin{conj}[1981] The James function $P(a,b)$ is the only function that satisfies all six of the proto-James conditions. \end{conj} \noindent Jech \cite{jech} independently proposed a similar, albeit shorter list of conditions. Although he did not consider the James conjecture, he was able to prove a uniqueness theorem pertaining to a related class of functions. The purpose of this paper is to examine the mathematical theory underlying the James function and to demonstrate that the James conjecture is actually false. In fact, we will introduce and study a large class of functions that satisfy the proto-James conditions.\bigskip While the proto-James conditions are certainly worthy of attention, we prefer to work with a slightly different set. The following conditions apply to all points $(a,b)$ with $0\leq a\leq 1$ and $0\leq b\leq 1$, except for $(0,0)$ and $(1,1)$: \begin{enumerate}[label=(\alph*)] \item $P(a,\frac12)=a$. \item $P(a,0)=1$ for $0<a\leq 1$. \item $P(b,a)=1-P(a,b)$. \item $P(1-b,1-a)=P(a,b)$. \item $P(a,b)$ is a non-decreasing function of $a$ for $0\leq b\leq 1$ and a strictly increasing function of $a$ for $0<b<1$. \end{enumerate} We shall refer to conditions (a) to (e) as the \textit{James conditions}. Condition (d), which is not represented among the proto-James conditions, simply states that the whole theory could be reformulated using losing percentages rather than winning percentages, with the roles of the two teams reversed. Together with condition (c), it is equivalent to saying $P(1-a,1-b)=1-P(a,b)$, which may seem more natural to some readers. It should be clear from (\ref{jamesfunction}) that the James function satisfies James conditions (a) to (d). We will verify condition (e) in Section \ref{S:adams}. It is fairly obvious that the James conditions imply the proto-James conditions. Condition (a) is identical to condition (2). Condition (c) is condition (6), which implies (1) by taking $b=a$. Condition (e) is stronger than (3) and (4), and in concert with (1) and (2) implies them both. Combined with (c) or (d), it also implies that $P(a,b)$ is a non-increasing function of $b$ for $0\leq a\leq 1$ and a strictly decreasing function of $b$ for $0<a<1$. Finally, (b) implies the second of the three parts of (5). Together with (c), it also implies that $P(0,b)=0$ if $0<b\leq 1$. By taking $b=0$ in (d) and replacing $1-a$ with $b$, condition (b) further implies that $P(1,b)=1$ if $0\leq b<1$, and this together with (c) gives $P(a,1)=0$ for $0\leq a<1$, which is (a hair stronger than) the third part of (5). These facts, combined with (e), show that $0<P(a,b)<1$ when $0<a<1$ and $0<b<1$, which implies the first part of (5). We will focus our attention on functions that satisfy the James conditions, and hence also the proto-James conditions. See \cite{supplement}, the online supplement to this paper, for an example of a function that satisfies the proto-James conditions but not the James conditions. \section{Verification of the James Function}\label{S:miller} While the Bradley--Terry model is practically ubiquitous, its applicability to this situation is not obvious from an axiomatic perspective. We now present a self-contained proof that, under an intuitive probabilistic model in which $a$ and $b$ are the probabilities of success in simultaneous Bernoulli trials, the James function $P(a,b)$ represents the probability $p(a,b)$. This model satisfies the assumption of uniformity discussed in Section \ref{S:intro}. The following argument was discovered by the third-named author several years ago \cite{miller}, but has not previously appeared in a formal publication. \begin{thm}\label{millertime} The probability $p(a,b)$ that a team with winning percentage $a$ defeats a team with winning percentage $b$ is given by the James function \[ P(a,b)=\frac{a(1-b)}{a(1-b)+b(1-a)}\text{,} \] except when $a=b=0$ or $a=b=1$, in which case $p(a,b)$ is undefined. \end{thm} \begin{proof} Let teams $A$ and $B$ have winning percentages $a$ and $b$ respectively. Independently assign to each of $A$ and $B$ either a $0$ or $1$, where $A$ draws 1 with probability $a$ and $B$ draws $1$ with probability $b$. If one team draws $1$ and the other $0$, the team with $1$ wins the competition. If both teams draw the same number, repeat this procedure until they draw different numbers. The probability that $A$ draws 1 and $B$ draws $0$ on any given turn is clearly $a(1-b)$, while the opposite occurs with probability $b(1-a)$. The probability that $A$ and $B$ both draw $1$ is $ab$, and the probability that they both draw $0$ is $(1-a)(1-b)$. Hence \begin{equation}\label{E:total} ab+(1-a)(1-b)+a(1-b)+b(1-a)=1\text{.} \end{equation} It follows that $0\le ab+(1-a)(1-b)\le 1$ and $0\le a(1-b)+b(1-a)\le 1$ whenever $0\leq a\leq 1$ and $0\leq b\leq 1$. We can conclude the argument in either of two ways. Since the probability that $A$ and $B$ draw the same number is $ab+(1-a)(1-b)$, in which case they draw again, $p(a,b)$ must satisfy the functional equation \[ p(a,b)=a(1-b)+\left[ab+(1-a)(1-b)\right]p(a,b)\text{.} \] The only case in which we cannot solve for $p(a,b)$ is when $ab+(1-a)(1-b)=1$. By (\ref{E:total}), this situation only occurs when $a(1-b)+b(1-a)=0$, which implies that either $a=b=0$ or $a=b=1$. Otherwise, $p(a,b)=P(a,b)$. Alternatively, we may observe that the probability that $A$ wins on the $n$th trial is \[ a(1-b)\left[ab+(1-a)(1-b)\right]^{n-1}\text{,} \] and so the probability that $A$ wins in at most $n$ trials is \[ a(1-b)\sum_{k=1}^n\left[ab+(1-a)(1-b)\right]^{k-1}\text{.} \] As $n$ tends to $\infty$, this expression yields a convergent geometric series unless $ab+(1-a)(1-b)=1$. Using (\ref{E:total}), we again obtain the James function. \end{proof} This proof relies on a particular model for the relationship between winning percentages and the outcome of a competition. Under different assumptions about this relationship, it seems possible that we would obtain other approximations for $p(a,b)$. Any such function would presumably also satisfy the James conditions. \section{Properties of the James function}\label{S:adams} In this section, we will consider several important properties of the James function. We begin by computing the partial derivatives of $P(a,b)$, which will lead to an observation originally due to Dallas Adams. Note that \begin{equation}\label{partial1} \frac{{\partial}P}{{\partial}a}=\frac{b(1-b)}{\left[a(1-b)+b(1-a)\right]^2}\text{,} \end{equation} which shows that the James function satisfies condition (e), and also \begin{equation}\label{partial2} \frac{{\partial}P}{{\partial}b}=\frac{-a(1-a)}{\left[a(1-b)+b(1-a)\right]^2}\text{.} \end{equation} Furthermore, we have \[ \frac{{\partial}^2P}{{\partial}a^2}=\frac{-2b(1-b)(1-2b)}{\left[a(1-b)+b(1-a)\right]^3}\text{,} \] so that, as a function of $a$, it follows that $P(a,b)$ is concave up for $\frac12<b<1$ and concave down for $0<b<\frac12$. Similarly, \[ \frac{{\partial}^2P}{{\partial}b^2}=\frac{2a(1-a)(1-2a)}{\left[a(1-b)+b(1-a)\right]^3}\text{.} \] Adams makes an interesting remark relating to the mixed second partial derivative \begin{equation}\label{E:mixed} \frac{{\partial}^2P}{{\partial}a{\partial}b}=\frac{a-b}{\left[a(1-b)+b(1-a)\right]^3}\text{.} \end{equation} It follows from (\ref{E:mixed}) that $\frac{{\partial}P}{{\partial}a}$, viewed as a function of $b$, is increasing for $b<a$ and decreasing for $b>a$, so it is maximized as a function of $b$ when $b=a$. Since $\frac{{\partial}P}{{\partial}a}$ is positive for every $0<b<1$, it must be most positive when $b=a$. Alternatively, (\ref{E:mixed}) tells us that $\frac{{\partial}P}{{\partial}b}$, viewed as a function of $a$, is increasing for $a>b$ and decreasing for $a<b$, so it is minimized as a function of $a$ when $a=b$. Since $\frac{{\partial}P}{{\partial}b}$ is negative for every $0<a<1$, we conclude that it is most negative when $a=b$. Adams interprets these facts in the following manner: since $P(a,b)$ increases most rapidly with $a$ when $b=a$ (and decreases most rapidly with $b$ when $a=b$), one should field one's strongest team when playing an opponent of equal strength \cite{james}. Once again, this observation is perhaps more interesting in sports other than baseball, where the star players (other than pitchers) play nearly every game when healthy, although James points out that Yankees manager Casey Stengel preferred to save his ace pitcher, Whitey Ford, for the strongest opposition. It seems particularly relevant to European soccer, where the best teams engage in several different competitions at the same time against opponents of varying quality, and even the top players must occasionally be rested. \bigskip In principle, there are two ways to increase the value of $P(a,b)$: by increasing $a$ or by decreasing $b$. Under most circumstances, a team can only control its own quality and not that of its opponent. There are some situations, however, such as the Yankees signing a key player away from the Red Sox, where an individual or entity might exercise a degree of control over both teams. Similarly, there are many two-player games (such as Parcheesi and backgammon) in which each player's move affects the position of both players. In any such setting, it is a legitimate question whether the priority of an individual or team should be to improve its own standing or to diminish that of its adversary. Recall that the gradient of a function signifies the direction of the greatest rate of increase. The next result, which has apparently escaped notice until now, follows directly from equations (\ref{partial1}) and (\ref{partial2}). \begin{prop}\label{gradient} For any point $(a,b)$, except where $a$ and $b$ both belong to the set $\{0,1\}$, the gradient of the James function $P(a,b)$ is a positive multiple of the vector \begin{equation*} \langle b(1-b),-a(1-a)\rangle\text{.} \end{equation*} In other words, to maximize the increase of $P(a,b)$, the optimal ratio of the increase of $a$ to the decrease of $b$ is $b(1-b):a(1-a)$. \end{prop} One consequence of this result is that when two teams have identical winning percentages, the optimal strategy for increasing $P(a,b)$ is to increase $a$ and to decrease $b$ in equal measure. The same fact holds when two teams have complementary winning percentages. In all other situations, the maximal increase of $P(a,b)$ is achieved by increasing $a$ and decreasing $b$ by different amounts, with the ratio tilted towards the team whose winning percentage is further away from $\frac{1}{2}$. In the extremal cases, when one of the two values $a$ or $b$ belongs to the set $\{0,1\}$, the optimal strategy is to devote all resources to changing the winning percentage of the team that is either perfectly good or perfectly bad. This observation is somewhat vacuous when $a=1$ or $b=0$, since $P(a,b)$ is already as large as it could possibly be, although the strategy is entirely reasonable when $a=0$ or $b=1$. It also makes sense that the gradient is undefined at the points $(0,0)$, $(0,1)$, $(1,0)$, and $(1,1)$, since these winning percentages do not provide enough information to determine how much one team must improve to defeat the other. \bigskip If $P(a,b)=c$, it is easy to see that $a(1-b)(1-c)=(1-a)bc$, which implies the next result. \begin{prop}\label{involution} If $0<a<1$, then $P(a,b)=c$ if and only if $P(a,c)=b$. In other words, for a fixed value of $a$, the James function is an involution. \end{prop} The practical interpretation of this result is simple to state, even if it is not intuitively obvious: if team $A$ has probability $c$ of beating a team with winning percentage $b$, then team $A$ has probability $b$ of beating a team with winning percentage $c$. The James conditions already imply this relationship whenever $b$ and $c$ both belong to the set $\{0,1\}$ or the set $\{\frac{1}{2},a\}$. Nevertheless, it is not evident at this point whether the involutive property is a necessary consequence of the James conditions. (Example \ref{Ex1} will provide an answer to this question.)\bigskip Proposition \ref{involution} has two further implications that are worth mentioning. The first is a version of the involutive property that holds for a fixed value of $b$: \begin{quote} If $0<b<1$, then $P(a,b)=1-c$ if and only if $P(c,b)=1-a$. \end{quote} The second is that the level curves for the James function (that is, the set of all points for which $P(a,b)=c$ for a particular constant $c$) can be written \begin{equation}\label{levelcurve} b=P(a,c)=\frac{a(1-c)}{a(1-c)+c(1-a)} \end{equation} for $0<a<1$. These level curves are the concrete manifestation of a straightforward principle: if a team $A$ improves by a certain amount, there should be a corresponding amount that a team $B$ can improve so that the probability of $A$ defeating $B$ remains unchanged. Each level curve represents the path from $(0,0)$ to $(1,1)$ that such a pair would take in tandem. (See Figure 1.) \begin{figure}[h] \scalebox{.8}{\includegraphics{hyperbolas}} \caption{The level curves for the James function $P(a,b)$.} \end{figure} We conclude this section with one more observation relating to these level curves. \begin{prop}\label{diffeq} For any $0<c<1$, the corresponding level curve for the James function $P(a,b)$ is the unique solution to the differential equation \[ \frac{db}{da}=\frac{b(1-b)}{a(1-a)} \] that passes through the point $(c,\frac{1}{2})$. \end{prop} \noindent Another way of stating this result is that, for two teams to maintain the same value of $P(a,b)$, they should increase (or decrease) their winning percentages according to the ratio $a(1-a):b(1-b)$. One can either verify this assertion directly, by solving the differential equation to obtain (\ref{levelcurve}), or by appealing to Proposition \ref{gradient} and recalling that the gradient is always perpendicular to the level curve at a particular point. \section{Jamesian functions}\label{S:isa} We will now consider the question of whether there is a unique function satisfying the James conditions. We begin with the following observation, which is implicit in the construction of the James function. \begin{prop}\label{notbrad} The James function is the only function derived from the Bradley--Terry model that satisfies the James conditions. \end{prop} \begin{proof} Suppose $\pi(A,B)$ satisfies the James conditions and is derived from the Bradley--Terry model. Let team $A$ have winning percentage $a$, with $0<a<1$, and let team $C$ have winning percentage $\frac{1}{2}$. Condition (a) implies that \[ a=\pi(A,C)=\frac{w(A)}{w(A) + w(C)}\text{.} \] Solving for $w(A)$, we obtain \[ w(A)= \frac{aw(C)}{1-a}= q_{c}(a)\text{,} \] where $c = w(C)$. Thus $\pi(A,B)$ agrees with the James function $P(a,b)$ when both $a$ and $b$ belong to the interval $(0,1)$. Since the James conditions uniquely determine the value of a function whenever $a$ or $b$ belongs to $\{0,1\}$, the functions $\pi(A,B)$ and $P(a,b)$ must be identical. \end{proof} Let $S$ denote the open unit square $(0,1)\times (0,1)$. We will say that any function $J(a,b)$, defined on the set $\overline{S}\setminus\{(0,0)\cup(1,1)\}$, that satisfies the James conditions is a \textit{Jamesian function}. Our immediate objective is to disprove the James conjecture by identifying at least one example of a Jamesian function that is different from the James function $P(a,b)$. Proposition \ref{notbrad} guarantees that any such function, if it exists, cannot be derived from the Bradley--Terry model. \begin{ex}\label{Ex1} We will reverse-engineer our first example of a new Jamesian function by starting with its level curves. Consider the family of curves $\{j_{c}\}_{c\in(0,1)}$ defined as follows: \[ j_{c}(a)=\left\{ \begin{matrix} \displaystyle\frac{a}{2c}, & 0<a\leq\displaystyle\frac{2c}{1+2c} \\ 2ca+1-2c, & \displaystyle\frac{2c}{1+2c}<a<1 \end{matrix} \right. \] for $0<c\leq\frac{1}{2}$ and \[ j_{c}(a)=\left\{ \begin{matrix} (2-2c)a, & \displaystyle 0<a\leq\frac{1}{3-2c}\\ \displaystyle\frac{a+1-2c}{2-2c}, & \displaystyle\frac{1}{3-2c}<a<1 \end{matrix} \right. \] for $\frac{1}{2}<c<1$. (See Figure 2.) These curves have been chosen to satisfy certain symmetry properties, which the reader can probably deduce but which we will not state explicitly. (Suffice it to say that $j_{c}(c)=\frac{1}{2}$ for all $c$.) We define the function $J(a,b)$ on $S$ by assigning to every point $(a,b)$ the value of $c$ associated with the particular curve $j_{c}$ that passes through that point. We assign the value $0$ or $1$ to points on the boundary of $S$, as dictated by the James conditions. \begin{figure}[h] \scalebox{.8}{\includegraphics{lines2}} \caption{The level curves for the function $J(a,b)$ in Example \ref{Ex1}.} \end{figure} A bit more work yields an explicit formula for $J(a,b)$, from which one can verify directly that all of the James conditions are satisfied: \[ J(a,b)=\left\{ \begin{matrix} \displaystyle\frac{a}{2b}, & (a,b)\in\mathrm{I}\\ \\ \displaystyle\frac{2a-b}{2a}, & (a,b)\in\mathrm{II}\\ \\ \displaystyle\frac{1-b}{2(1-a)}, & (a,b)\in\mathrm{III}\\ \\ \displaystyle\frac{1+a-2b}{2(1-b)}, & (a,b)\in\mathrm{IV}\\ \end{matrix} \right.\text{,} \] where I, II, III, and IV are subsets of $\overline{S}\setminus\{(0,0)\cup(1,1)\}$ that are defined according to Figure 3. \begin{figure}[h]\label{quadrants} \scalebox{.8}{\includegraphics{quadrants}} \caption{The subsets of $\overline{S}\setminus\{(0,0)\cup(1,1)\}$ in Example \ref{Ex1}.} \end{figure} Observe that the appropriate definitions coincide on the boundaries between regions, from which it follows that $J(a,b)$ is continuous on $\overline{S}\setminus\{(0,0)\cup(1,1)\}$. On the other hand, it is not difficult to see that $J(a,b)$ fails to be differentiable at all points of the form $(a,1-a)$ for $0<a<\frac{1}{2}$ or $\frac{1}{2}<a<1$. (With some effort, one can show that it is differentiable at the point $(\frac{1}{2},\frac{1}{2})$.) In reference to Proposition \ref{involution}, note that $J(\textstyle\frac{1}{3},\frac{1}{4})=\frac{5}{8}$ and $J(\textstyle\frac{1}{3},\frac{5}{8})=\frac{4}{15}$. In other words, the involutive property is not a necessary consequence of the James conditions. \end{ex} In view of the preceding example, we need to refine our terminology somewhat. We will refer to any Jamesian function (such as the James function itself) that satisfies the condition \[ J\bigl(a,J(a,b)\bigr)=b \] for $0<a<1$ as an \textit{involutive Jamesian function}. \bigskip It turns out to be fairly easy to construct Jamesian functions with discontinuities in $S$ (see \cite{supplement}). Proposition \ref{invcont}, which we will prove in the next section, guarantees that any such function is not involutive. Rather than considering such pathological examples, we will devote the next section to examining Jamesian functions that are involutive, continuous, and (in many cases) differentiable. \section{Involutive Jamesian functions}\label{S:hyp} We now turn our attention to Jamesian functions that satisfy the involutive property \[ J\bigl(a,J(a,b)\bigr)=b\text{,} \] or equivalently \[ J(a,b)=c\text{ if and only if }J(a,c)=b\text{,} \] whenever $0<a<1$. This property essentially subsumes three of the five James conditions (namely (a), (b), and (d)). \begin{prop} A function $J\colon\overline{S}\setminus\{(0,0)\cup(1,1)\}\rightarrow[0,1]$ is an involutive Jamesian function if and only if it satisfies the involutive property, James condition (c), and James condition (e). \end{prop} \begin{proof} By definition, an involutive Jamesian function must satisfy the involutive property, as well as all five James conditions. Suppose then that $J(a,b)$ satisfies the involutive property, together with conditions (c) and (e). To see that $J(a,b)$ satisfies condition (b), take $0<a<1$ and suppose that $J(a,0)=c$ for $0\leq c<1$. The involutive property would then dictate that $J(a,c)=0$, and thus condition (c) would imply that $J(c,a)=1$. Hence $J(c^{\prime},a)\leq J(c,a)$ for $c<c^{\prime}\leq 1$, which would violate condition (e). Consequently $J(a,0)=1$ for $0<a<1$. Since $J(a,b)$ is a non-decreasing function of $a$, we conclude that $J(1,0)=1$ as well. Next consider condition (d). Applying the involutive property three times and condition (c) twice, we see that \begin{align*} J(a,b)=c\hspace{.1in} \Longleftrightarrow\hspace{.1in}&J(a,c)=b\\ \Longleftrightarrow\hspace{.1in}&J(c,a)=1-b\\ \Longleftrightarrow\hspace{.1in}&J(c,1-b)=a\\ \Longleftrightarrow\hspace{.1in}&J(1-b,c)=1-a\\ \Longleftrightarrow\hspace{.1in}&J(1-b,1-a)=c\text{,} \end{align*} as long as $a$, $b$, and $c$ all belong to the interval $(0,1)$. The cases where $a$, $b$, or $c$ belongs to $\{0,1\}$ can be dealt with by appealing to condition (b). In particular, we know that $J(a,0)=1$ for $0<a\leq 1$, which implies that $J(1-a,0)=1$ for $0\leq a<1$. The involutive property dictates that $J(1-a,1)=0$ for $0<a<1$. Since $J(1,0)=1$, it follows from (c) that $J(1,1-a)=1=J(a,0)$ for $0<a\leq1$. Hence condition (d) holds whenever $b=0$. The remaining cases can be deduced from this observation. Finally, consider condition (a). Taking $b=a$ in condition (c), we see that $J(a,a)=\frac{1}{2}$. Hence the involutive property dictates that $J(a,\frac{1}{2})=a$ for $0<a<1$. For $a=1$, simply note that conditions (d) and (b) imply that $J(1,\frac{1}{2})=J(\frac{1}{2},0)=~1$. Similarly, condition (c) shows that $J(0,\frac{1}{2})=1-J(\frac{1}{2},0)=0$. \end{proof} \noindent In other words, to identify an involutive Jamesian function, we can restrict our attention to the following set of conditions: \begin{enumerate}[label=(\roman*)] \item $J\bigl(a,J(a,b)\bigr)=b$ for $0<a<1$. \item $J(b,a)=1-J(a,b)$. \item $J(a,b)$ is a non-decreasing function of $a$ for $0\leq b\leq 1$ and a strictly increasing function of $a$ for $0<b<1$. \end{enumerate} We will refer to this list as the \textit{involutive James conditions}. Condition (i) also guarantees that a Jamesian function possesses another important property. \begin{prop}\label{invcont} Every involutive Jamesian function is continuous on $\overline{S}\setminus\{(0,0)\cup(1,1)\}$. \end{prop} \begin{proof} Take a fixed value $0<c<1$ and consider the level curve $J(a,b)=c$, which can be rewritten $b=J(a,c)$ for $0<a<1$. Conditions (i) and (ii) imply that \[ J\bigl(1-J(a,c),c\bigr)=1-a\text{.} \] Thus $J(a,c)$, viewed as a function of $a$, is a bijection from the interval $(0,1)$ onto itself. Hence it follows from (iii) that the curve $J(a,c)$ is a continuous, strictly increasing function of $a$ that connects the points $(0,0)$ and $(1,1)$. Suppose, for the sake of contradiction, that $J(a,b)$ fails to be continuous at a point $(a_{0},b_{0})$ in $S$. In other words, there exists a positive number $\varepsilon_{0}$ such that, for any positive $\delta$, there is a point $(a,b)$ such that $\|(a,b)-(a_{0},b_{0})\|<\delta$ and $|J(a,b)-J(a_{0},b_{0})|\geq\varepsilon_{0}$. (If necessary, redefine $\varepsilon_{0}$ so it is less than $\min\{2J(a_{0},b_{0}),2-2J(a_{0},b_{0})\}$.) Let $c_{1}=J(a_{0},b_{0})-\varepsilon_{0}/2$ and $c_{2}=J(a_{0},b_{0})+\varepsilon_{0}/2$, and consider the level curves $J(a,c_{1})$ and $J(a,c_{2})$. Let $\delta_{0}$ denote the minimum of the distance between $(a_{0},b_{0})$ and $J(a,c_{1})$ and the distance between $(a_{0},b_{0})$ and $J(a,c_{2})$. By assumption, there is a point $(a_{3},b_{3})$ such that $\|(a_{3},b_{3})-(a_{0},b_{0})\|<\delta_{0}$ and $c_{3}=J(a_{3},b_{3})$ is either less than or equal to $J(a_{0},b_{0})-\varepsilon_{0}$ or greater than or equal to $J(a_{0},b_{0})+\varepsilon_{0}$. Since $J(a,c_{i})=\frac{1}{2}$ at $a=c_{i}$, the level curve $J(a,c_{3})$ intersects the line $b=\frac{1}{2}$ either to the left of the curve $J(a,c_{1})$ or to the right of the curve $J(a,c_{2})$. On the other hand, since $(a_{3},b_{3})$ lies within $\delta_{0}$ of $(a_{0},b_{0})$, the curve $J(a,c_{3})$ must intersect the line $b=b_{3}$ between $J(a,c_{1})$ and $J(a,c_{2})$. Hence two of the level curves must intersect at a point in $S$, which is impossible. (See Figure 4 for a graphical illustration of this argument.) Now consider a point $(a_{0},b_{0})$ on the boundary of $S$. The only difference in the proof is that, if $a=0$ or $b=1$, the level curve $J(a,c_{1})$ does not exist. In this case, it is not difficult to see that $J(a,c_{3})$ must intersect the curve $J(a,c_{2})$. Similarly, if $a=1$ or $b=0$, there is no level curve $J(a,c_{2})$, but one can show that $J(a,c_{3})$ must intersect $J(a,c_{1})$. \end{proof} \begin{figure}[h]\label{contfigure} \scalebox{.8}{\includegraphics{contfigure2}} \caption{An illustration of the proof of Proposition \ref{invcont}.} \end{figure} \bigskip Let $g\colon (0,1)\rightarrow\mathbb{R}$ be a continuous, strictly increasing function that satisfies the conditions \begin{itemize} \item $g(1-a)=-g(a)$. \item $\displaystyle\lim_{a\rightarrow 0^{+}}g(a)=-\infty$. \end{itemize} These conditions imply that $g(\frac{1}{2})=0$ and that \[ \lim_{a\rightarrow 1^{-}}g(a)=\infty\text{.} \] Observe that $g^{-1}\colon\mathbb{R}\rightarrow(0,1)$ is a continuous, strictly increasing function with $g^{-1}(-s)=1-g^{-1}(s)$. It makes sense to define $g(0)=-\infty$ and $g(1)=\infty$, so that $g^{-1}(-\infty)=0$ and $g^{-1}(\infty)=1$. We claim that any such function $g$ can be used to construct an involutive Jamesian function. \begin{thm}\label{jacthm} For any $g$ satisfying the conditions specified above, the function \begin{equation}\label{ginvg} J(a,b)=g^{-1}\bigl(g(a)-g(b)\bigr) \end{equation} is an involutive Jamesian function. \end{thm} \begin{proof} Consider each of the three involutive James conditions: (i) Note that \begin{align*} J\bigl(a,J(a,b)\bigr)&=g^{-1}\bigl(g(a)-g\bigl(g^{-1}\bigl(g(a)-g(b)\bigr)\bigr)\bigr)\\ &=g^{-1}\bigl(g(a)-g(a)+g(b)\bigr)\\ &=g^{-1}\bigl(g(b)\bigr)=b\text{,} \end{align*} as long as $0<a<1$. (The cases where $a=0$ and $a=1$ yield the indeterminate forms $-\infty+\infty$ and $\infty-\infty$.) (ii) Similarly, \[ J(b,a)=g^{-1}\bigl(g(b)-g(a)\bigr)=1-g^{-1}\bigl(g(a)-g(b)\bigr)=1-J(a,b)\text{.} \] (iii) Since both $g$ and $g^{-1}$ are strictly increasing, it follows that $J(a,b)$ is a strictly increasing function of $a$ when $0<b<1$. Moreover, $J(a,b)$ takes on the constant value $1$ when $b=0$ and the constant value $0$ when $b=1$. \end{proof} \noindent While it is unnecessary to verify James conditions (a) and (d), it is worth noting that (a) corresponds to the property $g(\frac{1}{2})=0$ and (d) to the property $g(1-a)=-g(a)$. In effect, we verified condition (b) in the process of considering (iii). It is easy to use Theorem \ref{jacthm} to generate concrete examples. \begin{ex}\label{Ex2} The function \[ g(a)=\frac{2a-1}{a(1-a)} \] satisfies all the necessary conditions for Theorem \ref{jacthm}, so (\ref{ginvg}) defines an involutive Jamesian function. Since \[ g^{-1}(s)=\frac{s-2+\sqrt{s^{2}+4}}{2s}\text{,} \] we obtain \[ J(a,b)=\frac{x+y-\sqrt{x^2+y^2}}{2y}=\frac{x}{x+y+\sqrt{x^2+y^2}}\text{,} \] where $x=2ab(1-a)(1-b)$ and $y=(b-a)(2ab-a-b+1)$. \end{ex} \begin{ex}\label{Ex3} The function $g(a)=-\cot(\pi a)$ yields the involutive Jamesian function \[ J(a,b)=\frac{1}{\pi}\cot^{-1}\bigl(\cot(\pi a)-\cot(\pi b)\bigr)\text{,} \] where we are using the version of the inverse cotangent that attains values between $0$ and $\pi$. \end{ex} \bigskip The construction described in Theorem \ref{jacthm} is closely related to what is known as a \textit{linear model} for paired comparisons. In such a model, \[ \pi(A,B)=F\bigl(v(A)-v(B)\bigr)\text{,} \] where $v$ denotes a measure of worth and $F$ is the cumulative distribution function of a random variable that is symmetrically distributed about $0$ (see \cite[Section 1.3]{david}). The Bradley--Terry model can be viewed as a linear model, where $F$ is the logistic function \[ F(s)=\frac{e^{s}}{e^{s}+1}=\int_{-\infty}^{s}\frac{e^{t}}{(1+e^{t})^{2}}dt \] and $v(A)=\log w(A)$. In particular, the James function can be constructed in the manner of Theorem \ref{jacthm}, with $F=g^{-1}$ being the logistic function and $g$ being the so-called logit function \[ g(a)=\log\!\left(\frac{a}{1-a}\right)\text{.} \] (This observation could charitably be construed as an \textit{a posteriori} justification for the term ``log5" originally used by James.) What is distinctive about the James function in this context is that the construction is symmetric, with $v(A)=\log w(A)$ and $v(B)=\log w(B)$ replaced by $g(a)=\log(a/(1-a))$ and $g(b)=\log(b/(1-b))$ respectively. This symmetry corresponds to the twofold application of the Bradley--Terry model that was discussed in Section \ref{S:intro}. Likewise, the fact that both $g$ and $g^{-1}$ appear in the general formulation of Theorem \ref{jacthm} can be interpreted as a consequence of the same model being used to define both worth and probability. \bigskip \begin{ex}\label{Ex4} Take \[ F(s)=g^{-1}(s)=\frac{1}{\sqrt{2\pi}}\int_{-\infty}^{s}e^{-\frac{t^{2}}{2}}dt\text{,} \] so that $g$ is the so-called probit function. The involutive Jamesian function $J(a,b)=g^{-1}\bigl(g(a)-g(b)\bigr)$ can be considered the analogue of the James function relative to the Thurstone--Mosteller model (see \cite{david}). \end{ex} \bigskip Theorem \ref{jacthm} allows us to identify a large class of functions that can be viewed as generalizations of the James function. Since \[ \log\!\left(\frac{a}{1-a}\right)=\int_{\frac{1}{2}}^{a}\left(\frac{1}{t}+\frac{1}{1-t}\right)dt=\int_{\frac{1}{2}}^{a}\frac{1}{t(1-t)}dt\text{,} \] we define \[ g_{n}(a)=\int_{\frac{1}{2}}^{a}\frac{1}{(t(1-t))^{n}}dt \] for any real number $n\geq 1$. It is not difficult to verify that $g_{n}$ satisfies all of the prescribed requirements for Theorem \ref{jacthm}. (The stipulation that $g_{n}(0)=-\infty$ precludes the case where $0<n<1$.) Define \begin{equation}\label{hypjameq} H_{n}(a,b)=g_{n}^{-1}\bigl(g_{n}(a)-g_{n}(b)\bigr)\text{.} \end{equation} For $n>1$, we shall refer to $H_{n}(a,b)$ as a \textit{hyper-James function}. Each of these functions is an involutive Jamesian function. In some situations, it is possible to obtain a more concrete representation for $H_{n}(a,b)$. For example, one can show that \[ g_{\frac{3}{2}}(a)=\frac{2(2a-1)}{\sqrt{a(1-a)}} \] and \[ g_{\frac{3}{2}}^{-1}(s)=\frac{s+\sqrt{s^{2}+16}}{2\sqrt{s^{2}+16}}\text{,} \] and hence \[ H_{\frac32}(a,b)=\frac12+\frac{v^{\prime}\sqrt u-u^{\prime}\sqrt v}{2\sqrt{u+v-4uv-2u^{\prime}v^{\prime}\sqrt{uv}}} \] for $u=a(1-a)$, $v=b(1-b)$, $u^{\prime}=1-2a$, and $v^{\prime}=1-2b$ (see \cite{supplement} for more details). In general, though, it seems unlikely that there is an explicit formula for $H_{n}(a,b)$ that is more useful than (\ref{hypjameq}). \bigskip We will now examine the issue of differentiability. For any function defined according to Theorem \ref{jacthm}, a routine calculation shows that \begin{equation}\label{djda} \frac{\partial J}{\partial a}=\frac{g^{\prime}(a)}{g^{\prime}\bigl(J(a,b)\bigl)} \end{equation} and \begin{equation}\label{djdb} \frac{\partial J}{\partial b}=\frac{-g^{\prime}(b)}{g^{\prime}\bigl(J(a,b)\bigl)} \end{equation} at all points $(a,b)$ for which the above quotients are defined. Based on this observation, we are able to obtain the following result. \begin{prop}\label{diffprop} If $g$ is continuously differentiable on $(0,1)$, with $g^{\prime}$ never equal to $0$, the corresponding Jamesian function $J(a,b)$ is differentiable on $S$. Conversely, if $J(a,b)$ is differentiable on $S$, the function $g$ must be differentiable on $(0,1)$ with $g^{\prime}$ never $0$. \end{prop} \begin{proof} Suppose that $g^{\prime}$ is continuous and nonzero on $(0,1)$. It follows from (\ref{djda}) and (\ref{djdb}) that both $\frac{\partial J}{\partial a}$ and $\frac{\partial J}{\partial b}$ are defined and continuous at all points in the open set $S$, which guarantees that $J(a,b)$ is differentiable on $S$. Now suppose that $J(a,b)$ is differentiable at every point in $S$. Let $a_{0}$ be an arbitrary element of $(0,1)$. Since $g$ is strictly increasing, it could only fail to be differentiable on a set of measure $0$ (see \cite[p.\ 112]{royden}). In particular, there is at least one $c$ in $(0,1)$ for which $g^{\prime}(c)$ is defined. Since $J(a_{0},b)$, viewed as a function of $b$, attains every value in the interval $(0,1)$, there exists a $b_{0}$ in $(0,1)$ such that $J(a_{0},b_{0})=c$. Note that \[ g(a)=g\bigl(J(a,b_{0})\bigr)+g(b_{0}) \] for all $a$ in $(0,1)$, so the chain rule dictates that \[ g^{\prime}(a_{0})=g^{\prime}(c)\cdot\frac{\partial J}{\partial a}(a_{0},b_{0})\text{.} \] Therefore $g$ is differentiable on the entire interval $(0,1)$. Suppose, for the sake of contradiction, that there were some $d$ in $(0,1)$ for which $g^{\prime}(d)=0$. As before, there would exist a $b_{1}$ in $(0,1)$ such that $J(a_{0},b_{1})=d$, which would imply that \[ g^{\prime}(a_{0})=g^{\prime}(d)\cdot\frac{\partial J}{\partial a}(a_{0},b_{1})=0\text{.} \] Consequently $g^{\prime}$ would be identically $0$ on $(0,1)$, which is impossible. \end{proof} In other words, all the specific examples of Jamesian functions we have introduced in this section, including the hyper-James functions, are differentiable on $S$. We can now state a more general version of Proposition \ref{gradient}, which follows directly from (\ref{djda}) and (\ref{djdb}). \begin{prop}\label{jacgrad} For any differentiable Jamesian function $J(a,b)$ defined according to Theorem \ref{jacthm}, the gradient at a point $(a,b)$ in $S$ is a positive multiple of the vector $\langle g^{\prime}(a),-g^{\prime}(b)\rangle$. \end{prop} If $g$ is differentiable on $(0,1)$, the condition that $g(1-a)=-g(a)$ implies that $g^{\prime}(1-a)=g^{\prime}(a)$. Hence the gradient of $J(a,b)$ is a positive multiple of $\langle 1,-1\rangle$ whenever $b=a$ or $b=1-a$. This observation generalizes the fact that, whenever two teams have identical or complementary winning percentages, the optimal strategy for increasing $P(a,b)$ is to increase $a$ and decrease $b$ by equal amounts. \bigskip For any Jamesian function given by (\ref{ginvg}), the level curve $J(a,b)=c$ for $0<c<1$ can be rewritten \[ b=J(a,c)=g^{-1}\bigl(g(a)-g(c)\bigr)\text{,} \] or $g(a)=g(b)+g(c)$. Hence we have the following generalization of Proposition \ref{diffeq}. \begin{prop} Let $J(a,b)$ be a differentiable Jamesian function defined according to Theorem \ref{jacthm}. For any $0<c<1$, the corresponding level curve for $J(a,b)$ is the unique solution to the differential equation \[ \frac{db}{da}=\frac{g^{\prime}(a)}{g^{\prime}(b)} \] that passes through the point $(c,\frac{1}{2})$. \end{prop} Thus the level curves for the Jamesian functions defined in Examples \ref{Ex2} and \ref{Ex3} are given by the differential equations \[ \frac{db}{da}=\frac{(2a^{2}-2a+1)(b(1-b))^2}{(2b^{2}-2b+1)(a(1-a))^{2}} \] and \[ \frac{db}{da}=\left(\frac{\sin(\pi b)}{\sin(\pi a)}\right)^{2} \] respectively. Likewise, the level curves for any hyper-James function $H_{n}(a,b)$ are given by the differential equation \[ \frac{db}{da}=\left(\frac{b(1-b)}{a(1-a)}\right)^{n}\text{.} \] Figure 5 shows the level curves for the hyper-James function $H_{2}(a,b)$. \begin{figure}[h] \scalebox{.8}{\includegraphics{hyperjames2c}} \caption{The level curves for the hyper-James function $H_{2}(a,b)$.} \end{figure} \section{Final thoughts} While it is possible to construct additional examples of non-involutive Jamesian functions, it would be reasonable to focus any further investigation on the involutive case. Perhaps the most obvious question is whether one can assign any probabilistic significance to the involutive Jamesian functions we have just introduced, particularly the hyper-James functions. For instance, could one somehow alter the assumptions underlying Theorem \ref{millertime} to obtain one of these functions in place of $P(a,b)$? Within this context, several lines of inquiry seem especially worthwhile: \begin{enumerate} \item Does every involutive Jamesian function have the form described in Theorem \ref{jacthm}, for some particular function $g$? \item While it is clear how the involutive property arises mathematically, is there any \textit{a priori} reason that it should hold, based on the probabilistic interpretation of the James function? \item Are there any situations for which non-differentiability would make sense in the setting of an athletic competition? \end{enumerate} We would be delighted if this paper motivated other mathematicians (or sports enthusiasts) to consider any of these questions. \section*{Acknowledgments}\label{S:ack} We would never have written this paper if Caleb Garza, a recent alumnus of Connecticut College, had not decided to give a senior seminar talk on a topic from sabermetrics. We are sincerely grateful to him for prompting (or reviving) our interest in this material and for bringing the work of the third-named author to the attention of the first two. We would also like to thank the referees and editors of this paper for providing substantial assistance and guidance.
{ "attr-fineweb-edu": 2.660156, "attr-cc_en_topic": 0, "domain": "arxiv" }
BkiUd4A4eIOjR9j0NUeT
\section{Introduction} \label{sec:intro} This paper proposes a novel solution to recommend travel routes in cities. A large amount of location traces are becoming available from ubiquitous location tracking devices. For example, FourSquare has 50 million monthly users who have made 8 billion check-ins~\cite{4sq}, and Flickr hosts over 2 billion geo-tagged public photos~\cite{flickr}. This growing trend in rich geolocation data provide new opportunities for better travel planning traditionally done with written travel guides. Good solutions to these problems will in turn lead to better urban experiences for residents and visitors alike, and foster sharing of even more location-based behavioural data. There are several settings of recommendation problems for locations and routes, as illustrated in Figure~\ref{fig:threesettings}. We summarise recent work most related to formulating and solving learning problems on assembling routes from POIs, and refer the reader to a number of recent surveys~\cite{bao2015recommendations,zheng2015trajectory,zheng2014urban} for general overviews of the area. The first setting can be called POI recommendation (Figure~\ref{fig:threesettings}(a)). Each location (A to E) is scored with geographic and behavioural information such as category, reviews, popularity, spatial information such as distance, and temporal information such as travel time uncertainty, time of the day or day of the week. A popular approach is to recommend POIs with a collaborative filtering model on user-location affinity~\cite{shi2011personalized}, with additional ways to incorporate spatial~\cite{lian2014geomf,liu2014exploiting}, temporal~\cite{yuan2013timeaware,hsieh2014mining,gao2013temporal}, or spatial-temporal~\cite{yuan2014graph} information. Figure~\ref{fig:threesettings}(b) illustrates the second setting: next location recommendation. Here the input is a partial trajectory (e.g. started at point A and currently at point B), the task of the algorithm is to score the next candidate location (e.g, C, D and E) based on the perceived POI score and transition compatibility with input $A\rightarrow B$. It is a variant of POI recommendation except both the user and locations travelled to date are given. The solutions to this problem include incorporating Markov chains into collaborative filtering~\cite{fpmc10,ijcai13,zhang2015location}, quantifying tourist traffic flow between points-of-interest~\cite{zheng2012patterns}, formulating a binary decision or ranking problem~\cite{baraglia2013learnext}, and predict the next location with sequence models such as recurrent neural networks~\cite{aaai16}. \begin{figure}[t] \centering \includegraphics[width=\columnwidth]{fig/fig1-flavours.pdf} \caption{Three settings of trajectory recommendation problems. Node size: POI score; edge width: transition score between pairs of POIs; grey: observed; star: starting location; flag: ending location. See Section~\ref{sec:intro} for details. } \label{fig:threesettings}\vspace{-0.0in} \end{figure} This paper considers the final setting: trajectory recommendation (Figure~\ref{fig:threesettings}(c)). Here the input are some factors about the desired route, e.g. starting point A and end point C, along with auxiliary information such as the desired length of trip. The algorithm needs to take into account location desirability (as indicated by node size) and transition compatibility (as indicated by edge width), and compare route hypotheses such as A-D-B-C and A-E-D-C. Existing work in this area either uses heuristic combination of locations and routes~\cite{lu2010photo2trip,ijcai15,lu2012personalized}, or formulates an optimisation problem that is not informed or evaluated by behaviour history~\cite{gioniswsdm14,chen2015tripplanner}. We note, however, that two desired qualities are still missing from the current solutions to trajectory recommendation. The first is a principled method to jointly learn POI ranking (a prediction problem) and optimise for route creation (a planning problem). The second is a unified way to incorporate various features such as location, time, distance, user profile and social interactions, as they tend to get specialised and separate treatments. This work aims to address both challenges. We propose a novel way to learn point preferences and routes jointly. In Section~\ref{sec:feature}, we describe the features that are used to ranking points, and POI to POI transitions that are factorised along different types of location properties. Section~\ref{sec:recommendation} details a number of our proposed approaches to recommend trajectories. We evaluate the proposed algorithms on trajectories from five different cities in Section~\ref{sec:experiment}. The main contributions of this work are: \begin{itemize} \setlength{\itemsep}{-2pt} \item We propose a novel algorithm to jointly optimise point preferences and routes. We find that learning-based approaches generally outperform heuristic route recommendation~\cite{ijcai15}. Incorporating transitions to POI ranking results in a better sequence of POIs, and avoiding sub-tours further improves performance of classical Markov chain methods. \item Our approach is feature-driven and learns from past behaviour without having to design specialised treatment for spatial, temporal or social information. It incorporates information about location, POI categories and behaviour history, and can use additional time, user, or social information if available. \item We show good performance compared to recent results~\cite{ijcai15}, and also quantify the contributions from different components, such as ranking points, scoring transitions, and routing. \item We propose a new metric to evaluate trajectories, pairs-F$_1$, to capture the order in which POIs are visited. Pairs-F$_1$ lies between 0 and 1, and achieves 1 if and only if the recommended trajectory is exactly the same as the ground truth. \end{itemize} Supplemental material, benchmark data and results are available online at \surl{https://bitbucket.org/d-chen/tour-cikm16}. \section{POI, Query and Transition} \label{sec:feature} The goal of tour recommendation is to suggest a sequence of POIs, $(p_1, \ldots, p_L)$, of length $L$ such that the user's utility is maximised. The user provides the desired start ($p_1=p_s$) and end point ($p_L=p_e$), as well as the number $L$ of POIs desired, from which we propose a trajectory through the city. The training data consists of a set of tours of varying length in a particular city. We consider only POIs that have been visited by at least one user in the past, and construct a graph with POIs as nodes and directed edges representing the observed transitions between pairs of POIs in tours. We extract the category, popularity (number of distinct visitors)~\cite{ht10}, total number of visits and average visit duration for each POI. POIs are grouped into $5$ clusters using K-means according to their geographical locations to reflect their neighbourhood. Furthermore, since we are constrained by the fact that trajectories have to be of length $L$ and start and end at certain points, we hope to improve the recommendation by using this information. In other words, we use the \textit{query} $q = (p_s, p_e, L)$ to construct new features by contrasting candidate POIs with $p_s$ and $p_e$. For each of the POI features (category, neighbourhood, popularity, total visits and average duration), we construct two new features by taking the difference of the feature in POI $p$ with $p_s$ and $p_e$ respectively. For the category (and neighbourhood), we set the feature to $1$ when their categories (and cluster identities) are the same and $-1$ otherwise. For popularity, total visits and average duration, we take the real valued difference. Lastly, we compute the distance from POI $p$ to $p_s$ (and $p_e$) using the Haversine formula~\cite{haversine}, and also include the required length $L$. \begin{figure}[t] \includegraphics[width=\columnwidth]{fig/poi_transmat.png} \caption{Transition matrices for two POI features from Melbourne: POI category and neighbourhood. } \label{fig:transmat}\vspace{-0.0in} \end{figure} In addition to information about each individual POI, a tour recommendation system would benefit from capturing the likelihood of going from one POI to another different POI. One option would be to directly model the probability of going from any POI to any other POI, but this has several weaknesses: Such a model would be unable to handle a new POI (one that has not yet been visited), or pairs of existing POIs that do not have an observed transition. Furthermore, even if we restrict ourselves to known POIs and transitions, there may be locations which are rarely visited, leading to significant challenges in estimating the probabilities from empirical data. We model POI transitions using a Markov chain with discrete states by factorising the transition probability ($p_i$ to $p_j$) as a product of transition probabilities between pairs of individual POI features, assuming independence between these feature-wise transitions. The popularity, total visits and average duration are discretised by binning them uniformly into $5$ intervals on the log scale. These feature-to-feature transitions are estimated from data using maximum likelihood principle. The POI-POI transition probabilities can be efficiently computed by taking the Kronecker product of transition matrices for the individual features, and then updating it based on three additional constraints as well as appropriate normalisation. First we disallow self-loops by setting the probability of ($p_i$ to $p_i$) to zero. Secondly, when multiple POIs have identical (discretised) features, we distribute the probability uniformly among POIs in the group. Third, we remove feature combinations that has no POI in dataset. Figure~\ref{fig:transmat} visualises the transition matrices for two POI features, category and neighbourhood, in Melbourne. \section{Tour Recommendation} \label{sec:recommendation} In this section, we first describe the recommendation of points and routes, then we discuss how to combine them, and finally we propose a method to avoid sub-tours. \subsection{POI Ranking and Route Planning} \label{sec:rankplan} A naive approach would be to recommend trajectories based on the popularity of POIs only, that is we always suggest the top-$k$ most popular POIs for all visitors given the start and end location. We call this baseline approach \textsc{PoiPopularity}, and its only adaptation to a particular query is to adjust $k$ to match the desired length. On the other hand, we can leverage the whole set of POI features described in Section~\ref{sec:feature} to learn a ranking of POIs using rankSVM, with linear kernel and L$2$ loss~\cite{lranksvm}, \begin{equation*} \min_{\mathbf{w}} \frac{1}{2} \mathbf{w}^T \mathbf{w} + \underset{p_i, p_j \in \mathcal{P},~ q \in \mathcal{Q}}{C ~\sum} \max \left( 0,~ 1 - \mathbf{w}^T (\phi_{i,q} - \phi_{j,q}) \right)^2, \end{equation*} where $\mathbf{w}$ is the parameter vector, $C > 0$ is a regularisation constant, $\mathcal{P}$ is the set of POIs to rank, $\mathcal{Q}$ denotes the queries corresponding to trajectories in training set, and $\phi_{i,q}$ is the feature vector for POI $p_i$ with respect to query $q$. The ranking score of $p_i$ given query $q$ is computed as $R_{i,q} =\mathbf{w}^T \phi_{i,q}$. For training the rankSVM, the labels are generated using the number of occurrences of POI $p$ in trajectories grouped by query $(p_s, p_e, L)$, without counting the occurrence of $p$ when it is the origin or destination of a trajectory. Our algorithm, \textsc{PoiRank}, recommends a trajectory for a particular query by first ranking POIs then takes the top ranked $L-2$ POIs and connects them according to the ranks. In addition to recommend trajectory by ranking POIs, we can leverage the POI-POI transition probabilities and recommend a trajectory (with respect to a query) by maximising the transition likelihood. The maximum likelihood of the Markov chain of transitions is found using a variant of the Viterbi algorithm (with uniform emission probabilities). We call this approach that only uses the transition probabilities between POIs as \textsc{Markov}. \subsection{Combine Ranking and Transition} \label{sec:rank+markov} We would like to leverage both point ranking and transitions, i.e., recommending a trajectory that maximises the points ranking of its POIs as well as its transition likelihood at the same time. To begin with, we transform the ranking scores $R_{j,q}$ of POI $p_j$ with respect to query $q$ to a probability distribution using the softmax function, \begin{equation} \label{eq:rankprob} P_R(p_j | q) = \frac{\exp(R_{j,q})}{\sum_i \exp(R_{i,q})}, \end{equation} One option to find a trajectory that simultaneously maximises the ranking probabilities of its POIs and its transition likelihood is to optimise the following objective: \vspace{-0.3em} \begin{equation*} \argmax_{\mathcal{T} \in \mathcal{P}^L} ~\alpha \sum_{k=2}^{L} \log P_R(p_{k} | q) + (1-\alpha) \sum_{k=1}^{L-1} \log P(p_{k+1} | p_{k}), \end{equation*} such that $p_{1} = p_s, ~ p_{L} = p_e$ and $p_{k} \in \mathcal{P}, ~1 \le k \le L$. The first term captures the POI ranking, and the second one incorporates the transition probabilities. $\mathcal{T} = (p_{1}, \dots, p_{L})$ is any possible trajectory, $\alpha \in [0, 1]$ is a parameter to trade-off the importance between point ranking and transition, and can be tuned using cross validation in practice. Let $S(p; p', q)$ be a convex combination of point ranking and transition, \vspace{-0.3em} \begin{equation}\label{eq:combined-score} S(p; p', q) = \alpha \log P_R(p|q) + (1-\alpha) \log P(p|p'), \end{equation} then the best path (or walk) can be found using the Viterbi algorithm. We call this approach that uses both the point ranking and transitions \textsc{Rank+Markov}, with pseudo code shown in Algorithm~\ref{alg:rank+markov}, where $A$ is the score matrix, and entry $A[l, p]$ stores the maximum value associated with the (partial) trajectory that starts at $p_s$ and ends at $p$ with $l$ POI visits; $B$ is the backtracking-point matrix, and entry $B[l, p]$ stores the predecessor of $p$ in that (partial) trajectory. The maximum objective value is $A[L, p_e]$, and the corresponding trajectory can be found by tracing back from $B[L, p_e]$. \setlength{\textfloatsep}{0.5em} \begin{algorithm}[t] \caption{\textsc{Rank+Markov}: recommend trajectory with POI ranking and transition} \label{alg:rank+markov} \begin{algorithmic}[1] \STATE \textbf{Input}: $\mathcal{P}, p_s, p_e, L$ \STATE \textbf{Output}: Trajectory $\mathcal{T} = (p_s, \cdots, p_e)$ with $L$ POIs \STATE Initialise score matrix $A$ and backtracking pointers $B$ \FOR{$p \in \mathcal{P}$} \STATE $A[2, p] = S(p; p_s, q)$ \STATE $B[2, p] = p_s$ \ENDFOR \FOR{$l=2$ to $L-1$} \FOR{$p \in \mathcal{P}$} \STATE $A[l+1, p] = \max_{p' \in \mathcal{P}} \{ A[l, p'] + S(p; p', q) \}$ \label{eq:max} \STATE $B[l+1, p] = \argmax_{p' \in \mathcal{P}} \{ A[l, p'] + S(p; p', q) \}$ \label{eq:argmax} \ENDFOR \ENDFOR \STATE $\mathcal{T}= \{p_e\}$, $l = L$, $p = p_e$ \REPEAT \STATE Prepend $B[l, p]$ to $\mathcal{T}$ \STATE $l = l - 1$, $p = B[l, p]$ \UNTIL{$l < 2$} \RETURN $\mathcal{T}$ \end{algorithmic} \end{algorithm} \subsection{Avoiding sub-tours} \label{sec:nosubtour} Trajectories recommended by \textsc{Markov} (Section~\ref{sec:rankplan}) and \textsc{Rank+Markov} (Section~\ref{sec:rank+markov}) are found using the maximum likelihood approach, and may contain multiple visits to the same POI. This is because the best solution from Viterbi decoding may have circular sub-tours (where a POI already visited earlier in the tour is visited again). We propose a method for eliminating sub-tours by finding the best path using an integer linear program (ILP), with sub-tour elimination constraints adapted from the Travelling Salesman Problem~\cite{opt98}. In particular, given a set of POIs $\mathcal{P}$, the POI-POI transition matrix and a query $q = (p_s, p_e, L)$, we recommend a trajectory by solving the following ILP: \vspace{-0.3em} \begin{alignat}{5} & \max_{x,u} ~&& \sum_{i=1}^{N-1} \sum_{j=2}^N ~x_{ij} ~\log P(p_j | p_i) \nonumber \\ & ~s.t. ~&& x_{ij} \in \{0, 1\}, ~x_{ii} = 0, ~u_i \in \mathbf{Z}, ~\forall i, j = 1, \cdots, N \label{eq:cons1} \\ & && \sum_{j=2}^N x_{1j} = \sum_{i=1}^{N-1} x_{iN} = 1, ~\sum_{i=2}^N x_{i1} = \sum_{j=1}^{N-1} x_{Nj} = 0 \label{eq:cons2} \\ & && \sum_{i=1}^{N-1} x_{ik} = \sum_{j=2}^N x_{kj} \le 1, ~\forall k=2, \cdots, N-1 \label{eq:cons3} \\ & && \sum_{i=1}^{N-1} \sum_{j=2}^N x_{ij} = L-1, \label{eq:cons4} \\ & && u_i - u_j + 1 \le (N-1) (1-x_{ij}), \forall i, j = 2, \cdots, N \label{eq:cons5} \end{alignat} where $N=|\mathcal{P}|$ is the number of available POIs and $x_{ij}$ is a binary decision variable that determines whether the transition from $p_i$ to $p_j$ is in the resulting trajectory. For brevity, we arrange the POIs such that $p_1 = p_s$ and $p_N = p_e$. Firstly, the desired trajectory should start from $p_s$ and end at $p_e$ (Constraint~\ref{eq:cons2}). In addition, any POI could be visited at most once (Constraint~\ref{eq:cons3}). Moreover, only $L-1$ transitions between POIs are permitted (Constraint~\ref{eq:cons4}), i.e., the number of POI visits should be exactly $L$ (including $p_s$ and $p_e$). The last constraint, where $u_i$ is an auxiliary variable, enforces that only a single sequence of POIs without sub-tours is permitted in the trajectory. We solve this ILP using the Gurobi optimisation package~\cite{gurobi}, and the resulting trajectory is constructed by tracing the non-zeros in $x$. We call our method that uses the POI-POI transition matrix to recommend paths without circular sub-tours \textsc{MarkovPath}. Sub-tours in trajectories recommended by \textsc{Rank+Markov} can be eliminated in a similar manner, we solve an ILP by optimising the following objective with the same constraints described above, \vspace{-1em} \begin{equation} \label{eq:obj2} \max_{x,u} \sum_{i=1}^{N-1} \sum_{j=2}^N ~x_{ij} ~S(p_j; p_i, q), \end{equation} where $S(p_j;p_i,q)$ incorporates both point ranking and transition, as defined in Equation~(\ref{eq:combined-score}). This algorithm is called \textsc{Rank+MarkovPath} in the experiments. \section{Experiment on Flickr Photos} \label{sec:experiment} \begin{table}[t] \caption{Statistics of trajectory dataset} \label{tab:data} \centering \begin{tabular}{l*{5}{r}} \hline \textbf{Dataset} & \textbf{\#Photos} & \textbf{\#Visits} & \textbf{\#Traj.} & \textbf{\#Users} \\ \hline Edinburgh & 82,060 & 33,944 & 5,028 & 1,454 \\ Glasgow & 29,019 & 11,434 & 2,227 & 601 \\ Melbourne & 94,142 & 23,995 & 5,106 & 1,000 \\ Osaka & 392,420 & 7,747 & 1,115 & 450 \\ Toronto & 157,505 & 39,419 & 6,057 & 1,395 \\ \hline \end{tabular}\vspace{-0.0in} \end{table} We evaluate the algorithms above on datasets with trajectories extracted from Flickr photos~\cite{thomee2016yfcc100m} in five cities, namely, Edinburgh, Glasgow, Melbourne, Osaka and Toronto, with statistics shown in Table~\ref{tab:data}. The Melbourne dataset is built using approaches proposed in earlier work~\cite{ht10, ijcai15}, and the other four datasets are provided by Lim et al.~\cite{ijcai15}. We use leave-one-out cross validation to evaluate different trajectory recommendation algorithms, i.e., when testing on a trajectory, all other trajectories are used for training. We compare with a number of baseline approaches such as \textsc{Random}, which naively chooses POIs uniformly at random (without replacement) from the set $\mathcal{P} \setminus \{p_s, p_e \}$ to form a trajectory, and \textsc{PoiPopularity} (Section~\ref{sec:rankplan}), which recommends trajectories based on the popularity of POIs only. Among the related approaches from recent literature, \textsc{PersTour}~\cite{ijcai15} explores POI features as well as the sub-tour elimination constraints (Section~\ref{sec:nosubtour}), with an additional time budget, and its variant \textsc{PersTour-L}, which replaces the time budget with a constraint of trajectory length. Variants of point-ranking and route-planning approaches including \textsc{PoiRank} and \textsc{Markov} (Section~\ref{sec:rankplan}), which utilises either POI features or POI-POI transitions, and \textsc{Rank+Markov} (Section~\ref{sec:rank+markov}) that captures both types of information. Variants that employ additional sub-tour elimination constraints (\textsc{MarkovPath} and \textsc{Rank+MarkovPath}, Section~\ref{sec:nosubtour}) are also included. A summary of the various trajectory recommendation approaches can be found in Table~\ref{tab:algsummary}. \input{table_9methods} \input{table_perfsummary} \subsection{Performance metrics} \label{sec:metric} \begin{figure}[t] \centering \includegraphics[width=\columnwidth]{fig/pairF1.pdf} \caption{Examples for F$_1$ vs pairs-F$_1$ as evaluation metric. Solid grey: ground truth; dashed blue: recommended trajectories. See Section~\ref{sec:metric} for details.} \label{fig:pairf1}\vspace{-0.0in} \end{figure} A commonly used metric for evaluating POI and trajectory recommendation is the F$_1$ score on points, which is the harmonic mean of precision and recall of POIs in trajectory~\cite{ijcai15}. While being good at measuring whether POIs are correctly recommended, F$_1$ score on points ignores the visiting order between POIs. We propose a new metric $\text{pairs-F}_1$ that considers both POI identity and visiting order, by measuring the F$_1$ score of every pair of POIs, whether they are adjacent or not in trajectory, \begin{displaymath} \text{pairs-F}_1 = \frac{2 P_{\textsc{pair}} R_{\textsc{pair}}} {P_{\textsc{pair}} + R_{\textsc{pair}}}, \end{displaymath} where $P_{\textsc{pair}}$ and $R_{\textsc{pair}}$ are the precision and recall of ordered POI pairs respectively. Pairs-F$_1$ takes values between 0 and 1 (higher is better). A perfect pairs-F$_1$ is achieved {\em if and only if} both the POIs and their visiting order in the recommended trajectory are exactly the same as those in the ground truth. On the other hand, pairs-F$_1 = 0$ means none of the recommended POI pairs was actually visited (in the designated order) in the real trajectory. An illustration is shown in Figure~\ref{fig:pairf1}, the solid grey lines represent the ground truth transitions that actually visited by travellers, and the dashed blue lines are the recommended trajectory by one of the approaches described in Section~\ref{sec:recommendation}. Both examples have a perfect F$_1$ score, but not a perfect pairs-F$_1$ score due to the difference in POI sequencing. \subsection{Results} \label{sec:result} The performance of various trajectory recommendation approaches are summarised in Table~\ref{tab:f1} and Table~\ref{tab:pairf1}, in terms of F$_1$ and pairs-F$_1$ scores respectively. It is apparent that algorithms captured information about the problem (Table~\ref{tab:algsummary}) outperform the \textsc{Random} baseline in terms of both metrics on all five datasets. Algorithms based on POI ranking yield strong performance, in terms of both metrics, by exploring POI and query specific features. \textsc{PoiRank} improves notably upon \textsc{PoiPopularity} and \textsc{PersTour} by leveraging more features. In contrast, \textsc{Markov} which leverages only POI transitions does not perform as well. Algorithms with ranking information (\textsc{Rank+Markov} and \textsc{Rank+MarkovPath}) always outperform their respective variants with transition information alone (\textsc{Markov} and \textsc{MarkovPath}). We can see from Table~\ref{tab:f1} that, in terms of F$_1$, \textsc{MarkovPath} and \textsc{Rank+MarkovPath} outperform their corresponding variants \textsc{Markov} and \textsc{Rank+Markov} without the path constraints, which demonstrates that eliminating sub-tours improves point recommendation. This is not unexpected, as sub-tours worsen the proportion of correctly recommended POIs since a length constraint is used. In contrast, most Markov chain entries have better performance in terms of pairs-F$_1$ (Table~\ref{tab:pairf1}), which indicates Markov chain approaches generally respect the transition patterns between POIs. \textsc{PersTour}~\cite{ijcai15} always performs better than its variant \textsc{PersTour-L}, in terms of both metrics, especially on Glasgow and Toronto datasets. This indicates the time budget constraint is more helpful than length constraint for recommending trajectories. Surprisingly, we observed that \textsc{PersTour} is outperformed by \textsc{Random} baseline on Melbourne dataset. It turns out that on this dataset, many of the ILP problems which \textsc{PersTour} needs to solve to get the recommendations are difficult ILP instances. In the leave-one-out evaluation, although we utilised a large scale computing cluster with modern hardware, $12\%$ of evaluations failed as the ILP solver was unable to find a feasible solution after $2$ hours. Furthermore, a lot of recommendations were suboptimal solutions of the corresponding ILPs due to the time limit. These factors lead to the inconsistent performance of \textsc{PersTour} on Melbourne dataset. \subsection{An Illustrative Example} \label{sec:example} Figure~\ref{fig:exampleresult} illustrates an example from Edinburgh. The ground truth is a trajectory of length $4$ that starts at a POI of category \textit{Structures}, visits two intermediate POIs of category \textit{Structures} and \textit{Cultural} and terminates at a POI of category \textit{Structures}. The trajectory recommended by \textsc{PersTour} is a tour with $11$ POIs, as shown in Figure~\ref{fig:exampleresult}(a), with none of the desired intermediate POIs visited. \textsc{PoiRank} (Figure~\ref{fig:exampleresult}(b)) recommended a tour with correct POIs, but with completely different routes. On the other hand, \textsc{Markov} (Figure~\ref{fig:exampleresult}(c)) missed one POI but one of the intermediate routes is consistent with the ground truth. The best recommendation, as shown in Figure~\ref{fig:exampleresult}(d), with exactly the same points and routes as the ground truth, which in this case is achieved by \textsc{Rank+MarkovPath}. \section{Discussion and Conclusion} \label{sec:conclusion} In this paper, we propose an approach to recommend trajectories by jointly optimising point preferences and routes. This is in contrast to related work which looks at only POI or next location recommendation. Point preferences are learned by ranking according to POI and query features, and factorised transition probabilities between POIs are learned from previous trajectories extracted from social media. We investigate the maximum likelihood sequence approach (which may recommend sub-tours) and propose an improved sequence recommendation method. Our feature driven approach naturally allows learning the combination of POI ranks and routes. We argue that one should measure performance with respect to the visiting order of POIs, and suggest a new pairs-F$_1$ metric. We empirically evaluate our tour recommendation approaches on five datasets extracted from Flickr photos, and demonstrate that our method improves on prior work, in terms of both the traditional F$_1$ metric and our proposed performance measure. Our promising results from learning points and routes for trajectory recommendation suggests that research in this domain should consider both information sources simultaneously. \begin{figure*}[t] \centering \includegraphics[width=\textwidth]{fig/example-tour.pdf} \caption{Different recommendations from algorithm variants. See the main text in Section~\ref{sec:example} for description.} \label{fig:exampleresult} \end{figure*} \section{POI Features for Ranking} \begin{table*}[ht] \caption{Features of POI $p$ used in rankSVM given query $(p_s, p_e, L)$} \label{tab:featurerank} \centering \setlength{\tabcolsep}{10pt} \begin{tabular}{l|l} \hline \textbf{Feature} & \textbf{Description} \\ \hline \texttt{category} & one-hot encoding of the category of $p$ \\ \texttt{neighbourhood} & one-hot encoding of the POI cluster that $p$ resides in \\ \texttt{popularity} & logarithm of POI popularity of $p$ \\ \texttt{nVisit} & logarithm of the total number of visit by all users at $p$ \\ \texttt{avgDuration} & logarithm of the average duration at $p$ \\ \hline \texttt{trajLen} & trajectory length $L$, i.e., the number of POIs required \\ \texttt{sameCatStart} & $1$ if the category of $p$ is the same as that of $p_s$, $-1$ otherwise \\ \texttt{sameCatEnd} & $1$ if the category of $p$ is the same as that of $p_e$, $-1$ otherwise \\ \texttt{sameNeighbourhoodStart} & $1$ if $p$ resides in the same POI cluster as $p_s$, $-1$ otherwise \\ \texttt{sameNeighbourhoodEnd} & $1$ if $p$ resides in the same POI cluster as $p_e$, $-1$ otherwise \\ \texttt{distStart} & distance between $p$ and $p_s$, calculated using the Haversine formula \\ \texttt{distEnd} & distance between $p$ and $p_e$, calculated using the Haversine formula \\ \texttt{diffPopStart} & real-valued difference in POI popularity of $p$ from that of $p_s$ \\ \texttt{diffPopEnd} & real-valued difference in POI popularity of $p$ from that of $p_e$ \\ \texttt{diffNVisitStart} & real-valued difference in the total number of visit at $p$ from that at $p_s$ \\ \texttt{diffNVisitEnd} & real-valued difference in the total number of visit at $p$ from that at $p_e$ \\ \texttt{diffDurationStart} & real-valued difference in average duration at $p$ from that at $p_s$ \\ \texttt{diffDurationEnd} & real-valued difference in average duration at $p$ from that at $p_e$ \\ \hline \end{tabular} \end{table*} \begin{figure*}[ht] \centering \includegraphics[width=0.7\textwidth]{fig/poi_cats_fat.pdf} \caption{POI Categories} \label{fig:poicats} \end{figure*} \begin{figure*}[t] \includegraphics[width=\textwidth]{fig/feature_distro.pdf} \caption{Distribution of POI popularity, the number of visit and visit duration} \label{fig:distro}\vspace{-0.0in} \end{figure*} We described an algorithm to recommend trajectories based on ranking POIs (\textsc{PoiRank}) in Section~\ref{sec:rankplan}, the features used to rank POIs are POI and query specific, as described in Table~\ref{tab:featurerank}. Categories of POIs in all of the five trajectory datasets are show in Figure~\ref{fig:poicats}. The distribution of POI popularity, the number of visit and average visit duration are shown in Figure~\ref{fig:distro}. To rank POIs, features described in Table~\ref{tab:featurerank} are scaled to range $[-1.0, 1.0]$ using the same approach as that employed by libsvm (\url{http://www.csie.ntu.edu.tw/~cjlin/libsvm/}), i.e., fitting a linear function $f(x) = a x + b$ for feature $x$ such that the maximum value of $x$ maps to $1.0$ and the minimum value maps to $-1.0$. \section{Transition Probabilities} \begin{table}[ht] \caption{POI features used to factorise POI-POI transition probabilities} \label{tab:featuretran} \centering \setlength{\tabcolsep}{28pt} \begin{tabular}{l|l} \hline \textbf{Feature} & \textbf{Description} \\ \hline \texttt{category} & category of POI \\ \texttt{neighbourhood} & the cluster that a POI resides in \\ \texttt{popularity} & (discretised) popularity of POI \\ \texttt{nVisit} & (discretised) total number of visit at POI \\ \texttt{avgDuration} & (discretised) average duration at POI \\ \hline \end{tabular} \end{table} We compute the POI-POI transition matrix by factorising transition probabilities from POI $p_i$ to POI $p_j$ as a product of transition probabilities between pairs of individual POI features, which are shown in Table~\ref{tab:featuretran}. POI Features are discretised as described in Section~\ref{sec:feature} and transition matrices of individual features are computed using maximum likelihood estimation, i.e., counting the number of transitions for each pair of features then normalising each row, taking care of zeros by adding a small number $\epsilon$ \footnote{In our experiments, $\epsilon = 1$.} to each count before normalisation. Figure~\ref{fig:transmat_all} visualises the transition matrices for individual POI features in Melbourne. The POI-POI transition matrix is computed by taking the Kronecker product of the transition matrices for the individual features, and then updating it with the following constraints: \begin{itemize} \item Firstly, we disallow self transitions by setting probability of ($p_i$ to $p_i$) to zero. \item Secondly, when a group of POIs have identical (discretised) features (say a group with $M$ POIs), we distribute the probability uniformly among POIs in the group, in particular, the incoming (unnormalised) transition probability (say, $P_{in}$) of the group computed by taking the Kronecker product is divided uniformly among POIs in the group (i.e., $\frac{P_{in}}{M}$), which is equivalent to choose a POI in the group uniformly at random. Moreover, the outgoing (unnormalised) transition probability of each POI is the same as that of the group, since in this case, \textit{the transition from any POI in the group to one outside the group represents an outgoing transition from that group}. In addition, the self-loop transition of the group represents transitions from a POI in the group to other POIs ($M-1$ POIs) in the same group, \textit{similar to the outgoing case}, the (unnormalised) self-loop transition probability (say $P_o$) is divided uniformly (i.e., $\frac{P_o}{M-1}$), which corresponds to choose a transition (from $p_i$) among all transitions to the other $M-1$ POIs (exclude self-loop $p_i$ to $p_i$) in that group uniformly at random. \item Lastly, we remove feature combinations that has no POI in dataset and normalise each row of the (unnormalised) POI-POI transition matrix to form a valid probability distribution for each POI. \end{itemize} \begin{figure*}[htbp] \includegraphics[width=\textwidth]{fig/poi_transmat_all.png} \caption{Transition matrices for five POI features: POI category, neighbourhood, popularity, number of visits, and visit duration. These statistics are from the Melbourne dataset.} \label{fig:transmat_all} \end{figure*} \section{Experiment} \subsection{Dataset} Trajectories used in experiment (Section~\ref{sec:experiment}) are extracted using geo-tagged photos in the Yahoo! Flickr Creative Commons 100M (a.k.a. YFCC100M) dataset~\cite{thomee2016yfcc100m} as well as the Wikipedia web-pages of points-of-interest (POI). Photos are mapped to POIs according to their distances calculated using the Haversine formula~\cite{haversine}, the time a user arrived a POI is approximated by the time the first photo taken by the user at that POI, similarly, the time a user left a POI is approximated by the time the last photo taken by the user at that POI. Furthermore, sequence of POI visits by a specific user are divided into several pieces according to the time gap between consecutive POI visits, and the POI visits in each piece are connected in temporal order to form a trajectory~\cite{ht10, ijcai15}. \subsection{Parameters} We use a $0.5$ trade-off parameter for \textsc{PersTour} and \textsc{PersTour-L}, found to be the best weighting in~\cite{ijcai15}. The regularisation parameter $C$ in rankSVM is $10.0$. The trade-off parameter $\alpha$ in \textsc{Rank+Markov} and \textsc{Rank+MarkovPath} is tuned using cross validation. In particular, we split trajectories with more than $2$ POIs in a dataset into two (roughly) equal parts, and use the first part (i.e., validation set) to tune $\alpha$ (i.e., searching value of $\alpha$ such that \textsc{Rank+Markov} achieves the best performance on validation set, in terms of the mean of pairs-F$_1$ scores from leave-one-out cross validation), then test on the second part (leave-one-out cross validation) using the tuned $\alpha$, and vice verse. \subsection{Implementation} We employ the rankSVM implementation in libsvmtools (\url{https://www.csie.ntu.edu.tw/~cjlin/libsvmtools/}). Integer linear programming (ILP) are solved using Gurobi Optimizer (\url{http://www.gurobi.com/}) and lp\_solve (\url{http://lpsolve.sourceforge.net/}). Dataset and code for this work are available in repository \url{https://bitbucket.org/d-chen/tour-cikm16}. \subsection{Performance metric} A commonly used metric for evaluating POI and trajectory recommendation is the F$_1$ score on points~\cite{ijcai15}, Let $\mathcal{T}$ be the trajectory that was visited in the real world, and $\hat{\cal T}$ be the recommended trajectory, $\mathcal{P}_{\mathcal{T}}$ be the set of POIs visited in $\mathcal{T}$, and $\mathcal{P}_{\hat{\mathcal{T}}}$ be the set of POIs visited in $\hat{\mathcal{T}}$, F$_1$ score on points is the harmonic mean of precision and recall of POIs in trajectory, \begin{equation*} F_1= \frac{2 P_{\textsc{point}} R_{\textsc{point}}} {P_{\textsc{point}} + R_{\textsc{point}}}, \text{~where~} P_{\textsc{point}} = \frac{|\mathcal{P}_{\mathcal{T}} \cap \mathcal{P}_{\hat{\mathcal{T}}}|} {|\hat{\mathcal{T}}|} \text{~and~} R_{\textsc{point}} = \frac{|\mathcal{P}_{\mathcal{T}} \cap \mathcal{P}_{\hat{\mathcal{T}}}|} {|\mathcal{T}|}. \end{equation*} A perfect F$_1$ (i.e., F$_1 = 1$) means the POIs in the recommended trajectory are exactly the same set of POIs as those in the ground truth, and F$_1 = 0$ means that none of the POIs in the real trajectory was recommended. While F$_1$ score on points is good at measuring whether POIs are correctly recommended, it ignores the visiting order between POIs. $\text{Pairs-F}_1$ takes into account both the point identity and the visiting orders in a trajectory. This is done by measuring the F$_1$ score of every pair of ordered POIs, whether they are adjacent or not in trajectory, \begin{equation*} \text{pairs-F}_1 = \frac{2 P_{\textsc{pair}} R_{\textsc{pair}}} {P_{\textsc{pair}} + R_{\textsc{pair}}}, \text{~where~} P_{\textsc{pair}} = \frac{N_c} {|\hat{\mathcal{T}}|(|\hat{\mathcal{T}}|-1) / 2} \text{~and~} R_{\textsc{pair}} = \frac{N_c} {|\mathcal{T}|(|\mathcal{T}|-1) / 2}, \end{equation*} and $N_c$ \footnote{We define pairs-F$_1 = 0$ when $N_c = 0$.} is the number of ordered POI pairs $(p_j, p_k)$ that appear in both the ground-truth and the recommended trajectories, \begin{align*} (p_j \prec_{\mathcal{T}} p_k ~\land~ p_j \prec_{\hat{\mathcal{T}}} p_k) ~\lor~ (p_j \succ_{\mathcal{T}} p_k ~\land~ p_j \succ_{\hat{\mathcal{T}}} p_k), \end{align*} with $p_j \ne p_k, ~p_j, p_k \in \mathcal{P}_{\mathcal{T}} \cap \mathcal{P}_{\hat{\mathcal{T}}}, ~1 \le j \ne k \le |\mathcal{T}|$. Here $p_j \prec_{\mathcal{T}} p_k$ denotes POI $p_j$ was visited before POI $p_k$ in trajectory $\mathcal{T}$ and $p_j \succ_{\mathcal{T}} p_k$ denotes $p_j$ was visited after $p_k$ in $\mathcal{T}$. Pairs-F$_1$ takes values between 0 and 1. A perfect pairs-F$_1$ (1.0) is achieved if and only if both the POIs and their visiting orders in the recommended trajectory are exactly the same as those in the ground truth. Pairs-F$_1 = 0$ means none of the recommended POI pairs was actually visited in the real trajectory. Performance data reported in Table~\ref{tab:f1} and Table~\ref{tab:pairf1} are the mean and standard deviation of instances successfully recommended by all methods shown in Table~\ref{tab:algsummary}. \section*{Keywords} Trajectory recommendation; learning to rank; planning \input{1.introduction.tex} \input{2.method.tex} \input{3.recommendation.tex} \input{4.experiment.tex} \input{5.conclusion.tex} \section*{Acknowledgements} We thank Kwan Hui Lim for kindly providing his R code to reproduce his experiments. This work is supported in part by the Australian Research Council via the Discovery Project program DP140102185. \bibliographystyle{abbrv}
{ "attr-fineweb-edu": 2.011719, "attr-cc_en_topic": 0, "domain": "arxiv" }
BkiUdao5qdmDNenrz6Lq
\section{Introduction} The travel industry is more and more relying on e-commerce nowadays as online solutions have made life more convenient and comfortable for everyone. However, unlike the online shopping industry, the travel industry is more complicated to analyse in four ways: 1. user data are much more sparse. A user on amazon may have a lot of search and purchasing history on Amazon during a short period like one month. But in the travel industry, a passenger may only reserve fight tickets once a year. 2. For platforms such as Amazon, each good/item has a clear hierarchical category (e.g, diapers belong to the category of Baby care, which belongs to a higher level category of Baby), which may be used as the definition of user interest. In this application for the travel industry, we consider the destination as the item such as Paris, London or Shanghai. But it is hard to define a category for Paris in terms of explicit user interest. 3. Account and user information is necessary for online purchasing on Amazon, which makes it easier to group the history of searches and purchases by user. For online flight bookings for example, the user does not have to create an account during the booking phase. Travelers often book flight tickets, train tickets, hotel, activities on different platforms, which makes it harder to group the purchases by user. 4. Most online products such as movies or Amazon products have ratings as user can give feedback easily. However, given that a traveler has been to Paris for example, it is hard to get a rating from the traveler on the Paris trip. Because a trip includes tickets booking, hotel booking, Point Of Interests (POIs) visits, food, etc. On the other hand, it is difficult for travelers to rate a trip to a certain destination as there are many variables. All these differences make it a great challenge to collect, analyse and understand user data in the travel industry. \iffalse 5. There are many choices (airline companies, travel agencies and meta search websites) when searching for a flight, while there are fewer choices for shopping diapers online. This means that, the search and reservation data for the same user is distributed on many different online platforms. Due to GDPR, it is impossible to share information among platforms to match the user information. \fi One important way to help understand traveler trends is destination similarity. Destination similarity is very important for the travel industry: \begin{enumerate} \item For travelers: we want to help travelers to find similar destinations that they may be interested in. For example, with the search or booking history of a traveler, similar destinations can be recommended to the traveler for the next trip. Another example is that the unprecedented COVID-19 crisis has introduced a number of travel restrictions which will prevent leisure travelers from reaching to their dream destination. With the destination similarity, we can recommend them alternative non-restricted destinations they might also be interested in. \item For tourism offices: tourism offices can better identify their competitors for each origin market. This can allow them to better distinguish themselves and target travelers considering trips to similar locations. \item For online advertising companies, destination similarity can be used to identify if the current user who is searching for destination A would be interested in their impression of destination B, to improve the click through rate or conversion rate. \item For a sustainable travel industry, the destination similarity can be used to suggest destinations that travelers might be willing to visit (so with the potential to convert a search into a booking), but closer to their home or simply better served by direct flights, thus reducing the CO2 emissions linked to transport. It can also be a solution to fight over-tourism problems by recommending similar destinations with fewer tourists or offering a more local an authentic experience, making travel even more rewarding. \end{enumerate} In this work, we propose to measure destination similarity from the search logs based on anonymized cookie ID. Various similarity measures (among users or items) have been proposed in the literature for collaborative filtering (CF) based recommender systems. However, most of these measures are proposed based on the users' ratings, while there's no rating information for a destination (city) in the travel industry. This makes many similarity measures not suitable for our problem. To fill this gap in the literature, we investigate different possible solutions and propose a new similarity measure, which has superior performance than the state of the art methods. The remainder of the paper is organized as follows: the background and related works are introduced in Section \rom{2}. In Section \rom{3}, the proposed similarity measures are introduced. We describe the data sets chosen in this study and provide the protocol of our experiments in Section \rom{4} and the experimental results in Section \rom{5}. The final conclusion and future works are given in Section \rom{6}. \section{Background and related works} Generally speaking, before a traveler makes a booking for a holiday, there are two preliminary steps: inspiration and search. There are a lot of information sources that can inspire travelers such as a movie scene or recommendations from relatives. During the inspiration step, the user interest is broad and general, thus we can only estimate the implicit user interest. With enough accumulated motivation, travelers will then search for more detailed information from travel websites, blogs or reviews to enrich their general picture of the destination. Then, the traveler will start to search flight and hotel information. If the prices agree with the budget, the traveler will pass to the next step: booking. Otherwise, the traveler may search for another similar destination and compare the price. The general motivation here is that: when a user searches for travel to a destination, this action shows that the user is interested in this destination. But we don't know what exactly the user is interested in (e.g. the museum, beach, mountains or other activities), which can be called 'implicit user interest'. The explicit user interest is difficult and expensive to get. Many companies ask the customers directly, while others try to infer from customer shopping/spending behavior. However, apart from the costs, the explicit user interest may not be clear for travelers themselves either. Because tourist attractions or POIs are not the only reason travelers get interested in a destination, it may also due to the culture (local people, food, etc.), weather, events and so on. Capturing implicit user interest seems easier and more direct. When a user searches both destination A and destination B, there must be some similarities between these two destinations for this user. However, the user interest changes overtime. If the time difference between two different searches is 10 months for example, these two destinations may not be similar as the user interest may shift due to the season or travel context. Hence, limiting the time period between two different searches is important. The research question is: given that a user has searched for several destinations, how can we determine the destination similarity? In the literature, there are many item similarity measures that can be applied. For example, in recommender systems, the CF has become the most widely used method to recommend items for users \cite{ricci2011introduction, gazdar2020new}. The core of CF is to calculate similarities among users or items \cite{liu2014new}. Destination similarity can be seen as recommending destinations for users from the point of view of CF. The classic CF problem has a rating matrix. Let $R = [r_{u,i}]^{m\times n}$ be a rating matrix with $m$ users and $n$ items. Each entry $r_{ui}$ is a rating value given by user $u$ to item $i$. There are different ranges of $r_{ui}$ in real world datasets. Among which, the range 1,2,3,4,5 is adopted in many datasets such as movie reviews and restaurant reviews. Many predefined similarity measures can be used for CF. The most commonly used similarity measure are the Pearson correlation coefficient (PCC) \cite{su2009survey} and the Cosine similarity (Cos) \cite{adomavicius2005toward}: \begin{equation} \label{pcc} PCC(u, v) = \frac{\Sigma_{i\in I}(r_{u,i} -\overline{r}_u )(r_{v,i} -\overline{r}_v)}{\sqrt{\Sigma_{i\in I}(r_{u,i} -\overline{r}_u)^2} \sqrt{\Sigma_{i\in I}(r_{v,i} -\overline{r}_v)^2}} \end{equation} \begin{equation} \label{cos} Cos(u, v) = \frac{\Sigma_{i\in I}r_{u,i}r_{v,i}}{\sqrt{\Sigma_{i\in I}r_{u,i}^2 \Sigma_{i\in I}r_{v,i} ^2}} \end{equation} where $I$ represents the set of common rating items by users $u$ and $v$. $\overline{r}_u$ and $\overline{r}_v$ are the average rating value of user $u$ and $v$ respectively. Many variants of PCC and Cos have also been proposed. In \cite{shardanand1995social}, Constrained Pearson Correlation Coefficient (CPCC) has been proposed to take the negative rating values into account. Other studies suggest that the number of co-rated items can impact the performance of the similarity measure. Hence, the Weighted Pearson Correlation Coefficient (WPCC) \cite{herlocker2017algorithmic} and the Sigmoid Function based Pearson Correlation Coefficient (SPCC) \cite{jamali2009trustwalker} have also been proposed. Another widely used similarity measure is the Jaccard. Jaccard similarity coefficient is one of the popular methods for calculating the similarity between users/items \cite{arsan2016comparison, jain2020survey}: \begin{equation} \label{jac} Jaccard(u, v) = \frac{|I_u\cap I_v|}{|I_u\cup I_v|} \end{equation} where $I_u$ and $I_v$ are the sets of items rated by users u and v respectively. Unlike the previous two similarity measures, the Jaccard similarity considers only the number of co-rated items between two users regardless the rating values, which seems to be suitable for our problem. Apart from previously introduced widely used similarity measures, there are some recent advances in the literature too. In \cite{gazdar2020new}, the authors list the limitations of popular similarity measures used in CF and propose a new similarity measure by combining the percentage of non common ratings with the absolute difference of ratings. However, all these similarity measures are proposed for measuring user similarities with item ratings \cite{jain2020survey,al2018similarity}. However, this is not adapted to recommend destinations to travelers for two reasons. Firstly, there are no ratings for destinations from travelers. From search logs, we can only get binary response that if a traveler searched this destination or not. Secondly, due to the user interest change, only recent searches should be used to recommend destinations to travelers, which means that there are very few searched cities. It is difficult to measure the user similarity with little information. Hence, we need to measure the destination similarity instead of user similarity. Here, we propose a new similarity measure for items without any user ratings in the next section, and apply it to destination similarity in our experiments. \iffalse The main goal of CF is to recommend a subset (topN) of the unknown items to a given user. The most popular way of realizing this for datasets with ratings is to : 1. measure user similarity; 2. predict unknown ratings. \fi \section{Proposed similarity measures} To recommend a destination to a traveler who has made one or more searches recently, we can directly measure the destination similarity and recommend destinations similar to travelers' recent searches. Let $R = [r_{u,i}]^{m\times n}$ be a binary matrix of with $m$ users and $n$ destinations. $r_{u,i} = 1$ means user $u$ recently searched destination $i$, while $r_{u,i} = 0$ means user $u$ didn't search destination $i$. The matrix $R$ is very sparse due to two reasons: 1. there are many destinations while each traveler only knows few of them; 2. people don't plan travel frequently. As the matrix is binary, many commonly used CF similarity measures based on ratings are less meaningful. In this work, a simple and easy to understand similarity measure is proposed inspired from Random Forest Similarity (RFS) \cite{cao2019random, cao2019random1}. Random Forest (RF) classifier \cite{breiman2001random} is one of the most successful and widely used classifiers. A RF $\textbf{H}$ is an ensemble made up with $M$ decision trees , denoted as in Equation \eqref{e2}: \begin{equation}\label{e2} \mathbf{H}(\mathbf{X}) = \{h_k(\mathbf{X}),k=1,\dots,M\} \end{equation}. RFS is a similarity measure inferred from RF, which is also widely used \cite{shi2006unsupervised,cao2018improve,farhadi2015gene}. For each tree $h_k$ in the forest $\textbf{H}$, if two different instances $\mathbf{X}_i$ and $ \mathbf{X}_j$ fall in the same terminal node, they are considered as similar: \begin{equation}\label{sk} RFS^{(k)}(\mathbf{X}_i, \mathbf{X}_j)= \begin{cases} 1, & \text{if}\ l_k(\mathbf{X}_i) = l_k(\mathbf{X}_j)\\ 0, & \text{otherwise} \end{cases} \end{equation} where $l_k(\mathbf{X})$ is a function that returns the leaf node of tree $k$ given input $\mathbf{X}$. The final RFS measure $RFS^{(\mathbf{H})}$ consists in calculating the $RFS^{(k)}$ value for each tree ${h_k}$ in the forest, and to average the resulting similarity values over the $M$ trees as in Equation \eqref{simil}: \begin{equation}\label{simil} RFS^{(\mathbf{H})}(\mathbf{X}_i, \mathbf{X}_j) = \frac{1}{M}\sum_{k=1}^{M} RFS^{(k)}(\mathbf{X}_i, \mathbf{X}_j) \end{equation} \textbf{Cluster Consensus Similarity ($CCS$)}: RFS is mostly designed for classification problems, which is not suitable for the destination similarity problem with only user-destination interaction matrix. However, inspired by the idea of RFS, we propose a simple method named $CCS$. For RFS, each tree is trained to give a different data partition: each leaf node groups one or several instances together. In this work, the destinations searched by each user can be seen as a cluster. The destinations in this cluster share some similarity in terms of this user's interest. With this intuition, the similarity between destination $i$ and $j$ for user $u$ can be defined as: \begin{equation}\label{ccs1} CCS^{(u)}(i, j) = \begin{cases} 1, & \text{if}\ r_{u,i} = r_{u,j} = 1 \\ 0, & \text{otherwise} \end{cases} \end{equation} Similar to RFS, the final similarity between destination $i$ and $j$ is then averaged over all users: \begin{equation}\label{ccs2} CCS^{(\mathbf{U})}(i, j)= \frac{1}{m}\sum_{u=1}^{m} CCS^{(u)}(i, j) \end{equation} With the proposed $CCS$ measure, a $n\times n$ destination similarity matrix can be provided (\eqref{matrix}). \begin{equation}\label{matrix} \textbf{S}^{CCS} = \begin{bmatrix} CCS^{(\mathbf{U})}(1,1) & CCS^{(\mathbf{U})}(1,2) & \dots & CCS^{(\mathbf{U})}(1,n) \\ CCS^{(\mathbf{U})}(2,1) & CCS^{(\mathbf{U})}(2,2) & \dots & CCS^{(\mathbf{U})}(2,n) \\ \vdots & \vdots & \vdots & \vdots \\ CCS^{(\mathbf{U})}(n,1) & CCS^{(\mathbf{U})}(n,2) & \dots & CCS^{(\mathbf{U})}(n,n) \end{bmatrix} \end{equation} In the similarity matrix, each row is a similarity vector, represents its similarity to all other destinations. To avoid recommend the searched destination, the diagonal values (the similarity value to itself) are set to 0. \textbf{ Normed Cluster Consensus Similarity ($CCS_{norm}$)}: The proposed $CCS$ method is simple and easy to understand. The destination similarity for each user is calculated at first to reflect this user's travel interest. Then, the similarity values of all users are averaged, which can be seen as a majority voting process. However, there are two requirements for a similarity measure. The first one is that each similarity vector can provide a good ranking, so that it can answer which are the most similar destinations given a known destination. The second one is to provide a solution when given multiple inputs. This requires that different similarity vectors (e.g. $\textbf{S}^{CCS}_{i,}$ and $\textbf{S}^{CCS}_{j,}$) should be comparable so that simple operations such as summing make sense. CCS can meet the first requirement, but not the second one. Because less popular destinations have very small value, especially when the user number $m$ is very large, while popular destinations have larger value. This means that if we average two similarity vectors to find destinations similar to both given searched cities, the popular one dominates the averaged results. This means that we only focus on the destinations to the popular one and ignore the less popular one. To mitigate this effect, the $CCS_{norm}$ is proposed to re-scale each similarity vector: \begin{equation}\label{ccsnorm} \textbf{S}^{CCS_{norm}}_{i,}= \frac{\textbf{S}^{CCS}_{i,}}{max(\textbf{S}^{CCS}_{i,})} \end{equation} \iffalse where $\textbf{S}^{CCS}_{i,}$ is the vector of similarity between destination i and all other destinations calculated with CCS: \begin{equation}\label{ccsnorm} \textbf{S}^{CCS}_{i,}= [CCS^{(\mathbf{U})}(i,1), CCS^{(\mathbf{U})}(i,2),...,CCS^{(\mathbf{U})}(i,n)] \end{equation} \fi \textbf{ Popularity based Cluster Consensus Similarity ($PCCS$)}: The proposed CCS\_norm method re-scales the similarity vectors for all destinations to the same range [0,1] so that they are comparable between destinations. CCS\_norm can help to avoid the situation that popular destination dominates the final similarity values when merging with less popular destinations. However, this brings another problem. Popular destinations are searched by most people, which means there are more data and we have more confidence on the similarity vector for popular destinations. For example, if unpopular destination $i$ has been searched by 2 users among 1 million users and another unpopular destination $j$ has been searched by another 2 users, the similarity between $i$ and $j$ is 0. However, the fact can be that they are quite similar, we just don't have enough data to support their similarity. Hence, we have more confidence for the similarity vector of popular destinations. With this intuition, the $PCCS$ is proposed based on the $CCS_{norm}$: \begin{equation}\label{PCCS} \textbf{S}^{PCCS}_{i,}= \frac{1}{1+e^{p_i- \textbf{S}^{CCS_{norm}}_{i,}}} \end{equation} where $p_i$ is the popularity of destination $i$, defined on $b_i$, the rank of destination $i$ based on the number of searches: \begin{equation}\label{popularity} p_i= 1-w\times \frac{b_i}{m} \end{equation} Here, $w \in (0,1)$ is a parameter to control the difference between the similarity values for popular and unpopular destinations. This allows a trade-off between putting more confidence on popular destinations ($ w > 0$) and putting same confidence for all destinations ($w = 0$). When $w$ is bigger, the popular destination vectors are more weighted. However, too big $w$ may lead to over focus on popular destinations and decrease the recommendation diversity. \section{Experiments} \subsection{Description of datasets} In this work, we use the destination search data. In this dataset, we use anonymized user cookie id to group the searches of the same user together. Users from 5 countries are selected due to business interest. The data contain activity related to search sessions, from which only 3 columns are preserved: the anonymized user cookie id, the searched destination location and the country from which the search was made. This last field is only used to create multiple market specific datasets to preserve cultural differences in destination preferences. \subsection{Protocol of experiments} The main objective of our experiments is to find the most suited similarity measure for search logs (binary user-items interaction). Apart from comparing different measures, the cultural difference and user interest change should also be taken into consideration. \textbf{Culture difference} The travel preferences of a French person, for example, may be different from those of a Chinese person. To deal with this challenge, we took country/culture into account when measuring destination similarity. The destination similarity is measured by the country from which the search was made. \textbf{User interest change} For most countries, user interest can changes over time. For example, summer destinations are usually different from winter destinations. To cope with this challenge, we regularly update the destination similarity to adapt to this shift in user interest. In the previous solution, recent two months data are used to calculate the similarity matrix and the results are updated weekly. \textbf{Training} For the training procedure, we have data from 5 countries. For each country, 10 time periods are selected across 2019 and 2020. The three proposed methods $CCS$, $CCS_{norm}$ and $PCCS$ are compared to 4 widely used similarity measures in the literature: Cosine similarity, Pearson similarity, Jaccard similarity and Kulsinski similarity. Like Jaccard similarity, Kulsinski similarity is also a measure for binary vectors. It is less popular, but tested in many different fields \cite{lewis2019data,smailagic2018medal,levine2017acquiring} and achieving some good results \cite{vinayan2018amritanlp}. For $PCCS$ method, different $w$ values are tested and the best $w$ is selected. \textbf{Testing} The similarity matrix is calculated from the 8 weeks of data, and tested on the data of the following week. During the test phase, we randomly mask one searched destination for each user, and use the rest of the searched destinations to predict the masked destination. To realize this, the average of the rest searched destinations' similarity vectors is calculated. Then, we check if the masked destination is in the list of top 5 most similar destinations. \section{Results} \subsection{Comparison of different similarity measures} To compare seven similarity measures, the experiments on 5 countries data with 10 time periods for each country have been done. The means and standard deviations of the top 5 accuracy (relative) over 10 periods for each method are shown in Table \ref{table1}. \iffalse \begin{figure}[htbp] \centering \includegraphics[width=0.49\textwidth]{f1.PNG} \caption{The experimental results of seven similarity measures on 5 countries' data. X-axis is the country. Y-axis is the average top 5 accuracy. } \label{fig:f1} \end{figure} \fi \begin{table*}[htbp] \centering \caption{The experimental results on five countries data, with ten time periods for each country. The mean and standard deviation of top 5 accuracy (relative) is shown. The baseline is Pearson method, and the numbers in this table show the improvement against Pearson method.} \label{table1} \begin{adjustbox}{width=0.75\textwidth} \begin{tabular}{|l |p{1cm}| p{1.cm}| p{1.cm}| p{1.2cm}|p{1.cm}| p{1.1cm}|p{1.1cm}|} \hline & Pearson & Cos & Jaccard & Kulsinski & CCS & $CCS_{norm}$ & PCCS \\ \hline Country1 & $baseline$ & $2.22\% \pm 0.80$ & $2.90\% \pm 1.13$ & $9.38\% \pm 2.06$ & $9.39\% \pm 1.84$ & $9.38\% \pm 1.97$ & $\mathbf{9.89}\% \pm 1.82$ \vspace*{0.0mm} \\ Country2 & $baseline$ & $2.10\% \pm 1.21$ & $2.49\% \pm 1.57$ & $4.39\% \pm 1.86$ & $4.24\% \pm 1.93$ & $5.09\% \pm 1.81$ & $\mathbf{5.78}\% \pm 1.90$ \vspace*{0.0mm} \\ Country3 & $baseline$ & $1.36\% \pm 0.28$ & $2.29\% \pm 0.75$ & $4.70\% \pm 1.39$ & $4.51\% \pm 1.36$ & $4.96\% \pm 1.29$ & $\mathbf{5.71}\% \pm 1.36$ \vspace*{0.0mm} \\ Country4 & $baseline$ & $2.92\% \pm 0.42$ & $2.81\% \pm 0.53$ & $5.14\% \pm 0.51$ & $5.02\% \pm 0.48$ & $5.80\% \pm 0.51$ & $\mathbf{6.47}\% \pm 0.48$ \vspace*{0.0mm} \\ Country5 & $baseline$ & $4.22\% \pm 0.80$ & $3.90\% \pm 0.77$ & $5.81\% \pm 0.48$ & $5.55\% \pm 0.45$ & $6.32\% \pm 0.41$ & $\mathbf{7.18}\% \pm 0.44$ \vspace*{0.0mm} \\ \hline Avg rank &7.00 &5.60 &5.40 &3.20 &3.60 &2.20 &1.00 \\ \hline Avg improvement &0.00 &2.56\% &2.88\% &5.89\% &5.74\% &6.31\% &7.01\% \vspace*{0.5mm} \\ \hline \end{tabular} \end{adjustbox} \end{table*} The results illustrate that among all five countries, $PCCS$ always has the best performance, followed by $CCS_{norm}$. $Pearson$ is always the worst performing method among all similarity measures. Table \ref{table1} gives more details on comparison. On average, the proposed $PCCS$ improves the performance of $Pearson$ by 7.01\%. Compared to the previous selected method $Cos$, $PCCS$ gains an improvement of 4.44\% on average. From the results of average ranking in Table \ref{table1}, it can be observed that two best performing methods are $PCCS$ and $CCS_{norm}$, while the widely used similarity measures $Cos$ and $Pearson$ are the worst. One reason is that these two measures are not designed for the comparison for binary vectors. \iffalse From Table \ref{table1}, it can be seen that Country2, Country3, Country4 and Country5 have similar average performances for each similarity measure (e.g., $PCCS$ has around 54.30\% accuracy on all these four countries). However, the performance on RU is obviously worse (e.g., $PCCS$ has 48.67\% accuracy on RU). One possible reason is that compared to GB, FR, ES and DE, we have fewer user data from RU. For RU, $PCCS$ even has an improvement of 9.89\% over $Pearson$ and an improvement of 7.67\% over $Cos$. Compared to the performance gap on other countries, $PCCS$ has bigger improvement on RU, which may indicate that $PCCS$ is more suited to small data problems than other similarity measures. \fi The Wilcoxon-Holm post hoc test with Critical Differences (CD) is also done to have an overall statistical comparison. The statistical test result is shown in Figure \ref{fig:cd}. Generally speaking, all three $CCS$ based methods are significantly better than $Pearson$ and $Cos$. The proposed $PCCS$ is significantly better than all other 6 similarity measures. \begin{figure}[htbp] \centering \includegraphics[width=0.49\textwidth]{cdtest.PNG} \caption{The CD diagram according to the Wilcoxon-Holm post hoc test result when alpha is 0.00001. Methods connected with bold black line have no significant difference. } \label{fig:cd} \end{figure} \subsection{Comparison of different training size} \begin{table*}[htbp] \centering \caption{The experimental results with four weeks of training data instead of eight weeks. The mean and standard deviation of top 5 accuracy (relative) is shown. The baseline is Pearson method, and the numbers in this table show the improvement against Pearson method.} \label{table2} \begin{adjustbox}{width=0.75\textwidth} \begin{tabular}{|l |p{1cm}| p{1.cm}| p{1.cm}| p{1.2cm}|p{1.cm}| p{1.1cm}|p{1.1cm}|} \hline & Pearson & Cos & Jaccard & Kulsinski & CCS & $CCS_{norm}$ & PCCS \\ \hline Country1 & $baseline$ & $2.21\% \pm 1.22$ & $2.89\% \pm 1.50$ & $8.88\% \pm 1.51$ & $8.81\% \pm 1.43$ & $8.87\% \pm 2.46$ & $\mathbf{9.45}\% \pm 2.04$ \vspace*{0.0mm} \\ Country2 & $baseline$ & $1.91\% \pm 0.70$ & $2.54\% \pm 0.90$ & $4.37\% \pm 1.41$ & $4.29\% \pm 1.38$ & $4.99\% \pm 1.25$ & $\mathbf{5.70}\% \pm 1.34$ \vspace*{0.0mm} \\ Country3 & $baseline$ & $1.20\% \pm 0.29$ & $2.26\% \pm 0.80$ & $5.01\% \pm 1.41$ & $4.99\% \pm 1.40$ & $5.31\% \pm 1.36$ & $\mathbf{6.01}\% \pm 1.40$ \vspace*{0.0mm} \\ Country4 & $baseline$ & $2.85\% \pm 0.36$ & $2.86\% \pm 0.59$ & $5.62\% \pm 0.73$ & $5.52\% \pm 0.74$ & $6.24\% \pm 0.74$ & $\mathbf{6.90}\% \pm 0.70$ \vspace*{0.0mm} \\ Country5 & $baseline$ & $3.79\% \pm 0.94$ & $3.87\% \pm 0.93$ & $5.97\% \pm 0.64$ & $5.76\% \pm 0.66$ & $6.40\% \pm 0.58$ & $\mathbf{7.26}\% \pm 0.66$ \vspace*{0.0mm} \\ \hline Avg rank &7.00 &6.00 &5.00 &2.80 &4.00 &2.20 &1.00 \\ \hline Avg improvement &0.00 &2.39\% &2.88\% &5.97\% &5.87\% &6.36\% &7.07\% \vspace*{0.5mm} \\ \hline \end{tabular} \end{adjustbox} \end{table*} In the previous section, all the experiments use the previous eight weeks of data as training data for the following week of test data. However, using eight weeks of training data for each market and updating the results weekly can be very time consuming and computationally expensive. In this section, we try to reduce the training data to four weeks only to see to which extend the prediction performance is affected. The experimental results are presented in Table \ref{table2}. Similar to the analysis in the previous section using eight week training data, the results trained on only four weeks data also show that the proposed $PCCS$ is the best performing method while $Cos$ and $Pearson$ are the worst. On average, $PCCS$ increases the performance of $Pearson$ by 7.06\% and increases the performance of $Cos$ by 4.67\%, which is also similar to the conclusion in the previous section. However, compared to the results in Table \ref{table1}, the average performance of each similarity measure is not strongly impacted by the reduction of training data size. The method that has the biggest difference is $Cos$, with a reduction of 0.65\% of accuracy. $PCCS$ has a performance reduction of 0.42\% on average. But when we look into each country, it can be found that the differences on Country2, Country3, Country4 and Country5 are negligible, but the performance on Country1 has a drop around 1.76\% (this analysis is based on the comparison of the absolute performances, which is not disclosed). One possible reason can be that, the data coverage in Country1 is not very good and the data size is much smaller than other countries, which can also explain this big performance drop on Country1 than other countries. The objective of this experiment is to answer the research question: how many data are enough to have a good prediction performance but at the same time keep the computational efficiency. The experimental results show that for countries with good data coverage, four weeks training data are enough compared to eight weeks training data. However, for countries without a proper data coverage, it's better to use eight weeks training data. \iffalse \subsection{Comparison of updating frequency} In the previous section, one possibility of improving the computational efficiency has been discussed in terms of reducing the training data size. Another way to reduce the computational cost is to update the results less frequently. \fi \subsection{Comparison of data from 2019 and 2020} Due to the crisis of covid-19, the data volume of 2020 is much smaller than the volume of 2019 data. The question in this section is: should we use data from 2019 or 2020 as training data for the prediction of 2020? To answer this question, we choose the first week of June 2020 as the test data and use the data from April and May 2019 and 2020 respectively as training data to compare their prediction performance. We choose this time period is because there were confinement in Europe during April and May 2020, and the data volume is lower compared to the same period in 2019. The experimental results are shown in Figure \ref{fig:vs}. Globally speaking, 2020 data have better perdition performance than 2019 data even though the 2020 data volume is much smaller. The smallest difference happens in Country3 with 1.05\% accuracy gap, while the biggest difference happens in Country4 with 4.75\% performance gap. One possible reason may be the change of user interest. Another possible reason may be that the covid-19 crisis has changed user's behavior (e.g. more local travel than international travel). \begin{figure}[htbp] \centering \includegraphics[width=0.45\textwidth]{20192020.PNG} \caption{The comparison of the prediction performance between 2019 data and 2020 data. X-axis is the country, Y-axis is the accuracy.} \label{fig:vs} \end{figure} \section{Conclusion} In this work, we have presented the challenges of recommending destinations in the travel industry and the differences of recommending items other e-commerce sectors. The challenges of extra sparseness, dispersed user history, change of user interest, few direct or indirect feedbacks make it much harder to understand travelers and make recommendations more difficult than for other online consumers. To tackle these challenges and to understand travelers, we decide to measure the destination similarity in terms of traveler's implicit user interest. There are many similarity measures proposed in the field of collaborative filtering. However, most of these measures are designed for user-item interaction with ratings. Hence we propose a new similarity measure for user-item interaction without ratings to deal with the challenges in travel industry. The proposed $PCCS$ is inspired from Random Forest Similarity to take user interest into account. The destination popularity is added to adjust the magnitude of each similarity vector so that the similarity vectors of different destinations can be fused correctly. After comparing seven different similarity measures on real world data, the proposed $PCCS$ is proved to be the best solution. However, there are some improvements can be done. Firstly, we can expand single-source to multi-sources. Users can be limited by the knowledge of destinations, which means that there are destinations users never search because they don't know them not because they are not interested in them. Search logs can only reflect the similarity among destinations known to users. In this way, apart from the implicit user interest from search logs, other sources can be added such as destination images or descriptions. By using multi-source data, the implicit user interest can be fused with the destination information to provide a more meaningful similarity measure. Secondly, more user information and session information can be collected to provide a more personalized similarity measure. But more input information may also limit its use in real world use cases. \iffalse Thirdly, a unsymmetrical similarity can be proposed instead of a symmetrical one. Most similarity or distance measures are symmetrical due to the thinking that: if A is similar to B, B is similar to A. \fi \bibliographystyle{ieeetr} \section{Introduction} The travel industry is more and more relying on e-commerce nowadays as online solutions have made life more convenient and comfortable for everyone. However, unlike the online shopping industry, the travel industry is more complicated to analyse in four ways: 1. user data are much more sparse. A user on amazon may have a lot of search and purchasing history on Amazon during a short period like one month. But in the travel industry, a passenger may only reserve fight tickets once a year. 2. For platforms such as Amazon, each good/item has a clear hierarchical category (e.g, diapers belong to the category of Baby care, which belongs to a higher level category of Baby), which may be used as the definition of user interest. In this application for the travel industry, we consider the destination as the item such as Paris, London or Shanghai. But it is hard to define a category for Paris in terms of explicit user interest. 3. Account and user information is necessary for online purchasing on Amazon, which makes it easier to group the history of searches and purchases by user. For online flight bookings for example, the user does not have to create an account during the booking phase. Travelers often book flight tickets, train tickets, hotel, activities on different platforms, which makes it harder to group the purchases by user. 4. Most online products such as movies or Amazon products have ratings as user can give feedback easily. However, given that a traveler has been to Paris for example, it is hard to get a rating from the traveler on the Paris trip. Because a trip includes tickets booking, hotel booking, Point Of Interests (POIs) visits, food, etc. On the other hand, it is difficult for travelers to rate a trip to a certain destination as there are many variables. All these differences make it a great challenge to collect, analyse and understand user data in the travel industry. \iffalse 5. There are many choices (airline companies, travel agencies and meta search websites) when searching for a flight, while there are fewer choices for shopping diapers online. This means that, the search and reservation data for the same user is distributed on many different online platforms. Due to GDPR, it is impossible to share information among platforms to match the user information. \fi One important way to help understand traveler trends is destination similarity. Destination similarity is very important for the travel industry: \begin{enumerate} \item For travelers: we want to help travelers to find similar destinations that they may be interested in. For example, with the search or booking history of a traveler, similar destinations can be recommended to the traveler for the next trip. Another example is that the unprecedented COVID-19 crisis has introduced a number of travel restrictions which will prevent leisure travelers from reaching to their dream destination. With the destination similarity, we can recommend them alternative non-restricted destinations they might also be interested in. \item For tourism offices: tourism offices can better identify their competitors for each origin market. This can allow them to better distinguish themselves and target travelers considering trips to similar locations. \item For online advertising companies, destination similarity can be used to identify if the current user who is searching for destination A would be interested in their impression of destination B, to improve the click through rate or conversion rate. \item For a sustainable travel industry, the destination similarity can be used to suggest destinations that travelers might be willing to visit (so with the potential to convert a search into a booking), but closer to their home or simply better served by direct flights, thus reducing the CO2 emissions linked to transport. It can also be a solution to fight over-tourism problems by recommending similar destinations with fewer tourists or offering a more local an authentic experience, making travel even more rewarding. \end{enumerate} In this work, we propose to measure destination similarity from the search logs based on anonymized cookie ID. Various similarity measures (among users or items) have been proposed in the literature for collaborative filtering (CF) based recommender systems. However, most of these measures are proposed based on the users' ratings, while there's no rating information for a destination (city) in the travel industry. This makes many similarity measures not suitable for our problem. To fill this gap in the literature, we investigate different possible solutions and propose a new similarity measure, which has superior performance than the state of the art methods. The remainder of the paper is organized as follows: the background and related works are introduced in Section \rom{2}. In Section \rom{3}, the proposed similarity measures are introduced. We describe the data sets chosen in this study and provide the protocol of our experiments in Section \rom{4} and the experimental results in Section \rom{5}. The final conclusion and future works are given in Section \rom{6}. \section{Background and related works} Generally speaking, before a traveler makes a booking for a holiday, there are two preliminary steps: inspiration and search. There are a lot of information sources that can inspire travelers such as a movie scene or recommendations from relatives. During the inspiration step, the user interest is broad and general, thus we can only estimate the implicit user interest. With enough accumulated motivation, travelers will then search for more detailed information from travel websites, blogs or reviews to enrich their general picture of the destination. Then, the traveler will start to search flight and hotel information. If the prices agree with the budget, the traveler will pass to the next step: booking. Otherwise, the traveler may search for another similar destination and compare the price. The general motivation here is that: when a user searches for travel to a destination, this action shows that the user is interested in this destination. But we don't know what exactly the user is interested in (e.g. the museum, beach, mountains or other activities), which can be called 'implicit user interest'. The explicit user interest is difficult and expensive to get. Many companies ask the customers directly, while others try to infer from customer shopping/spending behavior. However, apart from the costs, the explicit user interest may not be clear for travelers themselves either. Because tourist attractions or POIs are not the only reason travelers get interested in a destination, it may also due to the culture (local people, food, etc.), weather, events and so on. Capturing implicit user interest seems easier and more direct. When a user searches both destination A and destination B, there must be some similarities between these two destinations for this user. However, the user interest changes overtime. If the time difference between two different searches is 10 months for example, these two destinations may not be similar as the user interest may shift due to the season or travel context. Hence, limiting the time period between two different searches is important. The research question is: given that a user has searched for several destinations, how can we determine the destination similarity? In the literature, there are many item similarity measures that can be applied. For example, in recommender systems, the CF has become the most widely used method to recommend items for users \cite{ricci2011introduction, gazdar2020new}. The core of CF is to calculate similarities among users or items \cite{liu2014new}. Destination similarity can be seen as recommending destinations for users from the point of view of CF. The classic CF problem has a rating matrix. Let $R = [r_{u,i}]^{m\times n}$ be a rating matrix with $m$ users and $n$ items. Each entry $r_{ui}$ is a rating value given by user $u$ to item $i$. There are different ranges of $r_{ui}$ in real world datasets. Among which, the range 1,2,3,4,5 is adopted in many datasets such as movie reviews and restaurant reviews. Many predefined similarity measures can be used for CF. The most commonly used similarity measure are the Pearson correlation coefficient (PCC) \cite{su2009survey} and the Cosine similarity (Cos) \cite{adomavicius2005toward}: \begin{equation} \label{pcc} PCC(u, v) = \frac{\Sigma_{i\in I}(r_{u,i} -\overline{r}_u )(r_{v,i} -\overline{r}_v)}{\sqrt{\Sigma_{i\in I}(r_{u,i} -\overline{r}_u)^2} \sqrt{\Sigma_{i\in I}(r_{v,i} -\overline{r}_v)^2}} \end{equation} \begin{equation} \label{cos} Cos(u, v) = \frac{\Sigma_{i\in I}r_{u,i}r_{v,i}}{\sqrt{\Sigma_{i\in I}r_{u,i}^2 \Sigma_{i\in I}r_{v,i} ^2}} \end{equation} where $I$ represents the set of common rating items by users $u$ and $v$. $\overline{r}_u$ and $\overline{r}_v$ are the average rating value of user $u$ and $v$ respectively. Many variants of PCC and Cos have also been proposed. In \cite{shardanand1995social}, Constrained Pearson Correlation Coefficient (CPCC) has been proposed to take the negative rating values into account. Other studies suggest that the number of co-rated items can impact the performance of the similarity measure. Hence, the Weighted Pearson Correlation Coefficient (WPCC) \cite{herlocker2017algorithmic} and the Sigmoid Function based Pearson Correlation Coefficient (SPCC) \cite{jamali2009trustwalker} have also been proposed. Another widely used similarity measure is the Jaccard. Jaccard similarity coefficient is one of the popular methods for calculating the similarity between users/items \cite{arsan2016comparison, jain2020survey}: \begin{equation} \label{jac} Jaccard(u, v) = \frac{|I_u\cap I_v|}{|I_u\cup I_v|} \end{equation} where $I_u$ and $I_v$ are the sets of items rated by users u and v respectively. Unlike the previous two similarity measures, the Jaccard similarity considers only the number of co-rated items between two users regardless the rating values, which seems to be suitable for our problem. Apart from previously introduced widely used similarity measures, there are some recent advances in the literature too. In \cite{gazdar2020new}, the authors list the limitations of popular similarity measures used in CF and propose a new similarity measure by combining the percentage of non common ratings with the absolute difference of ratings. However, all these similarity measures are proposed for measuring user similarities with item ratings \cite{jain2020survey,al2018similarity}. However, this is not adapted to recommend destinations to travelers for two reasons. Firstly, there are no ratings for destinations from travelers. From search logs, we can only get binary response that if a traveler searched this destination or not. Secondly, due to the user interest change, only recent searches should be used to recommend destinations to travelers, which means that there are very few searched cities. It is difficult to measure the user similarity with little information. Hence, we need to measure the destination similarity instead of user similarity. Here, we propose a new similarity measure for items without any user ratings in the next section, and apply it to destination similarity in our experiments. \iffalse The main goal of CF is to recommend a subset (topN) of the unknown items to a given user. The most popular way of realizing this for datasets with ratings is to : 1. measure user similarity; 2. predict unknown ratings. \fi \section{Proposed similarity measures} To recommend a destination to a traveler who has made one or more searches recently, we can directly measure the destination similarity and recommend destinations similar to travelers' recent searches. Let $R = [r_{u,i}]^{m\times n}$ be a binary matrix of with $m$ users and $n$ destinations. $r_{u,i} = 1$ means user $u$ recently searched destination $i$, while $r_{u,i} = 0$ means user $u$ didn't search destination $i$. The matrix $R$ is very sparse due to two reasons: 1. there are many destinations while each traveler only knows few of them; 2. people don't plan travel frequently. As the matrix is binary, many commonly used CF similarity measures based on ratings are less meaningful. In this work, a simple and easy to understand similarity measure is proposed inspired from Random Forest Similarity (RFS) \cite{cao2019random, cao2019random1}. Random Forest (RF) classifier \cite{breiman2001random} is one of the most successful and widely used classifiers. A RF $\textbf{H}$ is an ensemble made up with $M$ decision trees , denoted as in Equation \eqref{e2}: \begin{equation}\label{e2} \mathbf{H}(\mathbf{X}) = \{h_k(\mathbf{X}),k=1,\dots,M\} \end{equation}. RFS is a similarity measure inferred from RF, which is also widely used \cite{shi2006unsupervised,cao2018improve,farhadi2015gene}. For each tree $h_k$ in the forest $\textbf{H}$, if two different instances $\mathbf{X}_i$ and $ \mathbf{X}_j$ fall in the same terminal node, they are considered as similar: \begin{equation}\label{sk} RFS^{(k)}(\mathbf{X}_i, \mathbf{X}_j)= \begin{cases} 1, & \text{if}\ l_k(\mathbf{X}_i) = l_k(\mathbf{X}_j)\\ 0, & \text{otherwise} \end{cases} \end{equation} where $l_k(\mathbf{X})$ is a function that returns the leaf node of tree $k$ given input $\mathbf{X}$. The final RFS measure $RFS^{(\mathbf{H})}$ consists in calculating the $RFS^{(k)}$ value for each tree ${h_k}$ in the forest, and to average the resulting similarity values over the $M$ trees as in Equation \eqref{simil}: \begin{equation}\label{simil} RFS^{(\mathbf{H})}(\mathbf{X}_i, \mathbf{X}_j) = \frac{1}{M}\sum_{k=1}^{M} RFS^{(k)}(\mathbf{X}_i, \mathbf{X}_j) \end{equation} \textbf{Cluster Consensus Similarity ($CCS$)}: RFS is mostly designed for classification problems, which is not suitable for the destination similarity problem with only user-destination interaction matrix. However, inspired by the idea of RFS, we propose a simple method named $CCS$. For RFS, each tree is trained to give a different data partition: each leaf node groups one or several instances together. In this work, the destinations searched by each user can be seen as a cluster. The destinations in this cluster share some similarity in terms of this user's interest. With this intuition, the similarity between destination $i$ and $j$ for user $u$ can be defined as: \begin{equation}\label{ccs1} CCS^{(u)}(i, j) = \begin{cases} 1, & \text{if}\ r_{u,i} = r_{u,j} = 1 \\ 0, & \text{otherwise} \end{cases} \end{equation} Similar to RFS, the final similarity between destination $i$ and $j$ is then averaged over all users: \begin{equation}\label{ccs2} CCS^{(\mathbf{U})}(i, j)= \frac{1}{m}\sum_{u=1}^{m} CCS^{(u)}(i, j) \end{equation} With the proposed $CCS$ measure, a $n\times n$ destination similarity matrix can be provided (\eqref{matrix}). \begin{equation}\label{matrix} \textbf{S}^{CCS} = \begin{bmatrix} CCS^{(\mathbf{U})}(1,1) & CCS^{(\mathbf{U})}(1,2) & \dots & CCS^{(\mathbf{U})}(1,n) \\ CCS^{(\mathbf{U})}(2,1) & CCS^{(\mathbf{U})}(2,2) & \dots & CCS^{(\mathbf{U})}(2,n) \\ \vdots & \vdots & \vdots & \vdots \\ CCS^{(\mathbf{U})}(n,1) & CCS^{(\mathbf{U})}(n,2) & \dots & CCS^{(\mathbf{U})}(n,n) \end{bmatrix} \end{equation} In the similarity matrix, each row is a similarity vector, represents its similarity to all other destinations. To avoid recommend the searched destination, the diagonal values (the similarity value to itself) are set to 0. \textbf{ Normed Cluster Consensus Similarity ($CCS_{norm}$)}: The proposed $CCS$ method is simple and easy to understand. The destination similarity for each user is calculated at first to reflect this user's travel interest. Then, the similarity values of all users are averaged, which can be seen as a majority voting process. However, there are two requirements for a similarity measure. The first one is that each similarity vector can provide a good ranking, so that it can answer which are the most similar destinations given a known destination. The second one is to provide a solution when given multiple inputs. This requires that different similarity vectors (e.g. $\textbf{S}^{CCS}_{i,}$ and $\textbf{S}^{CCS}_{j,}$) should be comparable so that simple operations such as summing make sense. CCS can meet the first requirement, but not the second one. Because less popular destinations have very small value, especially when the user number $m$ is very large, while popular destinations have larger value. This means that if we average two similarity vectors to find destinations similar to both given searched cities, the popular one dominates the averaged results. This means that we only focus on the destinations to the popular one and ignore the less popular one. To mitigate this effect, the $CCS_{norm}$ is proposed to re-scale each similarity vector: \begin{equation}\label{ccsnorm} \textbf{S}^{CCS_{norm}}_{i,}= \frac{\textbf{S}^{CCS}_{i,}}{max(\textbf{S}^{CCS}_{i,})} \end{equation} \iffalse where $\textbf{S}^{CCS}_{i,}$ is the vector of similarity between destination i and all other destinations calculated with CCS: \begin{equation}\label{ccsnorm} \textbf{S}^{CCS}_{i,}= [CCS^{(\mathbf{U})}(i,1), CCS^{(\mathbf{U})}(i,2),...,CCS^{(\mathbf{U})}(i,n)] \end{equation} \fi \textbf{ Popularity based Cluster Consensus Similarity ($PCCS$)}: The proposed CCS\_norm method re-scales the similarity vectors for all destinations to the same range [0,1] so that they are comparable between destinations. CCS\_norm can help to avoid the situation that popular destination dominates the final similarity values when merging with less popular destinations. However, this brings another problem. Popular destinations are searched by most people, which means there are more data and we have more confidence on the similarity vector for popular destinations. For example, if unpopular destination $i$ has been searched by 2 users among 1 million users and another unpopular destination $j$ has been searched by another 2 users, the similarity between $i$ and $j$ is 0. However, the fact can be that they are quite similar, we just don't have enough data to support their similarity. Hence, we have more confidence for the similarity vector of popular destinations. With this intuition, the $PCCS$ is proposed based on the $CCS_{norm}$: \begin{equation}\label{PCCS} \textbf{S}^{PCCS}_{i,}= \frac{1}{1+e^{p_i- \textbf{S}^{CCS_{norm}}_{i,}}} \end{equation} where $p_i$ is the popularity of destination $i$, defined on $b_i$, the rank of destination $i$ based on the number of searches: \begin{equation}\label{popularity} p_i= 1-w\times \frac{b_i}{m} \end{equation} Here, $w \in (0,1)$ is a parameter to control the difference between the similarity values for popular and unpopular destinations. This allows a trade-off between putting more confidence on popular destinations ($ w > 0$) and putting same confidence for all destinations ($w = 0$). When $w$ is bigger, the popular destination vectors are more weighted. However, too big $w$ may lead to over focus on popular destinations and decrease the recommendation diversity. \section{Experiments} \subsection{Description of datasets} In this work, we use the destination search data. In this dataset, we use anonymized user cookie id to group the searches of the same user together. Users from 5 countries are selected due to business interest. The data contain activity related to search sessions, from which only 3 columns are preserved: the anonymized user cookie id, the searched destination location and the country from which the search was made. This last field is only used to create multiple market specific datasets to preserve cultural differences in destination preferences. \subsection{Protocol of experiments} The main objective of our experiments is to find the most suited similarity measure for search logs (binary user-items interaction). Apart from comparing different measures, the cultural difference and user interest change should also be taken into consideration. \textbf{Culture difference} The travel preferences of a French person, for example, may be different from those of a Chinese person. To deal with this challenge, we took country/culture into account when measuring destination similarity. The destination similarity is measured by the country from which the search was made. \textbf{User interest change} For most countries, user interest can changes over time. For example, summer destinations are usually different from winter destinations. To cope with this challenge, we regularly update the destination similarity to adapt to this shift in user interest. In the previous solution, recent two months data are used to calculate the similarity matrix and the results are updated weekly. \textbf{Training} For the training procedure, we have data from 5 countries. For each country, 10 time periods are selected across 2019 and 2020. The three proposed methods $CCS$, $CCS_{norm}$ and $PCCS$ are compared to 4 widely used similarity measures in the literature: Cosine similarity, Pearson similarity, Jaccard similarity and Kulsinski similarity. Like Jaccard similarity, Kulsinski similarity is also a measure for binary vectors. It is less popular, but tested in many different fields \cite{lewis2019data,smailagic2018medal,levine2017acquiring} and achieving some good results \cite{vinayan2018amritanlp}. For $PCCS$ method, different $w$ values are tested and the best $w$ is selected. \textbf{Testing} The similarity matrix is calculated from the 8 weeks of data, and tested on the data of the following week. During the test phase, we randomly mask one searched destination for each user, and use the rest of the searched destinations to predict the masked destination. To realize this, the average of the rest searched destinations' similarity vectors is calculated. Then, we check if the masked destination is in the list of top 5 most similar destinations. \section{Results} \subsection{Comparison of different similarity measures} To compare seven similarity measures, the experiments on 5 countries data with 10 time periods for each country have been done. The means and standard deviations of the top 5 accuracy (relative) over 10 periods for each method are shown in Table \ref{table1}. \iffalse \begin{figure}[htbp] \centering \includegraphics[width=0.49\textwidth]{f1.PNG} \caption{The experimental results of seven similarity measures on 5 countries' data. X-axis is the country. Y-axis is the average top 5 accuracy. } \label{fig:f1} \end{figure} \fi \begin{table*}[htbp] \centering \caption{The experimental results on five countries data, with ten time periods for each country. The mean and standard deviation of top 5 accuracy (relative) is shown. The baseline is Pearson method, and the numbers in this table show the improvement against Pearson method.} \label{table1} \begin{adjustbox}{width=0.75\textwidth} \begin{tabular}{|l |p{1cm}| p{1.cm}| p{1.cm}| p{1.2cm}|p{1.cm}| p{1.1cm}|p{1.1cm}|} \hline & Pearson & Cos & Jaccard & Kulsinski & CCS & $CCS_{norm}$ & PCCS \\ \hline Country1 & $baseline$ & $2.22\% \pm 0.80$ & $2.90\% \pm 1.13$ & $9.38\% \pm 2.06$ & $9.39\% \pm 1.84$ & $9.38\% \pm 1.97$ & $\mathbf{9.89}\% \pm 1.82$ \vspace*{0.0mm} \\ Country2 & $baseline$ & $2.10\% \pm 1.21$ & $2.49\% \pm 1.57$ & $4.39\% \pm 1.86$ & $4.24\% \pm 1.93$ & $5.09\% \pm 1.81$ & $\mathbf{5.78}\% \pm 1.90$ \vspace*{0.0mm} \\ Country3 & $baseline$ & $1.36\% \pm 0.28$ & $2.29\% \pm 0.75$ & $4.70\% \pm 1.39$ & $4.51\% \pm 1.36$ & $4.96\% \pm 1.29$ & $\mathbf{5.71}\% \pm 1.36$ \vspace*{0.0mm} \\ Country4 & $baseline$ & $2.92\% \pm 0.42$ & $2.81\% \pm 0.53$ & $5.14\% \pm 0.51$ & $5.02\% \pm 0.48$ & $5.80\% \pm 0.51$ & $\mathbf{6.47}\% \pm 0.48$ \vspace*{0.0mm} \\ Country5 & $baseline$ & $4.22\% \pm 0.80$ & $3.90\% \pm 0.77$ & $5.81\% \pm 0.48$ & $5.55\% \pm 0.45$ & $6.32\% \pm 0.41$ & $\mathbf{7.18}\% \pm 0.44$ \vspace*{0.0mm} \\ \hline Avg rank &7.00 &5.60 &5.40 &3.20 &3.60 &2.20 &1.00 \\ \hline Avg improvement &0.00 &2.56\% &2.88\% &5.89\% &5.74\% &6.31\% &7.01\% \vspace*{0.5mm} \\ \hline \end{tabular} \end{adjustbox} \end{table*} The results illustrate that among all five countries, $PCCS$ always has the best performance, followed by $CCS_{norm}$. $Pearson$ is always the worst performing method among all similarity measures. Table \ref{table1} gives more details on comparison. On average, the proposed $PCCS$ improves the performance of $Pearson$ by 7.01\%. Compared to the previous selected method $Cos$, $PCCS$ gains an improvement of 4.44\% on average. From the results of average ranking in Table \ref{table1}, it can be observed that two best performing methods are $PCCS$ and $CCS_{norm}$, while the widely used similarity measures $Cos$ and $Pearson$ are the worst. One reason is that these two measures are not designed for the comparison for binary vectors. \iffalse From Table \ref{table1}, it can be seen that Country2, Country3, Country4 and Country5 have similar average performances for each similarity measure (e.g., $PCCS$ has around 54.30\% accuracy on all these four countries). However, the performance on RU is obviously worse (e.g., $PCCS$ has 48.67\% accuracy on RU). One possible reason is that compared to GB, FR, ES and DE, we have fewer user data from RU. For RU, $PCCS$ even has an improvement of 9.89\% over $Pearson$ and an improvement of 7.67\% over $Cos$. Compared to the performance gap on other countries, $PCCS$ has bigger improvement on RU, which may indicate that $PCCS$ is more suited to small data problems than other similarity measures. \fi The Wilcoxon-Holm post hoc test with Critical Differences (CD) is also done to have an overall statistical comparison. The statistical test result is shown in Figure \ref{fig:cd}. Generally speaking, all three $CCS$ based methods are significantly better than $Pearson$ and $Cos$. The proposed $PCCS$ is significantly better than all other 6 similarity measures. \begin{figure}[htbp] \centering \includegraphics[width=0.49\textwidth]{cdtest.PNG} \caption{The CD diagram according to the Wilcoxon-Holm post hoc test result when alpha is 0.00001. Methods connected with bold black line have no significant difference. } \label{fig:cd} \end{figure} \subsection{Comparison of different training size} \begin{table*}[htbp] \centering \caption{The experimental results with four weeks of training data instead of eight weeks. The mean and standard deviation of top 5 accuracy (relative) is shown. The baseline is Pearson method, and the numbers in this table show the improvement against Pearson method.} \label{table2} \begin{adjustbox}{width=0.75\textwidth} \begin{tabular}{|l |p{1cm}| p{1.cm}| p{1.cm}| p{1.2cm}|p{1.cm}| p{1.1cm}|p{1.1cm}|} \hline & Pearson & Cos & Jaccard & Kulsinski & CCS & $CCS_{norm}$ & PCCS \\ \hline Country1 & $baseline$ & $2.21\% \pm 1.22$ & $2.89\% \pm 1.50$ & $8.88\% \pm 1.51$ & $8.81\% \pm 1.43$ & $8.87\% \pm 2.46$ & $\mathbf{9.45}\% \pm 2.04$ \vspace*{0.0mm} \\ Country2 & $baseline$ & $1.91\% \pm 0.70$ & $2.54\% \pm 0.90$ & $4.37\% \pm 1.41$ & $4.29\% \pm 1.38$ & $4.99\% \pm 1.25$ & $\mathbf{5.70}\% \pm 1.34$ \vspace*{0.0mm} \\ Country3 & $baseline$ & $1.20\% \pm 0.29$ & $2.26\% \pm 0.80$ & $5.01\% \pm 1.41$ & $4.99\% \pm 1.40$ & $5.31\% \pm 1.36$ & $\mathbf{6.01}\% \pm 1.40$ \vspace*{0.0mm} \\ Country4 & $baseline$ & $2.85\% \pm 0.36$ & $2.86\% \pm 0.59$ & $5.62\% \pm 0.73$ & $5.52\% \pm 0.74$ & $6.24\% \pm 0.74$ & $\mathbf{6.90}\% \pm 0.70$ \vspace*{0.0mm} \\ Country5 & $baseline$ & $3.79\% \pm 0.94$ & $3.87\% \pm 0.93$ & $5.97\% \pm 0.64$ & $5.76\% \pm 0.66$ & $6.40\% \pm 0.58$ & $\mathbf{7.26}\% \pm 0.66$ \vspace*{0.0mm} \\ \hline Avg rank &7.00 &6.00 &5.00 &2.80 &4.00 &2.20 &1.00 \\ \hline Avg improvement &0.00 &2.39\% &2.88\% &5.97\% &5.87\% &6.36\% &7.07\% \vspace*{0.5mm} \\ \hline \end{tabular} \end{adjustbox} \end{table*} In the previous section, all the experiments use the previous eight weeks of data as training data for the following week of test data. However, using eight weeks of training data for each market and updating the results weekly can be very time consuming and computationally expensive. In this section, we try to reduce the training data to four weeks only to see to which extend the prediction performance is affected. The experimental results are presented in Table \ref{table2}. Similar to the analysis in the previous section using eight week training data, the results trained on only four weeks data also show that the proposed $PCCS$ is the best performing method while $Cos$ and $Pearson$ are the worst. On average, $PCCS$ increases the performance of $Pearson$ by 7.06\% and increases the performance of $Cos$ by 4.67\%, which is also similar to the conclusion in the previous section. However, compared to the results in Table \ref{table1}, the average performance of each similarity measure is not strongly impacted by the reduction of training data size. The method that has the biggest difference is $Cos$, with a reduction of 0.65\% of accuracy. $PCCS$ has a performance reduction of 0.42\% on average. But when we look into each country, it can be found that the differences on Country2, Country3, Country4 and Country5 are negligible, but the performance on Country1 has a drop around 1.76\% (this analysis is based on the comparison of the absolute performances, which is not disclosed). One possible reason can be that, the data coverage in Country1 is not very good and the data size is much smaller than other countries, which can also explain this big performance drop on Country1 than other countries. The objective of this experiment is to answer the research question: how many data are enough to have a good prediction performance but at the same time keep the computational efficiency. The experimental results show that for countries with good data coverage, four weeks training data are enough compared to eight weeks training data. However, for countries without a proper data coverage, it's better to use eight weeks training data. \iffalse \subsection{Comparison of updating frequency} In the previous section, one possibility of improving the computational efficiency has been discussed in terms of reducing the training data size. Another way to reduce the computational cost is to update the results less frequently. \fi \subsection{Comparison of data from 2019 and 2020} Due to the crisis of covid-19, the data volume of 2020 is much smaller than the volume of 2019 data. The question in this section is: should we use data from 2019 or 2020 as training data for the prediction of 2020? To answer this question, we choose the first week of June 2020 as the test data and use the data from April and May 2019 and 2020 respectively as training data to compare their prediction performance. We choose this time period is because there were confinement in Europe during April and May 2020, and the data volume is lower compared to the same period in 2019. The experimental results are shown in Figure \ref{fig:vs}. Globally speaking, 2020 data have better perdition performance than 2019 data even though the 2020 data volume is much smaller. The smallest difference happens in Country3 with 1.05\% accuracy gap, while the biggest difference happens in Country4 with 4.75\% performance gap. One possible reason may be the change of user interest. Another possible reason may be that the covid-19 crisis has changed user's behavior (e.g. more local travel than international travel). \begin{figure}[htbp] \centering \includegraphics[width=0.45\textwidth]{20192020.PNG} \caption{The comparison of the prediction performance between 2019 data and 2020 data. X-axis is the country, Y-axis is the accuracy.} \label{fig:vs} \end{figure} \section{Conclusion} In this work, we have presented the challenges of recommending destinations in the travel industry and the differences of recommending items other e-commerce sectors. The challenges of extra sparseness, dispersed user history, change of user interest, few direct or indirect feedbacks make it much harder to understand travelers and make recommendations more difficult than for other online consumers. To tackle these challenges and to understand travelers, we decide to measure the destination similarity in terms of traveler's implicit user interest. There are many similarity measures proposed in the field of collaborative filtering. However, most of these measures are designed for user-item interaction with ratings. Hence we propose a new similarity measure for user-item interaction without ratings to deal with the challenges in travel industry. The proposed $PCCS$ is inspired from Random Forest Similarity to take user interest into account. The destination popularity is added to adjust the magnitude of each similarity vector so that the similarity vectors of different destinations can be fused correctly. After comparing seven different similarity measures on real world data, the proposed $PCCS$ is proved to be the best solution. However, there are some improvements can be done. Firstly, we can expand single-source to multi-sources. Users can be limited by the knowledge of destinations, which means that there are destinations users never search because they don't know them not because they are not interested in them. Search logs can only reflect the similarity among destinations known to users. In this way, apart from the implicit user interest from search logs, other sources can be added such as destination images or descriptions. By using multi-source data, the implicit user interest can be fused with the destination information to provide a more meaningful similarity measure. Secondly, more user information and session information can be collected to provide a more personalized similarity measure. But more input information may also limit its use in real world use cases. \iffalse Thirdly, a unsymmetrical similarity can be proposed instead of a symmetrical one. Most similarity or distance measures are symmetrical due to the thinking that: if A is similar to B, B is similar to A. \fi \bibliographystyle{ieeetr}
{ "attr-fineweb-edu": 2.148438, "attr-cc_en_topic": 0, "domain": "arxiv" }
BkiUbds5qsMAIoJibUts
\section{Introduction} Association football (hereafter football) is by far the most popular sport globally with almost every country in the world having a national team and often a multitude of domestic leagues. In England alone, there are over 100 professional teams. The vast popularity of the sport has led to demand for predictive information regarding the outcomes of matches, competitions and leagues, often driven by the desire to gamble on them. In recent years, a vast number of betting markets have opened up, providing opportunities for those with useful predictive information and/or insight to make a profit. Betting strategies are usually underpinned with predictive models that attempt to predict the probability of different outcomes, thus informing which bets to take. \par Whilst early attempts at building predictive models focused on the number of goals scored by each team in previous matches, in recent years it has become clear that there is predictive value in other match events such as the number and nature of shots and corners taken by each team (\cite{wheatcroft2019evaluating,wheatcroft2020}). The key insight is that the number of goals scored by each team is subject to a higher level of chance than events such as shots and corners, which can be more reflective of the quality of the performances of the teams. Take, for example, a match in which the home team takes a large number of shots but is unable to score, whilst the away team takes few shots and happens to score from one of them and win the match. A predictive model that only takes goals into account would not reflect the fact that the home team dominated the match and may wrongly downgrade the forecast probability that they win future matches. \par In this paper, we consider a large number of football matches in which match data such as the number of shots, shots on target and corners is provided. In order to build a set of match forecasts, we are then interested in (i) the number of shots taken by each team and, (ii) the probability that any shot results in a goal. Given these two ingredients, we can then predict the number of goals scored by each team in a match. For (i), we make use of a recently developed methodology which uses a rating system to predict the number of shots taken by each team. For (ii), we propose a simple model to predict the probability of scoring from a shot. The latter is tested on over one million shots from European football matches in 22 different leagues and, when calibrated, is shown to be capable of producing skillful probabilistic forecasts. Forecasts of the number of shots are then combined with forecasts for the probability of shot success to construct forecasts of both the match outcome and whether the total number of goals in a match will exceed $2.5$. \par The focus of this paper is on assessing the probability of a team scoring conditioned on them taking a shot at goal. In fact, the question of how to assess the probability of scoring from a shot is one that has received a lot of attention in the football forecasting literature. However, the focus has almost exclusively been on factors such as the location and nature of the shot, position of players etc. Here, we do not attempt to take this information into account and rather estimate the probability of shot success on past data, focusing on the strengths of the teams. This is not simply a limitation of our methodology but a property of the question we are trying to address. We look to estimate the probability of shot success before the match has started and therefore we cannot condition on the specific nature of each shot. Whilst we can attempt to predict the number of shots taken by each team, it is not realistic to be able to predict the nature of those shots. The output of our model is therefore a fixed forecast probability of shot success for each team in a match. \par Typically, the nature of football prediction models is that each team involved in a league or cup competition is given a `rating'. These rating systems often take one of two different approaches. In the first, each team's rating is a variable which is updated as new information emerges. The nature of those updates are governed by a small number of parameters which determine aspects such as the effect of the result of the last match or the margin of victory/defeat. We refer to these as \emph{Variable Rating Systems}. The other category assigns each team one or more parameters which determine their strength and these are usually estimated using maximum likelihood (\cite{ley2019ranking}). In that case, a large number of parameters are required to be estimated simultaneously and fairly sophisticated optimisation algorithms are often needed. We deviate from the terminology used by \cite{ley2019ranking}, who refer to such models as `Maximum Likelihood models' and, instead, use the more general term \emph{Parametric Rating Systems}. In this paper, we make use of both approaches. Our shot probability model (the novel model in this research) is a Parametric Rating System which assigns attacking and defensive ratings to each team and these are estimated using maximum likelihood estimation. In addition, we make use of a Variable rating system in the form of Generalised Attacking Performance (GAP) ratings which estimate the number of shots achieved by each team (\cite{wheatcroft2020}). \par There is a large body of literature proposing approaches to building ratings systems for sports teams or players. By far the most well known approach is the Elo rating system which has a long history in sport and has inspired many other systems. Elo ratings were initially designed with the intention of providing rankings for chess players and the system was implemented by the United States Chess Federation in 1960 (\cite{elo1978rating}). The Elo system assigns ratings to each player or team, which are then used to estimate probabilities of the outcome of a game. The rating of each player is then updated to take the result of the game into account. Whilst the system was initially designed for cases in which the outcomes are binary (i.e. there are no ties), more recently, it has been extended to account for draws so that they are applicable to sports such as football, in which draws are common. After each match, the system takes the difference between the estimated probabilities and the outcome (assigned a one, a zero, or $0.5$ for a draw) and adjusts the ratings accordingly. The system in its original form therefore does not account for the \emph{size} of a win. Elo ratings have been demonstrated in the context of football and shown to perform favourably with respect to six other rating systems (\cite{hvattum2010using}). FIFA switched to an Elo rating system in 2018 to produce its international football world rankings (\cite{Fifa_elo}). Elo ratings are also common in other sports such as Rugby League (\cite{carbone2016rugby}), American Football (\cite{538}) and Basketball (\cite{538NBA}). \par Whilst Elo ratings have been an important part of sports prediction for many years, they are limited in that they do not directly take home advantage into account. This is important because home advantage has a very big effect in football (\cite{pollard2008home}). Adjustments have been made to the Elo rating system to account for this but this typically consists of a single parameter that doesn't account for variation in the home advantage of different teams (\cite{538NBA,538}). Rating systems such as the GAP rating system used in this paper distinguish between home and away performances by giving separate ratings for each. This is also true of the pi-rating system introduced by \cite{constantinou2013determining}, for example. \par Variable Ratings Systems such as the GAP rating system assign ratings to each team which are updated each time they are involved in a match. Similar approaches have been taken by a large number of authors. For example, \cite{maher1982modelling} assigned fixed ratings (i.e. not time varying) to each team and used them in combination with a Poisson model in order to estimate the number of goals scored. A similar approach was used by \cite{dixon1997modelling} to estimate match probabilities. It was shown that the forecasts were able to make a statistically significant profit for matches in which there was a large discrepancy between the estimated probabilities and the probabilities implied by the odds. The Dixon and Coles model was modified by \cite{dixon2004value} who were able to demonstrate a profit using a wider range of published bookmaker odds. A Bayesian model which produced time-varying attacking and defensive ratings was defined by \cite{doi:10.1111/1467-9884.00243}. There are many other examples of systems that use attacking and defensive ratings and these can be found in, for example, \cite{karlis2003analysis}, \cite{lee1997modeling} and \cite{baker2015time}. A number of authors have taken a Parametric Rating System approach to modelling football matches. An overview can be found in \cite{ley2019ranking} in which a Bivariate Poisson model is shown to produce the most favourable results according to the Ranked Probability Score (RPS). A profitable betting strategy has also been demonstrated by \cite{koopman2015dynamic} using a Bivariate Poisson model. The approach taken by \cite{ley2019ranking}, in which less recent matches are weighted lower than more recent matches, provides inspiration for our shot success model. \par Related to the prediction of shot success is the concept of `expected goals' which has been growing significantly in prominence in football analysis in recent years. The rationale is that the nature of a team's attempts at goal can be used to estimate the number of goals they would be `expected to score' in a match. For a particular shot, the `expected' number of goals is simply the estimated probability of scoring given characteristics such as the location, angle to goal, position of defenders etc. As a result, a great deal of effort has been made to model the probability of scoring based on information of this kind. For example, \cite{ruiz2015measuring} attempt to evaluate the efficacy of football teams in terms of converting shots into goals by taking account of characteristics such as the location and type of shot (e.g. whether the shot was taken from open play). \cite{gelade2014evaluating} built a model to evaluate the performance of goalkeepers by taking the factors such as the location, deflections and swerve of the ball into account. Many other papers have been written on the subject and a good overview can be found in \cite{eggels2016expected} and \cite{rathke2017examination} who also present their own models. \par The main aim of this paper is to define and demonstrate a model for the probability of a team scoring from a shot in a football match. To our knowledge, whilst significant effort has been made to estimate probabilities of scoring given the specific nature of a shot (such as location), none of these approaches attempt to provide predictions of shot success before the match and cannot be used for this purpose. In short, the aim of those models is to predict the probability of scoring from a particular shot given various characteristics, whilst the purpose of our model is to predict the probability of scoring given the strengths of the teams involved and the location of the match (i.e. which team is at home). The latter can easily be combined with predictions of the number of shots achieved to predict the overall number of goals for each team. \par This paper is organised as follows. In section~\ref{section:data}, we describe the data set used to demonstrate our model. In section~\ref{section:model}, we describe our model of shot success and assess its performance in terms of forecast skill and reliability in 22 different football leagues. In section~\ref{section:predicting_match_outcomes}, we demonstrate the use of our shot success model in combination with the GAP rating system to provide forecasts of match outcomes and whether the total number of goals in a match will exceed 2.5. Section~\ref{section:discussion} is used for discussion. \section{Data} \label{section:data} In this paper, we make use of the football data repository available at \url{www.football-data.co.uk}, which supplies match-by-match data for 22 European Leagues. For each match, a variety of statistics are provided including the number of shots, shots on target and corners. In addition, odds data from multiple bookmakers are provided for the match outcome market, the over/under 2.5 goal market and the Asian Handicap match outcome market. For some leagues, match statistics are available from the 2000/2001 season onwards whilst, in others, these are available for later seasons only. Since we require shot data, only matches from the 2000/2001 season onwards are considered. A summary of the data used in this paper is shown in table~\ref{table:Leagues_available}. Here, the total number of matches since 2000/2001, the number of matches in which shots and corner data are available and the number of these excluding a `burn-in' period for each season are shown. The `burn-in' period is simply the first six matches of the season for the each team. This is excluded from forecast evaluation to allow the forecasts time to `learn' sufficiently about the strengths and weaknesses of the teams in a given season. All leagues include data up to and including the end of the 2018/19 season. \par \begin{table}[!htb] \begin{center} \begin{tabular}{|l|rrr|} \hline League & No. matches & Match data available & Excluding burn-in \\ \hline Belgian Jupiler League & 5090 & 480 & 384 \\ English Premier League & 9120 & 7220 & 5759 \\ English Championship & 13248 & 10484 & 8641 \\ English League One & 13223 & 10460 & 8608 \\ English League Two & 13223 & 10459 & 8613 \\ English National League & 7040 & 5352 & 4642 \\ French Ligue 1 & 8718 & 4907 & 4126 \\ French Ligue 2 & 7220 & 760 & 639 \\ German Bundesliga & 7316 & 5480 & 3502 \\ German 2.Bundesliga & 5670 & 1057 & 753 \\ Greek Super League & 6470 & 477 & 381 \\ Italian Serie A & 8424 & 5275 & 4439 \\ Italian Serie B & 8502 & 803 & 680 \\ Netherlands Eredivisie & 5814 & 612 & 504 \\ Portuguese Primeira Liga & 5286 & 612 & 504 \\ Scottish Premier League & 5208 & 4305 & 3427 \\ Scottish Championship & 3334 & 524 & 297 \\ Scottish League One & 3335 & 527 & 298 \\ Scottish League Two & 3328 & 525 & 297 \\ Spanish Primera Liga & 8330 & 5290 & 4449 \\ Spanish Segunda Division & 8757 & 903 & 771 \\ Turkish Super lig & 5779 & 612 & 504 \\ \hline Total & 162435 & 77124 & 62218 \\ \hline \end{tabular} \caption{Data used in this paper.} \label{table:Leagues_available} \end{center} \end{table} \section{A model for predicting shot success} \label{section:model} We propose a simple model for predicting the probability of a football team scoring from a shot at goal. We are primarily interested in estimating the probability pre-match and therefore we do not take into account any specific information about the location or nature of a shot. In short, in a match between two teams, we ask the question `If a particular team takes a shot, what is the probability that they score as a result?' \par Consider a football league with $T$ teams that play each other over the course of a season. Let $a_{1},...,a_{T}$ and $d_{1},...,d_{T}$ be attacking and defensive ratings respectively for each team. In a match with the $i$-th team at home to the $j$-th team, the forecast probability of a home goal given a home shot is given by \begin{equation} p(G_{h})=\frac{1}{1+exp\{-m_{h}\}} \end{equation} where $m_{h}=c+h+\frac{1}{2}(a_{i}+d_{j})$. Here, $c$ is a constant parameter and $h$ a parameter that allows for home advantage (if any). \par The forecast probability of an away goal given an away shot is given by \begin{equation} p(G_{a})=\frac{1}{1+exp\{-m_{a}\}} \end{equation} where $m_{a}=c-h+\frac{1}{2}(a_{j}+d_{i})$. \par Here, we have a total of $2T+2$ parameters to be estimated. We take a maximum likelihood approach with a slight adjustment such that more recent matches are given a higher weight than those that were played longer ago. To do this, we make use of the `half life' approach taken by \cite{ley2019ranking} in which the weighting placed on the $m$-th match is determined by \begin{equation} \label{eq:wtime} w_{time,m}(x_m)=\left(\frac{1}{2}\right)^{\frac{x_m}{H}}, \end{equation} where $x_m$ is the number of days since the $m$-th match was played and $H$ is the `half life', that is the number of days until the weighting halves. The likelihood function, adjusted with the half life parameter, is given by \begin{equation} L=\prod_{m=1}^{M} \phi(p_m,O_m)^{w_{time,m}(x_m)} \end{equation} where \begin{equation} \phi(a,b)=\begin{cases} a & \text{if } b=1, \\ 1-a & \text {if } b = 0. \end{cases} \end{equation} The model requires the simultaneous optimisation of $2T+2$ parameters. In the experiments performed in this paper, we use the `fmincon' function in Matlab and select the `interior point' algorithm which provides a compromise between speed and accuracy. We set the constraints $\sum_{i=1}^{T} a_{i} = 0$ and $\sum_{i=1}^{T} d_{i} = 0$ so that all of the ratings are distributed around zero. All parameters are initialised to zero in the optimisation algorithm. \par \subsection{Forecast skill and reliability} If our forecast model of shot success described in section~\ref{section:model} is to be useful, it is important to show that the forecasts it produces are informative in terms of predicting the probability of scoring from a shot at goal. In this section, we evaluate the performance of the forecasts and examine the effect of the half life parameter. \par To evaluate whether the forecasts are informative at all, we can investigate whether they outperform a very simple system in which forecasts consist of the historical shot success frequency over all past matches. If our forecasts are able to outperform this simple system, we have shown there is value in taking into account the strengths of the teams involved. \par In weather forecasting, the simple forecasting system described above is often called the `climatology' and we adopt this terminology. The climatology is commonly used as a benchmark for the skill of a set of forecasts and if the forecasts cannot outperform the climatology, the forecast system is of little value (\cite{katz2005economic}). Formally, in our case, the climatological probability $p(G)$ of scoring given a shot at goal takes the form \begin{equation} \label{eq:clim} p_c=\frac{\sum_{m=1}^M G_{m}}{\sum_{m} S_{m}} \end{equation} where $G_m$ and $S_m$ are the total number of goals and shots respectively in the $m$-th match and $M$ is the number of past matches considered. \par Probabilistic forecasts are best evaluated using scoring rules. The Ignorance and Brier scores, described in appendix~\ref{section:scoring_rules}, are two examples of scoring rules that are suitable for evaluating binary probabilistic forecasts and we consider the skill according to both. For context, in each case, the score is given with that of the climatology subtracted such that, if the relative score is negative, the forecasts can be considered to be skillful. \par The mean Ignorance and Brier scores of the forecasts relative to the climatology are shown as a function of the half life parameter in figure~\ref{figure:ign_RPS_function_halflife_just_unblended}. Here, the forecast skill under both scoring rules is positive for all values of the half life parameter implying that the forecasts do not outperform the climatology, on average. \par \begin{figure}[!htb] \centering \includegraphics[scale=0.9]{ign_RPS_function_halflife_just_unblended.png} \caption{Mean Ignorance (blue line, left axis) and Brier (red line, right axis) scores for forecasts of the probability of scoring from a shot at goal, given relative to the climatology as a function of the half life parameter.} \label{figure:ign_RPS_function_halflife_just_unblended} \end{figure} To investigate why the forecasts are unable to outperform the climatology, we can make use of reliability diagrams to attempt to diagnose whether there are any systematic biases. Reliability diagrams are used to visualise the `reliability' of a set of forecasts, that is whether the observed frequencies are consistent with the forecast probabilities (\cite{brocker2007increasing}). The forecasts are divided into `bins' and the mean forecast probability within each bin is plotted against the relative frequency of the outcomes. If the points are close to the diagonal, the forecasts are `reliable'. We make use of the approach taken by \cite{brocker2007increasing} in which `consistency bars' are added which provide a 95 percent interval for the relative frequency under the assumption that the forecasts are perfectly reliable (that is, the outcomes occur at the rate implied by the forecasts). \par Reliability diagrams for different values of the half life parameter $H$ are shown in figure~\ref{figure:reliability_shotprobs}. Here, in all cases, it is clear that the forecasts are overdispersed. The highest forecast probabilities tend to correspond to far lower relative frequencies than would be expected if they were reliable, whilst the lowest forecast probabilities tend to correspond to much higher relative frequencies than expected. To understand why we see the above pattern, it is useful to recall how the forecasts are formed. The model assigns attacking and defensive parameters to each team as well as constant and home advantage parameters. This means that a large number of parameters are required to be optimised simultaneously and this risks overfitting, in which the model does not generalise well out of sample. For example, suppose a team happens to score with a large proportion of its shots in recent matches. This will be reflected in their rating but may be unsustainable in the longer term, leading to an overestimate of the probability of scoring from a shot. Conversely, a team that happens to have scored from a low proportion of its shots may have its probability of scoring in future matches underestimated. \par To attempt to deal with overfitting, we adjust the forecasts using two different approaches. In the first, we attempt to calibrate the forecasts using Platt Scaling, a simple approach in which the original forecast is used as an input to a logistic regression with a `calibrated' forecast as the output (\cite{platt1999probabilistic}). The adjusted forecast $\tilde{p}$ is therefore given by \begin{equation} \tilde{p}=\frac{1}{1+exp(A+bp)}, \end{equation} where $p$ is the original forecast and $A$ and $b$ are parameters to be optimised over past forecasts and outcomes. We use Maximum Likelihood to optimise the parameters over all available past forecasts. \par \begin{figure}[!htb] \centering \includegraphics[scale=0.75]{reliability_shotprobs.png} \caption{Reliability diagrams for forecasts of shot success for different values of the half life parameter. The consistency bars show the region in which there is a 95 percent probability of the relative frequencies falling if the forecasts are perfectly reliable.} \label{figure:reliability_shotprobs} \end{figure} Our second approach is `Blending' (\cite{brocker2008ensemble}). Under this approach, the adjusted forecasts are a weighted average of the original forecast and the climatology (that is the historical average, see equation~\ref{eq:clim}). Formally, the blended forecast is given by \begin{equation} \tilde{p}=\alpha p +(1-\alpha) p_c \end{equation} where $p$ is the original forecast, $p_c$ is the climatology and $\alpha$ is a parameter to be estimated. Parameter estimation is done by minimising the mean ignorance score over all past forecasts (note this is equivalent to the Maximum Likelihood approach used in Platt Scaling). \par The mean Ignorance and Brier scores (both shown relative to that of the climatology) of the Platt scaled and blended forecasts are shown in figure~\ref{figure:ign_RPS_function_halflife_just_blended_and_cal} (note the change in scale on the $y$ axis from figure~\ref{figure:ign_RPS_function_halflife_just_unblended}). Here, unlike the original forecasts, both the Platt scaled and blended forecasts produce negative mean Ignorance and Brier scores and are therefore able to outperform the climatology, demonstrating forecast skill. \par It is clear that the choice of the half life parameter is crucial in determining the skill of the forecasts. If it is too high, matches that were played a long time ago and have low relevance to the current time are given too much weight. If it is too low, recent matches are given too little weight and the ratings assigned to each team are not robust. Here, under both scores and both approaches, the optimal half life parameter (out of those considered) is 60 days indicating that relatively recent matches play the biggest role in determining the probability of scoring. It is also clear that the blending approach consistently outperforms Platt Scaling. Reliability diagrams for the forecasts produced under Blending and Platt Scaling with a half life parameter of 60 days are shown in figure~\ref{figure:reliability_shotprobs_cal_blend_side_by_side}. Under both approaches, it is clear that the effect is to moderate the forecasts by moving them closer to the climatology, creating improved reliability and skill. \par \begin{figure}[!htb] \centering \includegraphics[scale=0.9]{ign_RPS_function_halflife_just_blended_and_cal.png} \caption{Mean Ignorance (blue line, left axis) and Brier (red line, right axis) scores for Blended (solid lines) and Platt Scaled (dashed lines) forecasts of the probability of scoring from any shot, given relative to the climatology, as a function of the half life parameter.} \label{figure:ign_RPS_function_halflife_just_blended_and_cal} \end{figure} \begin{figure}[!htb] \centering \includegraphics[scale=0.65]{reliability_shotprobs_cal_blend_side_by_side.png} \caption{Reliability diagrams for forecasts of shot success adjusted using blending (left) and Platt Scaling (right) with a half life parameter of 60 days. The consistency bars show the region in which there is a 95 percent probability of the relative frequencies falling if the forecasts are perfectly reliable.} \label{figure:reliability_shotprobs_cal_blend_side_by_side} \end{figure} In summary, the results here show that, when combined with Platt Scaling or Blending, our model is able to make skillful predictions of the probability of scoring from a given shot. Having shown that we are able to construct skillful shot success forecasts, we now investigate whether they are effective in improving the skill of forecasts of match outcomes and whether the total number of goals in a match will exceed 2.5. \par \section{Forecasting match outcomes and total goals} \label{section:predicting_match_outcomes} In this section, we investigate whether our shot success model can be used alongside predictions of the number of shots to make informative probabilistic forecasts for (i) the outcomes of football matches (i.e. whether the match will end as a home win, draw or away win) and (ii) whether the total number of goals will exceed $2.5$ (henceforth `over/under 2.5 goal forecasts'). In each case, we assess both the forecast skill and the profitability when using the resulting forecasts alongside the two betting strategies defined in appendix~\ref{section:betting_strategy}. Given a point prediction of the number of shots and the forecast probability of each of those shots being successful, we can obtain a point estimate for the number of goals scored by each team in a match by simply multiplying them together. To predict the number of shots achieved by each team, we make use of the Generalised Attacking Performance (GAP) rating system proposed by \cite{wheatcroft2020} which has been shown to be a useful predictor variable for producing over/under 2.5 goal forecasts and forecasts of the match outcome (\cite{wheatcroft2019evaluating}). The system is described in detail in appendix~\ref{section:GAP_ratings}. Define a point prediction of the number of goals for the home team in a match to be \begin{equation} \label{equation:exp_goals_h} E_{h}=\hat{S}_{h}P(G_{h}) \end{equation} and, for the away team, \begin{equation} \label{equation:exp_goals_a} E_{a}=\hat{S}_{a}P(G_{a}), \end{equation} where $\hat{S}_{h}$ and $\hat{S}_{a}$ are the predicted number of shots for the home and away teams respectively, and $P(G_{h})$ and $P(G_{a})$ are the predicted probabilities that the home or away team will score given they have taken a shot at goal. Note that $E_{h}$ and $E_{a}$ will usually not be integer values and represent a prediction of the `expected' number of goals achieved by each team. \par For comparison, we can define a point prediction for the number of goals adjusted with the climatological probability such that \begin{equation} \label{equation:exp_goals_h_clim} C_{h}=\hat{S}_{h}p_c \end{equation} and \begin{equation} \label{equation:exp_goals_a_clim} C_{a}=\hat{S}_{a}p_c \end{equation} for the home and away teams respectively where $p_c$ is the climatological probability of shot success (i.e. the probability of a team scoring from a shot regardless of ability). \par We make use of ordered logistic regression to map predictor variables into forecast probabilities for the match outcome. The ordered logistic regression model is chosen because the outcomes of football matches can be considered `ordered'. In a sense, a home win and a draw are `closer together' than a home win and an away win and this is reflected in the parametrisation of the model. The ordered regression model allows $K$ predictor variables to be mapped into forecast probabilities. A sensible choice of predictor variable for the match outcome is the difference in the predicted number of goals scored by each team defined by \begin{equation} \label{equation:exp_goals} V=E_{h}-E_{a}. \end{equation} We use logistic regression to build probabilistic forecasts of whether the total number of goals in a match will exceed $2.5$. Since this is a binary event, logistic regression is a suitable model for mapping predictor variables to probabilities. Since we are interested in the total number of goals scored in a match, we use as a predictor variable the sum of the predicted number of goals scored by the home and away teams. The predictor variable is therefore \begin{equation} \label{equation:total_goals} V=E_{h}+E_{a}. \end{equation} For our model of shot success to be effective in terms of predicting the match outcome and whether the number of goals will exceed 2.5, our predictor variables should be more informative than when $E_{h}$ and $E_{a}$ are replaced with $C_{h}$ and $C_{a}$, that is the case in which the probability of shot success is taken to be that of the climatology. This comparison is the main focus of our experiment. \par In addition to the predictor variables specified above, we consider the use of odds-implied probabilities as additional predictor variables. The rationale of this is that we may be able to `augment' the substantial information in the odds with additional information to provide more skillful forecasts. \par \subsection{Experimental design} We make use of the data described in section~\ref{section:data} to produce probabilistic forecasts both for the match outcome and for whether the total number of goals in a match will exceed $2.5$. We do this for each match in which both shot data and the relevant odds are available. This means we have a total of 62218 forecasts of the match outcome and 53447 over/under 2.5 goal forecasts. We produce two sets of forecasts in each case. First we include in the model only our chosen predictor variable based on the predicted number of goals. Second, we include an odds-implied probability as an additional variable. In the match outcome case, this is the odds-implied probability of a home win and, in the total goals case, the odds-implied probability that the total number of goals will exceed $2.5$. \par In all cases, the forecasts for each match are constructed using regression parameters fitted with least squares estimation on all available matches in all leagues up to the day before the match is played. In order to allow the forecasts to have sufficiently learned about the quality of the teams, we follow the approach of \cite{wheatcroft2020} and allow a `burn-in' period, thus excluding from calculations of forecast skill and profit the first six matches of the season for each team. \par Since we are primarily interested in the potential value added by our shot success model, our comparison of interest is between the forecasts produced using as predictor variables the predicted number of goals calculated using our shot success model (that is formed using equations~(\ref{equation:exp_goals_h}) and~(\ref{equation:exp_goals_a})), and those produced using the climatological shot success probability defined in equation~(\ref{eq:clim}). The latter case includes no information about the strength of the teams and therefore the extent to which it is able to be outperformed by our shot success model demonstrates its value to the forecasts. We therefore present the skill of the forecasts formed using our shot success forecasts `relative' to those formed using the climatological probability of shot success. This is done by subtracting the skill of the latter from the former such that negative values imply better relative skill. \par We also compare the betting performance under the Level Stakes and Kelly betting strategies described in section~\ref{section:betting_strategy}. To calculate the overall profit, we use the maximum odds available from the BetBrain odds-comparison website, which are included in the `football-data' data set. \par \subsection{Forecasts of the match outcome} We begin by considering forecasts of the match outcome. The mean relative Ignorance and Ranked Probability Scores for the case in which the odds-implied probability is not included as an additional predictor variable are shown as a function of the half life parameter $H$ in the top panel of figure~\ref{figure:skill_and_profit_func_halflife_without_odds}. As described above, in both cases, the skill is given relative to (i.e is subtracted from) that of forecasts formed using the predicted number of goals adjusted using the climatological probability of shot success. Since both the mean relative ignorance and RPS are negative, the shot success forecasts are shown to add skill to the match outcome forecasts for all considered values of the half life. \par The overall profit under the Level Stakes (magenta) and Kelly (green) betting strategies are shown in the lower panel. The dashed line shows the overall profit for the case in which the predicted number of goals is calculated using the climatological probability of the rate of shot success. Interestingly, despite the fact that our shot success model improves forecast skill, the overall profit is slightly decreased and there is therefore no evidence of improved gambling performance under either strategy. Both forecast skill and the overall profit are optimised by setting the half life parameter to 30 days, implying that shot success in relatively recent matches is the most informative in terms of the match outcome. \par Figure~\ref{figure:skill_and_profit_func_halflife_with_odds} is the same as figure~\ref{figure:skill_and_profit_func_halflife_without_odds} but for the case in which the odds-implied probability of a home win is included as an additional predictor variable. Here, both the relative ignorance and RPS are positive, implying that our model of shot success is counterproductive. Similarly, there is a reduction in profit under both betting strategies. We can provide a speculative answer as to why this is the case. Betting odds are complex and reflect a great deal of information brought together by participants in the market. We suggest that differences in the probability of shot success are efficiently reflected in the odds (punters may account for efficient goal scorers/goalkeepers etc.) and therefore, by including this information, there is an element of double counting which negatively impacts the forecasts. It is worth noting that finding information that can `augment' the information in the betting odds is a much more difficult task than finding information to produce forecasts from scratch. We discuss this further in section~\ref{results_summary}. \par \begin{figure}[!htb] \centering \includegraphics[scale=0.9]{skill_and_profit_func_halflife_without_odds.png} \caption{Top panel: Mean ignorance (blue line, left axis) and RPS (red line, left axis) for forecasts of the match outcome as a function of the half life parameter when the odds-implied probability is not included as an additional predictor variable. Both scores are given relative to that of the case in which the predicted number of goals is calculated using the climatological probability of scoring. Lower panel: Overall profit from the Level Stakes (magenta) and Kelly (green) strategies as a function of half life. The dashed horizontal lines show the overall profit when the predicted number of goals is calculated using the climatological probability of scoring.} \label{figure:skill_and_profit_func_halflife_without_odds} \end{figure} \begin{figure}[!htb] \centering \includegraphics[scale=0.9]{skill_and_profit_func_halflife_with_odds.png} \caption{Top panel: Mean ignorance (blue line, left axis) and RPS (red line, left axis) for forecasts of the match outcome as a function of the half life parameter when the odds-implied probability is included as an additional predictor variable. Both scores are given relative to that of the case in which the predicted number of goals is calculated using the climatological probability of scoring. Lower panel: Overall profit from the Level Stakes (magenta) and Kelly (green) strategies as a function of half life. The dashed horizontal lines show the overall profit when the predicted number of goals is calculated using the climatological probability of scoring.} \label{figure:skill_and_profit_func_halflife_with_odds} \end{figure} \subsection{Over/under 2.5 goal forecasts} We now turn to the over/under 2.5 goal forecasts. The results for the case in which the odds-implied probability is not included as an additional predictor variable are shown in figure~\ref{figure:skill_and_profit_func_halflife_without_odds_OU}. Similarly to the forecasts of the match outcome, here, the top panel shows the mean Ignorance and Brier scores given relative to the case in which the forecasts are formed using the predicted number of goals produced using the climatological probability of the rate of shot success. Since both relative scores are negative, our shot success model is able to increase the skill of the forecasts. \par The overall profit achieved using the Kelly and Level Stakes betting strategies is shown in the lower panel of figure~\ref{figure:skill_and_profit_func_halflife_without_odds_OU}. Here, as before, the solid lines show the overall profit for the case in which the predicted number of goals are produced using our shot success model and the dashed lines the case in which the climatological rate of shot success is used. Here, there is a major improvement in the gambling return from using our model of shot success, although the profit is still slightly negative for all values of the half life parameter. The optimal half life parameter of 90 days is slightly longer than for the match outcome forecasts but this still suggests that relatively recent matches are most relevant. \par \begin{figure}[!htb] \centering \includegraphics[scale=0.9]{skill_and_profit_func_halflife_without_odds_OU.png} \caption{Top panel: Mean ignorance (blue line, left axis) and Brier score (red line, left axis) for the over/under 2.5 goal forecasts as a function of the half life parameter when the odds-implied probability is not included as a predictor variable. Both scores are given relative to that of the case in which the predicted number of goals is calculated using the climatological probability of scoring. Lower panel: Overall profit from the Level Stakes (magenta) and Kelly (green) strategies as a function of the half life parameter. The dashed horizontal lines show the overall profit when the predicted number of goals is calculated using the climatological probability of shot success.} \label{figure:skill_and_profit_func_halflife_without_odds_OU} \end{figure} Figure~\ref{figure:skill_and_profit_func_halflife_with_odds_OU} shows the same results as figure~\ref{figure:skill_and_profit_func_halflife_without_odds_OU} but for the case in which the odds-implied probability is included as an additional predictor variable. Here, the relative skill under both the Ignorance and Brier scores is negative, implying that our shot success model increases the skill of the over/under 2.5 goal forecasts. For most values of the half life parameter, there is also an increase in profit under both strategies. This is a very different result to the match outcome case in which we were unable to improve the forecasts using our model of shot success. Interestingly, the most effective choice of half life parameter is 300 days, suggesting that shot success over a longer period of time is relevant here. \par \begin{figure}[!htb] \centering \includegraphics[scale=0.9]{skill_and_profit_func_halflife_with_odds_OU.png} \caption{Top panel: Mean ignorance (blue line, left axis) and Brier score (red line, left axis) for the over/under 2.5 goal forecasts as a function of the half life parameter when the odds-implied probability is included as a predictor variable. Both scores are given relative to that of the case in which the predicted number of goals is calculated using the climatological probability of scoring. Lower panel: Overall profit from the Level Stakes (magenta) and Kelly (green) strategies as a function of the half life parameter. The dashed horizontal lines show the overall profit when the predicted number of goals is calculated using the climatological probability of shot success.} \label{figure:skill_and_profit_func_halflife_with_odds_OU} \end{figure} \subsection{Summary} \label{results_summary} The results above demonstrate that our model for predicting shot success can improve the skill of shot-based forecasts for both match outcomes and for whether the total number of goals will exceed 2.5. For the case in which the odds-implied probability is not included in the forecasts, gains in forecast skill are demonstrated for both sets of forecasts. For the case in which the odds-implied probability is included, the results are more mixed with an improvement in the skill of over/under 2.5 goal forecasts and a reduction in the skill of forecasts of the match outcome. \par It is worth noting the philosophical difference between forecasts formed with and without the odds-implied probability included as an additional predictor variable. In the latter case, we are building forecasts effectively from scratch and therefore it should be relatively straightforward to find information that adds to the skill. We know that we are able to build skillful forecasts using predicted match statistics and, logically, if we can incorporate skillful forecasts of the rate of shot success, we should be able to improve the forecasts and this has proven to be the case. In the former case, we have a very different situation. Betting odds are generally considered to be highly informative reflections of the underlying probability of an outcome (though there are a number of known biases), taking into account a wide range of factors. Finding information that can `augment' this information is therefore a much more difficult task. Further, there is likely to be a complicated relationship between our forecasts of the rate of shot success and the extent to which this information is reflected in the odds. The fact that we are able to improve the over/under 2.5 goal forecasts but not the match outcome forecasts is testament to this complex relationship. \par It is less clear whether the general improvement in skill achieved from the forecasts of the rate of shot success leads to increased gambling profit. This probably reflects the complex relationship between forecast probabilities and gambling returns. For both the Level Stakes and Kelly strategies, gambling success is dependent on finding bets that offer a positive expected return. Success at doing this, however, will not necessarily increase with forecast skill. Consider the Level Stakes case. Here, a bet is taken if the forecast probability is higher than the odds-implied probability and the forecast therefore implies that there is value. The success of the strategy is dependent on the forecasts successfully identifying bets in which there is genuine value. If improvements in skill are largely seen in forecasts in which the decision as to whether to bet or not is unchanged and reductions in skill are in `borderline' cases, it is easy to see how the profit may fall with increased average forecast skill. This is not a criticism of the approach of using scoring rules to evaluate forecasts but rather a demonstration of the difference between forecast skill and the utility of using the forecasts for a particular decision process. \par \section{Discussion} \label{section:discussion} In this paper, we have presented a model for predicting the probability of a football team scoring from a shot at goal. Whilst the model suffers from overfitting, we are able to calibrate the forecasts to produce good forecast skill. We have also demonstrated that the model of shot success can be used alongside predictions of the number of shots achieved by each team to provide improved skill for both match outcome and over/under 2.5 goal forecasts. \par Whilst our shot success model has been shown to be able to produce improved forecast skill, there is also an economic interpretation of the results. The experiments we have conducted were partly inspired by the results shown in \cite{wheatcroft2020} and \cite{wheatcroft2019evaluating} that showed that predicted match statistics, formed using GAP ratings, can provide forecast skill beyond that reflected in the odds. We have built on this and shown that, for over/under 2.5 goal forecasts, we can provide further improvement using our forecasts of the rate of shot success. As described in the aforementioned papers, the fact that predicted match statistics can improve a set of forecasts has implications for the efficiency of the betting markets, implying that the market does not efficiently account for this information. The results in this paper build on that and suggest that the over/under 2.5 goal market does not adequately account for the probability of scoring from a shot. We do not have evidence that this is the case for the match outcome market, however. \par In our opinion, there is potential value in the model beyond those applications demonstrated here. It is, of course, desirable for a team to score with a relatively high proportion of shots, since doing so would result in more goals and better match results. Similarly, it is desirable to concede from a relatively small proportion of shots. A manager looking to improve their team's results may be interested both in the quality of their players' shot conversion and the ability of their defence to prevent the opposition from converting their shots. However, simply looking at observed rates of shot conversion in recent matches would likely not give a robust estimate of their skill in converting shots to goals. Our shot success forecasts are a potentially useful alternative to looking at observed numbers because they provide a more robust measure of the skill of each team since the half life and blending parameters have been chosen with respect to objective forecast skill. This objectivity allows some of the inevitable biases of the manager to be removed when assessing the performance of their team. \par Another interesting question regards the value of combining the model presented here with expected goals methodologies. The idea behind expected goals is that the location and nature of each shot is used to provide an estimate of the probability of a shot ending with a goal. The sum of the probabilities assigned to the shots in a match can then be interpreted as a measure of the number of goals a team would be `expected' to score, given the shots it has taken. Importantly, expected goals typically don't take into account the relative abilities of the teams or players. Conversely, it is important to note that, under our model, the nature of a shot is not taken into account. This is potentially important because the location from which shots are taken have a big impact on the probability of scoring and some teams may be more likely to take shots from locations in which it is difficult to score, reducing their shot conversion rate. In order to determine whether a rate of shot conversion is due to the nature of the shots or poor shooting ability, one could compare the probability of scoring from each shot under the expected goals methodology with the forecast probability of scoring under our model (which, unlike expected goals, takes into account the ability of the teams). If the latter is typically higher than the former, one might conclude that a team's shooting ability is high. \par In conclusion, it is becoming increasingly clear that forecasts based on the number of shots at goal have great value in predicting the outcomes of football matches. An obvious weakness of this approach is that the ability of the two teams involved is not taken into account. This paper provides a potential solution to that weakness. \par
{ "attr-fineweb-edu": 2.382812, "attr-cc_en_topic": 0, "domain": "arxiv" }
BkiUcag5qsNCPfdFeVm8
\section{Introduction} Predicting the outcome of contests in organized sports can be attractive for a number of reasons such as betting on those outcomes, whether in organized sports betting or informally with colleagues and friends, or simply to stimulate conversations about who ``should have won''. We would assume that this task is easier in professional leagues, such as Major League Baseball (MLB), the National Basketball Association (NBA), or the National Football Association (NFL), since there are only relatively few teams and their quality does not vary too widely. As an effect of this, match statistics should be meaningful early on since the competition is strong, and teams play the same opponents frequently. Additionally, professional leagues typically play more matches per team per season, e.g. 82 in the NBA or 162 in MLB, than in college or amateur leagues in which the sport in question is (supposed to be) only a side aspect of athletes' lives. National College Athletics Association Basketball (NCAAB) matches therefore offer a challenging setting for predictive learning: more than 300 teams that have strongly diverging resource bases in terms of money, facilities, and national exposure and therefore attractiveness for high quality players, play about 30 games each per season, can choose many of their opponents themselves (another difference to professional teams), and often have little consistency in the composition of teams from one season to the next since especially star players will quickly move on to professional sports. Lopsided results and unrealistic match statistics will therefore not be uncommon, distorting the perception of teams' quality. Most of the existing work in the field is more or less statistical in nature, with much of the work developed in blog posts or web columns. Many problems that can be addressed by statistical methods also offer themselves up as Machine Learning settings, with the expected advantage that the burden of specifying the particulars of the model shifts from a statistician to the algorithm. Yet so far there is relatively little such work in the ML literature. The main goal of the work reported in this paper was therefore to assess the usefulness of classifier learning for the purpose of predicting the outcome of individual NCAAB matches. Several results of this work were somewhat unexpected to us: \begin{itemize} \item Multi-layer perceptrons, an ML technique that is currently not seeing wide-spread use, proved to be most effective in the explored settings. \item Explicitly modeling the differences between teams' attributes \emph{does not} improve predictive accuracy. \item Most interestingly, there seems to be a ``glass ceiling'' of about 74\% predictive accuracy that cannot be exceeded by ML or statistical techniques. \end{itemize} \section{\label{definitions}Definitions} The most straight-forward way of describing basketball teams in such a way that success in a match can be predicted relate to scoring points -- either scoring points offensively or preventing the opponent's scoring defensively. Relatively easy to measure offensive statistics include field goals made (FGM), three-point shots made (3FGM), free throws after fouls (FT), offensive rebounds that provide an additional attempt at scoring (OR), but also turnovers that deprive a team of an opportunity to score (TO). Defensively speaking, there are defensive rebounds that end the opponent's possession and give a team control of the ball (DR), steals that have the same effect and make up part of the opponent's turnovers (STL), and blocks, which prevent the opponent from scoring (BLK). And of course, there are points per game (PPG) and points allowed per game (PAG). The problem with these statistics is that they are all raw numbers, which limits their expressiveness. If a team collects 30 rebounds in total during a game, we cannot know whether to consider this a good result unless we know how many rebounds were there to be had in the first place. 30 of 40 is obviously a better rebound rate than 30 of 60. Similar statements can be made for field goals and free throws, which is why statistics like offensive rebound rate (ORR), turnover rate (TOR), or field goals attempted (FGA) will paint a better picture. Even in that case, however, such statistics are not normalized: 40 rebounds in a game in which both teams combined to shoot 100 times at the basket is different from 40 rebounds when there were only 80 scoring attempts. For normalization, one can calculate the number of possessions in a given game: $$Possessions = 0.96*(FGA - OR - TO + (0.475*FTA))$$ and normalize teams' points scored and allowed per 100 possessions, deriving offensive and defensive \emph{efficiencies}: $$OE = \frac{Points\ scored * 100}{Possessions}, DE = \frac{Points\ allowed * 100}{Possessions}$$ It should be noted that the factor $0.475$ is empirically estimated -- when first introducing the above formulation for the NBA, Dean Oliver estimated the factor as $0.4$ \cite{basketball-on-paper}. Dean Oliver has also singled out four statistics as being of particular relevance for a team's success, the so-called ``Four Factors'' (in order of importance, with their relative weight in parentheses): \begin{enumerate} \item Effective field goal percentage (0.4): $$eFG\% = \frac{FGM + 0.5 \cdot 3FGM}{FGA}$$ \item Turnover percentage (0.25): $$TO\% = \frac{TO}{Possessions}$$ \item Offensive Rebound Percentage (0.2): $$OR\% = \frac{OR}{(OR + DR_{Opponent})}$$ \item Free throw rate (0.15): $$FTR = \frac{FTA}{FGA}$$ \end{enumerate} While such statistics are normalized w.r.t. the ``pace'' of a game, they do not take the opponent's quality into account, which can be of particular importance in the college game: a team that puts up impressive offensive statistics against (an) opponent(s) that is (are) weak defensively, should be considered less good than a team that can deliver similar statistics against better-defending opponents. For best expected performance, one should therefore normalize w.r.t. pace, opponent's level, and national average, deriving \emph{adjusted} efficiencies: $$AdjOE = \frac{OE * avg_{all\ teams}(OE)}{AdjDE_{opponent}}, AdjDE = \frac{DE * avg_{all\ teams}(DE)}{AdjOE_{opponent}}$$ To gain a comprehensive picture of a team's performance during the season, such statistics would have to be averaged over all games (we describe two approaches for doing so in Section \ref{subsec:adjusted-effs}), and a state-of-the-art way of using the derived statistics in predicting match outcomes consists of using the so-called Pythagorean Expectation, e.g.: $$Win\ Probability = \frac{((Adjusted)\ OE_{avg})^y}{((Adjusted)\ OE_{avg})^y + ((Adjusted)\ DE_{avg})^y}$$ to calculate each team's win probability and predicting that the team with the higher probability wins. More generally, \emph{ranking systems} can by used by ranking the entire pool of teams and predicting for each match-up that the higher ranked team wins. \section{\label{related}Related Work} The use of the Pythagorean Expectation actually goes back to Bill James' work on baseball. It was adapted for the use in basketball prediction by numerous analysts, including such luminaries as Daryl Morey, John Hollinger, Ken Pomeroy, and Dean Oliver. The difference between the different approaches comes down to which measures of offensive and defensive prowess are used and how the exponent has been estimated. Dean Oliver was also the one who first introduced possession-based analysis formally in his book ``Basketball on Paper'' \cite{basketball-on-paper}, although he acknowledges that he had seen different coaches use such analysis in practice. In the same work, he introduced the ``Four Factors''. The adjustment of efficiencies to the opponent's quality is due to Ken Pomeroy who uses them as input in his version of the Pythagorean Expectation to rank NCAAB teams and predict match outcomes. His is far from the only ranking system, however, with other analysts like Jeff Sagarin, Ken Massey or Raymond Cheung running their own web sites and giving their own predictions. Comparisons of the results of different ranking systems can for instance be found at \url{http://masseyratings.com/cb/compare.htm} or \url{http://www.raymondcheong.com/rankings/perf13.html}. The worst accuracy for those systems is in the $62\%-64\%$ range, equivalent to predicting that the home team wins, the best ones achieve up to $74\%-75\%$. The NCAAB itself uses the so-called Ratings Percentage Index to rank teams, a linear weighted sum of a team's winning percentage, its opponents' winning percentage, and the winning percentage of those opponents' opponents. As an alternative approach, Kvam \emph{et al.} have proposed a logistic regression/Markov chain model \cite{journal/nrl/kvam2006}. In this method, each team is represented as a state in a Markov chain and state transitions occur if one team is considered better than its opponent. Logistic regression is used to estimate transition probability parameters from the data. The authors have proposed an updated version using Bayesian estimates \cite{journal/jqas/brown2010}, and recently published work in which they estimate their method's success in comparison to other ranking schemes \cite{conf:sloans/2012/brown}. \section{\label{predictions}Day-by-day predictions using ML} The approaches described in the preceding section are in many cases somewhat or even fully hand-crafted. This can be rather high-level, as in \emph{defining} the transition probabilities in LRMC's Markov chain by hand, or it can go as far as Ken Pomeroy taking home court advantage into consideration by \emph{multiplying} the home team's stats by $1.014$. Furthermore, especially the Pythagorean Expectation seems to be a rather simple model. Machine Learning promises to address both of these issues: we would expect to be able to \emph{learn} the relative importance of different descriptive measures, in particular if this importance changes for different numerical ranges, and to be able to \emph{learn} their relationships, automatically making the model as difficult (or simple) as needed. We therefore turned to classification learners representing several different paradigms and evaluated their performance. In a reversal of current practice, explicit prediction of match outcomes could be used to rank teams by predicting the outcome of all hypothetical pairings and ranking teams by number of predicted wins. \ \\ \noindent The evaluated learners were: \begin{itemize} \item Decision trees, represented by C4.5. \item Rule learners, represented by Ripper. \item Artificial neural networks, represented by a Multi-layer Perceptron (MLP). \item Na{\"i}ve Bayes \item Ensemble learners, by a random forest. \end{itemize} All algorithms were used in the form of their respective Weka implementations and run with default parameter settings, with the exception of Na{\"i}ve Bayes, for which the ``Kernel Estimator'' option was activated to enable it to handle numerical attributes effectively, J48, whose pre-pruning threshold we set to $1\%$ of the training data, and the Random Forest, which we set to consist of $20$ trees. All data has been downloaded from Ken Pomeroy's web site, \url{kenpom.com}, and we limit ourselves to matches involving two Division I teams. Matches were encoded by location (home, away, neutral court), the chosen numerical statistics up to the day the match was played, and the outcome (win, loss) from the perspective of the first team. We always chose the team with the lexicographically smaller name as first team. For each experiment run, one season was used as test set and the preceding seasons from 2008 onward as training data, leading to the training and test set sizes shown in Table \ref{ds-sizes}. \begin{table} \begin{center} \begin{tabular}{l|c|c|c|c|c} Season & 2009 &2010 &2011&2012&2013\\\hline Train & 5265 &10601& 15990&21373&26772\\ Test & 5336 &5389&5383&5399&5464\\ \end{tabular} \caption{Training and test set sizes per season\label{ds-sizes}} \end{center} \end{table} \subsection{Seasonal Averaging} Ken Pomeroy's web site features only the most recent averaged adjusted efficiencies (and averaged Four Factors), i.e. from the end of the season for completed seasons, and for seasons in progress the efficiencies up to the current date. We therefore calculated the day-to-day averaged adjusted efficiencies ourselves, following Pomeroy's description. While that description is very precise for the most part, the averaging is summarized as averaging over the season with more weight given to recent games. We chose to average via two methods: \begin{enumerate} \item an adjustable weight parameter $\alpha$: $$AdjE_{avg,post-match} = (1-\alpha) AdjE_{avg,pre-match} + \alpha AdjE_{post-match}$$ and evaluated a number of different alpha values. Both averaged efficiencies and Four Factors stabilized for $\alpha=0.2$. To have a pre-match value for the first game of the season, we used the preceding season's end-of-season efficiencies, and \item explicitly:\\ A side-effect of using an $\alpha$-parameter less than $0.5$ (e.g. 0.2) in averaging is that last season's end-of-season averaged adjusted efficiency is weighted rather highly since it is the only value whose weight is never multiplied with $\alpha$ itself but always with $(1-\alpha)$. We therefore evaluated a different weighting scheme in which each match's adjusted efficiency is weighted explicitly with the number of games played $+1$. This means that last season's end-of-season efficiency has weight one, the adjusted efficiency of the first game weight two etc. The sum is normalized with the total sum of weights up to the current date. \end{enumerate} We have to admit that using either way, we did not manage to arrive at the same end-of-season efficiencies as Ken Pomeroy. Typically, our values are more extreme, with adjusted offensive efficiencies higher and adjusted defensive efficiencies lower than Pomeroy's values. Also, since $\alpha$-weighting performed consistently worse, we will focus on the explicit averaging for the rest of the paper. \subsection{Using adjusted efficiencies\label{subsec:adjusted-effs}} In the first set of experiments, we aimed to identify which attributes out of the full set of raw statistics, normalized statistics, Four Factors, and adjusted efficiencies were most useful in predicting match outcomes. We found the combinations of location and adjusted offensive and defensive efficiencies, and location and Four Factors to work best. This result is supported by the outcome of using Weka's feature selection methods to winnow the attribute set down, which select location first, followed by adjusted efficiencies, and the Four Factors. A somewhat surprising result is the weak performance of the symbolic classifiers: MLP and Na{\"i}ve Bayes give consistently best results (Table \ref{adjusted-efficiencies}). We also see that more training data does not translate into better models, and that 2012 seems to have been an outlier season. \begin{table*} \begin{tabular}{c@{\hspace{1cm}}c} \begin{minipage}{0.45\textwidth} \begin{center} \begin{tabular}{l|c|c|c|c} Season & J48 & RF & NB & MLP\\\hline 2009 & 0.6839 & 0.6885 & 0.7101 & 0.7077\\ 2010 & 0.6899 & 0.6942 & 0.7172 & 0.7251\\ 2011 & 0.6905 & 0.6779 & 0.7028 & 0.716\\ 2012 & 0.7042 & 0.7137 & 0.7276 & 0.7446\\ 2013 & 0.6898 & 0.6881 & 0.7193 & 0.7215\\ \end{tabular} \caption{Match outcome prediction accuracies using adjusted efficiencies\label{adjusted-efficiencies}} \end{center} \end{minipage} & \begin{minipage}{0.45\textwidth} \begin{center} \begin{tabular}{l|c|c|c|c} Season & J48 & RF & NB & MLP\\\hline 2009 & 0.6647 & 0.6801 & 0.7121 & 0.7011\\ 2010 & 0.6645 & 0.6931 & 0.7202 & 0.7165\\ 2011 & 0.6622 & 0.6983 & 0.7206 & 0.7121\\ 2012 & 0.6788 & 0.702 & 0.7305 & 0.7311\\ 2013 & 0.6508 & 0.6892 & 0.7081 & 0.7092\\ \end{tabular} \caption{Match outcome prediction accuracies using adjusted four factor\label{explicitly-adjusted-ff}} \end{center} \end{minipage} \end{tabular} \end{table*} The accuracies for the different seasons are on par with those of the best-performing predictive systems, e.g. Ken Pomeroy's predictions and the LRMC, but unfortunately they are not better. \subsection{Using adjusted Four Factors} As mentioned in Section \ref{related}, Dean Oliver proposed the so-called ``Four Factors'' as being influential for a team's success. Since our experiments had indicated that the unadjusted Four Factors were already as useful in predicting match outcomes as adjusted efficiencies, we assumed that adjusted Four Factors should be more effective. We therefore performed adjusting in the same way as for efficiencies: multiplying with the national average and dividing by the opponent's counter-statistic, averaging using both methods. Averaging using $\alpha$ proved again to be worse, while explicitly averaging lead to similar yet slightly worse results compared to using adjusted efficiencies, as Table \ref{explicitly-adjusted-ff} shows. In a bid to improve the performance of the symbolic classifiers, we also experimented with encoding the differences between adjusted Four Factors explicitly, hypothesizing that for instance C4.5's over-fitting had to do with inducing branches for many different combinations of values that could be summarized by their difference. We either subtracted a team's defensive factor from the opponent's corresponding offensive factor, or subtracted offensive from corresponding offensive, and defensive from corresponding defensive factors. The former scheme severely underperformed, while the latter scheme with explicit weights for averaging showed very similar results to Table \ref{explicitly-adjusted-ff}. Finally, we attempted to address our more extreme adjusted values by calculating each season from stretch, not using the preceding season's values as input for the first game. While the resulting adjusted efficiencies are closer to those reported on Pomeroy's web site, prediction accuracies also decrease slightly. \subsection{Development of predictive accuracy as the season progresses} \begin{figure} \begin{center} \includegraphics[angle=270,width=0.8\textwidth]{adjusted_values-AdjEff-outcome-MLP} \caption{Development of predictive accuracy over the course of a season (MLP, AdjEff)\label{season-curve-adjeff-mlp}} \end{center} \end{figure} Figure \ref{season-curve-adjeff-mlp} shows how predictive accuracy develop as the season progresses. We chose MLP with adjusted efficiencies for this plot but the general trend is representative for other settings. With the exception for 2009, when only training data from 2008 was available, predictive accuracy is $100\%$ or close to it for the first few days of the season and then experiences a dip before it recovers, and shows only slight deterioration for the rest of the season. Interesting, but hard to spot in the plot, is that there are small up-and-downs in the playoffs, particularly in the last rounds, for instance predicting the semi-finals and final correctly after getting the quarter-finals wrong. \section{\label{conclusions}Lessons learned and open questions} In this work, we have explored the use of ML techniques, specifically classification learners, for making NCAAB match outcome predictions. These are just preliminary steps and the exploration is obviously far from complete. While the results were somewhat disappointing, we want to stress that they were not bad per se -- being on par with the state-of-the-art is only disappointing since we aimed to improve on it. Given our results, however, we believe that there are two first lessons that can be learned and that should guide our next steps. \subsection{It's in the attributes, not in the models} As stated above, one of our expectations was that more complex models could tease out relationships that simpler models would miss. Instead, we found that Na{\"i}ve Bayes, arguably the simplest of the classifiers, performs remarkably well. Similar observations can actually be made about existing techniques, since Ken Pomeroy's straight-forward Pythagorean Expectation performs as well as, or even better than, the much more complex LRMC model, Brown \emph{et al.}'s claims notwithstanding. Instead, whatever differences in performance we have observed essentially came down to the used attributes and how they were calculated: adjusted efficiencies and (adjusted) Four Factors are validated both by feature selection techniques and by the success of the classifiers trained on those representations but different ways of averaging over the season have an effect on the quality. Using other or additional features, on the other hand, leads to worse results. In a sense, this should not be surprising: any given match will be won by the team that scored more points than the other one, which is the information encoded in the adjusted efficiencies, for instance. Of course there is also ML/DM conventional wisdom that the main aspect of using such techniques effectively consists of constructing the right representation. Still, we found it surprising how stark the influence of choosing the right attributes was on achieving best results. \subsection{There seems to be a glass ceiling} Which brings us to the second lesson: the other invariant that we saw in our experiments is that there seems to be an upper limit to predictive accuracy for match outcomes, at around $74\%-75\%$. This holds not only for Na{\"i}ve Bayes and the MLP, but when one considers comparisons of non-ML methods, e.g. \url{http://www.raymondcheong.com/rankings/perf13.html} or \cite{conf:sloans/2012/brown}, one finds similar results. Additionally there are works in fields such a soccer \cite{DBLP:conf/ijcnn/HuangC10} (76.9\%), American Football \cite{535226} (78.6\%), NCAA Football \cite{pardee1999artificial} (76.2\%), and the NBA \cite{RePEc:bpj:jqsprt:v:5:y:2009:i:1:n:7} (74.33\%) that show best results in a similar region. It is difficult to determine why this is the case. If the claim made in the preceding section holds and the performance of predictors comes down to attribute construction, then maybe this glass ceiling is an artifact of the attributes we and others use. It is also possible, however, that there is simply a relatively large residue of college basketball matches that is in the truest sense of the world unpredictable. \subsection{Where to next?} First off, there is need to verify that our first lesson is correct and attributes are indeed what make or break success. To this end, different feature selection and modeling techniques need to be contrasted to get a clear understanding of attributes' effects, and how to best aggregate them over the course of a season. Following (or parallel to) that, both of the possible explanations for the glass ceiling given above offer themselves up for exploration that we intend to pursue in the near future: 1) Most existing attributes do not encode so-called ``intangibles'' such as experience, leadership, or luck. Attempts have been made to construct objective indicators, as in \url{http://harvardsportsanalysis.wordpress.com/2012/03/14/survival-of-the-fittest-a-new-model-for-ncaa-tournament-prediction/}, whose author proposes a ``Returning Minutes Percentage'', Dean Oliver's attempts to measure positional stability, or Ken Pomeroy's work that takes the luck of teams into account. Pomeroy incidentally credits Dean Oliver (once again) with having introduced this into basketball analysis. Hence, constructing new attributes that include additional information could improve predictive power. 2) A better understanding of incorrectly predicted matches is necessary. The weak performance of ensembles indicates that misclassified matches are not easily modeled. However, identifying similarities of misclassified matches or learning a model that can discriminate correctly and incorrectly classified instances, would help in gaining an understanding whether those matches are different or simply unpredictable. At this point, we would also finally come back to whether we can determine which team ``should have won''. Finally, somewhat unrelated, it could be interesting to separate training data by conference and learn models particular to the involvement of certain conferences teams. The most challenging question would probably have to do with how to decide which model's prediction to use if the two models disagree. \bibliographystyle{plain}
{ "attr-fineweb-edu": 2.003906, "attr-cc_en_topic": 0, "domain": "arxiv" }
BkiUcg825V5ha7jY8M0a
\section*{INTRODUCTION} Everyday, billions of individuals around the world travel. These movements form a socio-economic complex network, backbone for the transport of people, goods, money, information or even diseases at different spatial scales. The study of such spatial networks is consequently the subject of an intensive scientific activity \cite{Barthelemy2011}. Some examples include the estimation of population flows \cite{Murat2010,Gargiulo2012,Simini2012,Lenormand2012,Thomas2013,Lenormand2014,Yang2014,Sagarra2015}, transport planning and modeling \cite{Rouwendal2004,Ortuzar2011}, spatial network analysis \cite{DeMontis2007,DeMontis2010}, study of urban traffic \cite{DeMontis2007} and modeling of the spreading of infectious diseases \cite{Viboud2006,Balcan2009,Tizzoni2014}. Trip distribution modeling is thus crucial for the prediction of population movements, but also for an explanatory purpose, in order to better understand the mechanisms of human mobility. There are two major approaches for the estimation of trip distribution at an aggregate level. The traditional gravity approach, in analogy with the Newton's law of gravitation, is based on the assumption that the amount of trips between two locations is related to their populations and decays with a function of the distance \cite{Carey1858, Zipf1946,Wilson1970, Erlander1990}. In contrast to the gravity law, the Stouffer's law of intervening opportunities \cite{Stouffer1940} hinges on the assumption that the number of opportunities plays a more important role in the location choices than the distance, particularly in the case of migration choices. The original law proposed by Stouffer has been reformulated by Schneider \cite{Schneider1959} and extensively studied since then \cite{Heanue1966,Ruiter1967,Wilson1970,Haynes1973,Fik1990,Akwawua2001}. The two approaches have been widely compared during the second half of the twentieth century \cite{David1961,Pyers1966,Lawson1967,Zhao2001} showing that generally both approaches performed comparably. However, the simplicity of the mathematical form of the gravity approach appears to have weighted in its favor \cite{Ortuzar2011}. Indeed, the gravity approach has been extensively used in the past few decades to model, for instance, flows of population \cite{Viboud2006,Griffith2009,Balcan2009,Murat2010,Gargiulo2012,Lenormand2012,Thomas2013,Masucci2013,Liang2013,Lenormand2014,Tizzoni2014,Liu2014}, spatial accessibility to health services \cite{Luo2003}, volume of international trade \cite{Anderson1979,Bergstrand1985}, traffic in transport networks \cite{Jung2008,Kaluza2010} and phone communications \cite{Krings2009}. \begin{figure*} \begin{center} \includegraphics[width=14cm]{Fig1} \caption{\textbf{Position of the units' centroids for the six countries.} $\langle S \rangle$ represents the average surface of the census units (i.e. municipalities, counties or wards). \label{Fig1}} \end{center} \end{figure*} However, the concept of intervening opportunities has recently regained in popularity thanks to the recently proposed radiation approach \cite{Simini2012,Simini2013,Ren2014,Yang2014}. This approach is inspired by a simple diffusion model where the amount of trips between two locations depends on their populations and the number of opportunities between them. The gravity law and the radiation law have been compared several times during the last years giving the superiority to either of the approaches depending on the study \cite{Simini2012,Lenormand2012,Masucci2013,Liang2013,Yang2014}. Two main issues can be identified in these comparisons. First, the inputs used to simulate the flows are not always identical. For example, in the comparison proposed in \cite{Masucci2013}, the gravity law tested takes as input the population, whereas the radiation law is based on the number of jobs. Second, in all these studies, the models used to generate the trips from the radiation and the gravity laws are not constrained in the same way. The radiation models are always production constrained, this means that the number of trips, or at least an estimation of the number of trips generated by census unit, is preserved. The models used to generate the trips with the gravity laws can be either, unconstrained \cite{Simini2012,Masucci2013}, only the total number of trips is preserved or doubly constrained \cite{Lenormand2012,Yang2014}, both the trips produced and attracted by a census unit are preserved. Therefore, to fairly compare different approaches the same input data must be used and, most importantly, we need to differentiate the law, gravity or intervening opportunities, and the modeling framework used to generate the trips from this law. Indeed, both the gravity laws and the intervening opportunities laws can be expressed as a probability to move from one place to another, called trip distribution law, and based on these probability distributions, the total number of trips can then be simulated using different trip distribution models including different level of constraints. In this work, we test and compare, in a systematic and rigorous way, gravity and intervening opportunities laws against commuting census data coming from six different countries using four different constrained models to generate the networks: unconstrained model, single constrained models (production or attraction) and the well-known doubly constrained model. For the gravity law, since the form of the distance decay functions may vary from one study to another \cite{Fotheringham1981,Viboud2006,Vries2009,Balcan2009,Barthelemy2011,Lenormand2014,Chen2015} both the power and the exponential forms are tested to model the impact of the distance. The intervening opportunities law is given by the Schneider's version of the Stouffer's original law as it is usually the case. We also considered two versions of the radiation law, the original free-parameter model \cite{Simini2012} and the extended version proposed in \cite{Yang2014}. The simulated networks are compared with the observed ones on different aspects showing that, globally, the gravity law with an exponential distance decay function outperforms the other laws in the estimation of commuting flows, the conservation of the commuting network structure and the fit of the commuting distance distribution even if it fails at predicting commuting flows at large distances. Finally, we show that the different laws can be used in absence of detailed data for calibration since their only parameter depends only on the scale of the geographic census unit. \section*{DATA} In this study, the trip distribution laws and models are tested against census commuting data of six countries: England and Wales, France, Italy, Mexico, Spain and the United States of America (hereafter called E\&W, FRA, ITA, MEX, SPA and USA, respectively) and two cities: London and Paris (hereafter called LON and PAR, respectively). \begin{itemize} \item The England \& Wales dataset comes from the $2001$ Census in England and Wales made available by the Office for National Statistics (data available online at \url{https://www.nomisweb.co.uk/query/construct/summary.asp?mode=construct&version=0&dataset=124}). \item The French dataset was measured for the $1999$ French Census by the French Statistical Institute (data available upon request at \url{http://www.cmh.ens.fr/greco/adisp_eng.php}). \item The Italian's commuting network was extracted from the $2001$ Italian Census by the National Institute for Statistics (data available upon request at \url{http://www.istat.it/it/archivio/139381}). \item Data on commuting trips between Mexican's municipalities in $2011$ are based on a microdata sample coming from the Mexican National Institute for Statistics (data available online at \url{http://www3.inegi.org.mx/sistemas/microdatos/default2010.aspx}). \item The Spanish dataset comes from the $2001$ Spanish Census made available by the Spanish National Statistics Institute (data available upon request at \url{http://www.ine.es/en/censo2001/index_en.html}). \item Data on commuting trips between United States counties in $2000$ comes from the United State Census Bureau (data available online at \url{https://www.census.gov/population/www/cen2000/commuting/index.html}). \end{itemize} Each case study is divided into $n$ census units of different spatial scale: from the Output Area in London with an average surface of $1.68\mbox{ km}^2$ to the counties in the United States with an average surface of $2596.8\mbox{ km}^2$. See Table \ref{tab1} for a detailed description of the datasets. \begin{table*}[!ht] \caption{Presentation of the datasets \label{tab1}} \label{Datasets} \begin{tabular}{>{}m{3cm}>{\centering}m{4cm}>{\centering}m{3cm}m{3cm}<{\centering}} \hline \textbf{Case study} & \textbf{Number of units} & \textbf{Number of links} & \textbf{Number of Commuters}\\ \hline England \& Wales & 8,846 wards & 1,269,396 & 18,374,407\\ France & 3,645 cantons & 462,838 & 12,193,058\\ Italy & 7,319 municipalities & 419,556 & 8,973,671\\ Mexico & 2,456 municipalities & 60,049 & 603,688 \\ Spain & 7,950 municipalities & 261,084 & 5,102,359\\ United State & 3,108 counties & 161,522 & 34,097,929\\ London & 4,664 Output Areas & 750,943 & 4,373,442\\ Paris & 3,185 municipalities & 277,252 & 3,789,487\\ \hline \end{tabular} \end{table*} \begin{figure}[!ht] \begin{center} \includegraphics[width=\linewidth]{Fig2} \caption{\textbf{Position of the units' centroids around London (left) and Paris (right).} The black contours represent the boundaries of the Greater London Authority (left) and the french \textit{d{\'e}partement} {I}le de France (right). $\langle S \rangle$ represents the average unit surface. \label{Fig2}} \end{center} \end{figure} Figures \ref{Fig1} and \ref{Fig2} display the centroids of the census units for the eight case studies. For each unit, the statistical offices provide the following information: \begin{itemize} \item $T_{ij}$, the number of trips between the census units $i$ and $j$ (i.e. number of individuals living in $i$ and working in $j$); \item $d_{ij}$, the great-circle distance between the unit $i$ and the unit $j$ computed with the Haversine formula; \item $m_i$, the number of inhabitants in unit $i$. \end{itemize} In this work we consider only inter-unit flows (i.e. $T_{ii}=0$), mainly because it is not possible to estimate intra-units flows with the radiation laws \footnote{~Note that it is possible to estimate intra-unit flows with the gravity laws by approximating intra-unit distances with, for example, half the square root of the unit's area or half the average distance to the nearest neighbors.}. We note $N=\sum_{i,j=1}^n T_{ij}$ the total number of commuters, $O_i=\sum_{j=1}^n T_{ij}$ the number of out-commuters (i.e. number of individuals living in $i$ and working in another census unit) and $D_j=\sum_{i=1}^n T_{ij}$ the number of in-commuters (i.e. number of individuals working in $j$ and living in another census unit ). \section*{COMPARISON OF TRIP DISTRIBUTION LAWS AND MODELS} The purpose of the trip distribution models is to split the total number of trips $N$ in order to generate a trip table $\tilde{T}=(\tilde{T}_{ij})_{1 \leq i,j \leq n}$ of the estimated number of trips form each census area to every other. Note that by trip we are referring to commuting travels from home to work, there is a return trip not considered in $\tilde{T}$ and $N$ is also equivalent to the number of unique commuters. The trip distribution depends on, on one hand, the characteristics of the census units and the way they are spatially distributed, and, on the other hand, the level of constraints required by the model. Therefore, to fairly compare different trip distribution modeling approaches we have to consider separately the law used to calculate the probability to observe a trip between two census units, called trip distribution law, and the trip distribution model used to generate the trip allocation from this law. \subsection*{Gravity and intervening opportunities laws} The purpose of this study is to test the capacity of both the gravity and the intervening opportunities approaches to estimate the probability $p_{ij}$ that out of all the possible travels in the system we have one between the census unit $i$ and $j$. This probability is asymmetric in $i$ and $j$ as the flows themselves, and, by convention, the self-loops are excluded of the analysis $p_{ii}=0$. This probability is normalized to all possible couples of origins and destinations, $\sum_{i,j=1}^n p_{ij} =1$. Note that $p_{ij}$ does not refer to the conditional probability of a trip starting in $i$ finishes in $j$ $\mathbb{P}(1|i,j)$. There exists a relation between both of them: \begin{equation} p_{ij} = \mathbb{P}(i) \, \mathbb{P}(1|i,j) \end{equation} where $\mathbb{P}(i)$ stands for the probability of a trip starting in $i$. $\mathbb{P}(1|i,j)$ will appear later for the intervening opportunities laws as a function of the populations of origin $m_i$, destination $m_j$ and the number of opportunities between them $s_{ij}$, $\mathbb{P}(1|m_i,m_j,s_{ij})$, but the basis of our analysis will be $p_{ij}$. \subsubsection*{Gravity laws} In the simplest form of the gravity approach, the probability of commuting between two units $i$ and $j$ is proportional to the product of the origin population $m_i$ and destination population $m_j$, and inversely proportional to the travel cost between the two units: \begin{equation} p_{ij} \propto m_i \, m_j \, f(d_{ij}),\,\,\,\,\,\,i\ne j \label{grav} \end{equation} The travel cost between $i$ and $j$ is usually modeled with an exponential distance decay function, \begin{equation} f(d_{ij})=e^{-\beta \, d_{ij}} \label{exp} \end{equation} or a power distance decay function, \begin{equation} f(d_{ij})={d_{ij}}^{-\beta} \label{pow} \end{equation} As mentioned in \cite{Barthelemy2011}, the form of the distance decay function can change according to the dataset, therefore, both the exponential and the power forms are considered in this study. In both cases, the importance of the distance in commuting choices is adjusted with a parameter $\beta$ with observed data. \subsubsection*{Intervening opportunities laws} In the intervening opportunity approach, the probability of commuting between two units $i$ and $j$ is proportional to the origin population $m_i$ and to the conditional probability that a commuter living in unit $i$ with population $m_i$ is attracted to unit $j$ with population $m_j$, given that there are $s_{ij}$ job opportunities in between. The conditional probability $\mathbb{P}(1|m_i,m_j,s_{ij})$ needs to be normalized to ensure that all the trips end in the region of interest. \begin{equation} p_{ij} \propto m_i \, \frac{\mathbb{P}(1|m_i,m_j,s_{ij})}{\sum_{k=1}^n\mathbb{P}(1|m_i,m_k,s_{ik})},\,\,\,\,\,\,i\ne j \label{IO} \end{equation} In the Schneider's version of the intervening opportunities approach the conditional probability is given by \begin{equation} \mathbb{P}(1|m_i,m_j,s_{ij})= e^{\displaystyle -\gamma s_{ij}}-e^{\displaystyle -\gamma (s_{ij}+m_j)} \label{schneider} \end{equation} where $s_{ij}$ is the number of opportunities (approximated by the population in this case) in a circle of radius $d_{ij}$ centered in $i$ (excluding the source and destination). The parameter $\gamma$ can be seen as a constant probability of accepting an opportunity destination. Note that in this version the number of opportunities $m_i$ at the origin is not taken into account. More recently, \cite{Simini2012} reformulated the Stouffer's intervening opportunities law in terms of radiation and absorption processes. This model is inspired by a diffusion model where each individual living in an unit $i$ has a certain probability of being ''absorbed'' by another unit $j$ according to the spatial distribution of opportunities. The original radiation model is free of parameters and, therefore, it does not require calibration. The conditional probability $\mathbb{P}(1|m_i,m_j,s_{ij})$ is expressed as: \begin{equation} \mathbb{P}(1|m_i,m_j,s_{ij})=\frac{m_i \, m_j}{(m_i+s_{ij})\, (m_i+m_j+s_{ij})} \label{rad} \end{equation} This conditional probability needs to be normalized because the probability for an individual living in a census unit $i$ of being absorbed by another census unit is not equal to $1$ in case of finite system but equal to $1-\frac{m_i}{M}$ where $M$ is the total population \cite{Masucci2013}. Some recent works have shown that the model fails to describe human mobility compared to more classic approaches particularly on a small scale \cite{Lenormand2012,Masucci2013, Liang2013}. To circumvent these limitations, an extended radiation model has been proposed by \cite{Yang2014}. In this extended version, the probability $\mathbb{P}(1|m_i,m_j,s_{ij})$ is derived under the survival analysis framework introducing a parameter $\alpha$ to control the effect of the number of job opportunities between the source and the destination on the job selection, \begin{equation} \hspace*{-0.6cm} \mathbb{P}(1|m_i,m_j,s_{ij})=\frac{[{(m_i+m_j+s_{ij})}^\alpha - {(m_i+s_{ij})}^\alpha]\, ({m_i}^\alpha + 1)}{[{(m_i+s_{ij})}^\alpha + 1]\, [{(m_i+m_j+s_{ij})}^\alpha + 1]} \label{extrad} \end{equation} \subsection*{Constrained models} After the description of the probabilistic laws, the next step is to materialize the people commuting. The purpose is to generate the commuting network $\tilde{T}=(\tilde{T}_{ij})_{1 \leq i,j \leq n}$ by drawing at random $N$ trips from the trip distribution law $(p_{ij})_{1 \leq i,j \leq n}$ respecting different level of constraints according to the model. We are going to consider four different types of models: \begin{enumerate} \item {\it Unconstrained model.} The only constraint of this model is to ensure that the total number of trips $\tilde{N}$ generated by the model is equal to the total number of trips $N$ observed in the data. In this model, the $N$ trips are randomly sample from the multinomial distribution, \begin{equation} \displaystyle \mathcal{M}\left(N,\left(p_{ij}\right)_{1 \leq i,j \leq n}\right) \label{NC} \end{equation} \item {\it Production constrained model.} This model ensures that the number of trips ''produced'' by a census unit is preserved. For each unit $i$, $O_i$ trips are produced from the multinomial distribution, \begin{equation} \displaystyle \mathcal{M}\left(O_i,\left(\frac{p_{ij}}{\sum_{k=1}^n p_{ik}}\right)_{1 \leq j \leq n}\right) \label{PCM} \end{equation} \item {\it Attraction constrained model.} This model ensures that the number of trips ''attracted'' by a unit is preserved. For each census unit $j$, $D_j$ trips are attracted from the multinomial distribution, \begin{equation} \displaystyle \mathcal{M}\left(D_j,\left(\frac{p_{ij}}{\sum_{k=1}^n p_{kj}}\right)_{1 \leq i \leq n}\right) \label{ACM} \end{equation} \item {\it Doubly constrained model.} This model, also called production-attraction constrained model ensures that both the trips attracted and generated by a census unit are preserved using two balancing factors $K_i$ and $K_j$ calibrated with the \textit{Iterative Proportional Fitting} procedure \cite{Deming1940}. The relation between $K_i$, $K_j$, $p_{ij}$ and the trip flows is given by \begin{equation} \left\{ \begin{array}{l} \tilde{T}_{ij} = K_i \, K_j \, p_{ij} \\ \sum_{j=1}^n \tilde{T}_{ij}=O_i, \,\,\sum_{i=1}^n \tilde{T}_{ij}=D_j \\ \end{array} \right. \label{DC} \end{equation} Unlike the unconstrained and single constrained models, the doubly constrained model is a deterministic model. Therefore, the simulated network $\tilde{T}$ is a fully connected network in which the flows are real numbers instead of integers. This can be problematic since we want to study the capacity of both the gravity and the radiation approaches to preserve the topological structure of the original network. To bypass this limitation $N$ trips are randomly sample from the multinomial distribution, \begin{equation} \displaystyle \mathcal{M}\left(N,\left(\frac{\tilde{T}_{ij}}{\sum_{k,l=1}^n \tilde{T}_{kl}}\right)_{1 \leq i,j \leq n}\right) \label{DC2} \end{equation} \end{enumerate} \subsection*{Goodness-of-fit measures} \paragraph*{Common part of commuters} We calibrate the parameters $\beta$, $\gamma$ and $\alpha$ using the common part of commuters (CPC) introduced in \cite{Gargiulo2012,Lenormand2012}: \begin{equation} \displaystyle CPC(T,\tilde{T}) = \frac{2\sum_{i,j=1}^n min(T_{ij},\tilde{T}_{ij})}{\sum_{i,j=1}^n T_{ij} + \sum_{i,j=1}^n \tilde{T}_{ij}} \label{CPC} \end{equation} This indicator is based on the S{\o}rensen index \cite{Sorensen1948}. It varies from $0$, when no agreement is found, to $1$, when the two networks are identical. In our case, the total number of commuters $N$ is preserved, therefore the Equation (\ref{CPC}) can be simplified to \begin{equation} \displaystyle CPC(T,\tilde{T}) = 1 - \frac{1}{2}\frac{\sum_{i,j=1}^n |T_{ij}-\tilde{T}_{ij}|}{N} \label{CPC2} \end{equation} which represents the percentage of good prediction as defined in \cite{Lenormand2013}. In order to assess the robustness of the results regarding the choice of goodness-of-fit measures, we also test the results obtained with the normalized root mean square error, \begin{equation} \displaystyle NRMSE(T,\tilde{T}) = \frac{\sum_{i,j=1}^n (T_{ij}-\tilde{T}_{ij})^2}{\sum_{i,j=1}^n T_{ij}} \label{NRMSE} \end{equation} and the information gain statistic, \begin{equation} \displaystyle I(T,\tilde{T}) = \sum_{i,j=1}^n \frac{T_{ij}}{N}ln\left(\frac{T_{ij}}{\tilde{T}_{ij}}\right) \label{I} \end{equation} \paragraph*{Common part of links} The ability of the models to recover the topological structure of the original network can be assessed with the common part of links (CPL) defined as \begin{equation} \displaystyle CPL(T,\tilde{T}) = \frac{2\sum_{i,j=1}^n \mathds{1}_{T_{ij}>0} \cdot \mathds{1}_{\tilde{T}_{ij}>0}}{\sum_{i,j=1}^n \mathds{1}_{T_{ij}>0} + \sum_{i,j=1}^n \mathds{1}_{\tilde{T}_{ij}>0}} \label{CPL} \end{equation} where $\mathds{1}_X$ is equal to one if the condition $X$ is fulfilled and zero otherwise. The common part of links measures the proportion of links in common between the simulated and the observed networks (i.e. links such as $T_{ij}>0$ and $\tilde{T}_{ij}>0$). It is null if there is no link in common and one if both networks are topologically equivalent. \begin{figure*} \begin{center} \includegraphics[width=12cm]{Fig3} \caption{\textbf{Common part of commuters according to the unconstrained models, the gravity and intervening opportunities laws for the eight case studies.} The circles represent the normalized gravity law with the exponential distance decay function (the circles with a cross inside represent the original version); The squares represent the normalized gravity law with the power distance decay function (the squares with a cross inside represent the original version); The point down triangles represent the Schneider's intervening opportunities law; The green diamonds represent the extended radiation law; The purple triangles represent the original radiation law. Error bars represent the minimum and the maximum values observed in the $100$ realizations but in most cases they are too close to the average to be seen. \label{Fig3}} \end{center} \end{figure*} \paragraph*{Common part of commuters according to the distance} In order to measure the similarity between the observed commuting distance distribution and the ones simulated with the models, we introduce the common part of commuters according to the distance (CPC$_d$). Let us consider $N_k$ the number of individuals having a commuting distance in the bin between $2k-2$ and $2k$ kms. The CPC$_d$ is equal to the CPC based on $N_k$ instead of $T_{ij}$ \begin{equation} \displaystyle CPC_d(T,\tilde{T}) = \frac{\sum_{k=1}^{\infty} min(N_{k},\tilde{N}_{k})}{N} \label{CPCd} \end{equation} \section*{RESULTS} In this section, we compare the five laws: gravity with an exponential or a power distance decay function, the Schneider's intervening opportunities law and the original and the extended radiation laws. We test these laws against empirical data coming from eight different case studies using four constrained models to estimate the flows. For each constrained model, the parameters $\beta$, $\gamma$ and $\alpha$ are calibrated so as to maximize the CPC. Since the models are stochastic, we consider an average CPC value measured over $100$ replications of the trip distribution. Similarly, all the goodness-of-fit measures are obtained by calculating the average measured over $100$ network replications. It is important to note that the networks generated with the constrained models are very stable, the stochasticity of the models does not affect the statistical properties of the network. Therefore, the goodness-of-fit measures does not vary much with the different realizations of the multinomial sampling. For example, within the $100$ network instances for all models and case studies, the CPC varies, at most, by $0.09\%$ around the average. \begin{figure*} \begin{center} \includegraphics[width=12cm]{Fig4} \caption{\textbf{Performance of the unconstrained model (UM), the production constrained model (PCM), the attraction constrained model (ACM) and the doubly constrained model (DCM) according to the gravity and the intervening opportunities laws (a)-(c) and a uniform distribution (d).} (a) Average CPC. (b) Average CPL. (c) Average CPC$_d$. The red circles represent the normalized gravity law with the exponential distance decay function; The blue squares represent the normalized gravity law with the power distance decay function; The point down triangles represent the Schneider's intervening opportunities law; The green diamonds represent the extended radiation law; The purple triangles represent the original radiation law. The grey point down triangles represent the uniform distribution, form dark to light grey, the CPC, the CPL and the CPC$_d$. \label{Fig4}} \end{center} \end{figure*} \subsection*{Estimation of commuting flows} Figure \ref{Fig3} displays the common part of commuters obtained with the different laws and models for the eight case studies. Globally, the gravity laws give better results than the intervening opportunities laws. For the gravity laws, the results improve with the exponential rather than with the power distance decay function. For the intervening opportunities laws, the extended radiation law outperforms the original one and achieves slightly better results than the Schneider law. In the top left panel, we observe the results for the unconstrained model. In this case, the extended radiation law and the Schneider law give better results than the gravity ones for most case studies. However, these better performances are due to the normalization factor used in Equation \ref{IO}. Indeed, this normalization implies that the probability of having a trip originating in a census unit $i$ is proportional to the population of $i$, which is not necessarily the case for the gravity laws. If we use the same type of normalization for the gravity trip distribution law $p_{ij}$ (Equation \ref{NGrav}), we observe that the ''normalized'' gravity laws give better results than the intervening opportunities laws. In the following, we will refer to the normalized version when mentioning the gravity law. \begin{equation} p_{ij} \propto m_i \frac{m_j f(d_{ij})}{\sum_{k=1}^n m_k f(d_{ik})} ,\,\,\,\,\,\,i\ne j \label{NGrav} \end{equation} To compare the constrained models performances, we plot in Figure \ref{Fig4}a the CPC obtained with the four models according to the laws averaged over the eight case studies. As expected, more constrained the model is, higher the CPC becomes. Unconstrained models are able to reproduce on average around $45\%$ of the observed commuting network against $65\%$ for the doubly constrained model. It is interesting to note that, the attraction constrained model gives better results than the production constrained model. This can be explained by the fact that the job demand is easier to estimate than the job offer, which can be related to extra economic questions. This is in agreement with the results obtained with a uniform distribution ($p_{ij}\propto 1$) plotted in Figure \ref{Fig3}d. Although the results obtained with the normalized root mean square error and the information gain statistic are very similar to the ones obtained with the CPC, it is worth noting that globally the extended radiation law gives smaller normalized root mean square error values than the normalized gravity laws with the unconstrained model (see Table \ref{tab2} for more details about the laws exhibiting the best performances). \begin{figure*} \begin{center} \includegraphics[width=12cm]{Fig5} \caption{\textbf{Ratio between the simulated and the observed number of links according to the unconstrained models, the gravity and intervening opportunities laws for the eight case studies.} The red circles represent the normalized gravity law with the exponential distance decay function; The blue squares represent the normalized gravity law with the power distance decay function; The point down triangles represent the Schneider's intervening opportunities law; The green diamonds represent the extended radiation law; The purple triangles represent the original radiation law. Error bars represent the minimum and the maximum but in most cases they are too close to the average to be seen. \label{Fig5}} \end{center} \end{figure*} \begin{figure*} \begin{center} \includegraphics[width=14cm]{Fig6} \caption{\textbf{Probability density function of the commuting distance distribution observed in the data and simulated with the production constrained model.} (a) France and (b) United States. The red circles represent the normalized gravity law with the exponential distance decay function; The blue squares represent the normalized gravity law with the power distance decay function; The point down triangles represent the Schneider's intervening opportunities law; The green diamonds represent the extended radiation law; The purple triangles represent the original radiation law. The black stars represent the census data. \label{Fig6}} \end{center} \end{figure*} \begin{figure*} \begin{center} \includegraphics[width=14cm]{Fig7} \caption{\textbf{Performance of the constrained models according to the gravity and the intervening opportunities laws.} (a) Average CPC. (b) Average CPL. (c) Average CPC$_d$. The red circles represent the normalized gravity law with the exponential distance decay function; The blue squares represent the normalized gravity law with the power distance decay function; The point down triangles represent the Schneider's intervening opportunities law; The green diamonds represent the extended radiation law; The purple triangles represent the original radiation law. \label{Fig7}} \end{center} \end{figure*} \subsection*{Structure of the commuting network} We consider next the capacity of the gravity and the intervening opportunities laws to recover the structure of the empirical commuting networks. Figure \ref{Fig4}b shows the average common part of links obtained with the different laws and models. We observe that the gravity law with an exponential distance decay function outperforms the other laws when the unconstrained and the single constrained models are used to generate the flows. However, when the doubly constrained models is considered, very similar results are obtained except for the Schneider law and the original version of the radiation law. In any case, the common part of links never exceed $0.55$, this can be explained by the fact that, globally, the different laws fail at reproducing the number of links. Indeed, as it can be seen in Figure \ref{Fig5}, which displays the ratio between the number of links generated with the models and the observed ones, the radiation law and the exponential gravity law tend to underestimate the number of links whereas the extended radiation law and the power gravity law overestimate it. The flows networks generated with the Schneider law have globally a number of links closer to the observed values than the networks generated with the other laws. \subsection*{Commuting distance distribution} Another important feature to study is the commuting distance distribution. Figure \ref{Fig4}c shows the average common part of commuters according to the distance obtained with the different models and laws. The results obtained with the exponential gravity law are slightly better than the ones obtained with the other laws. However, the results are globally good, and except the original radiation law, the gravity and intervening opportunities laws are able to reproduce more than $80\%$ of the commuting distances. To go further, we plot in Figure \ref{Fig6} the observed and the simulated commuting distance distributions obtained with the production constrained model in France and United States. We can clearly see that the exponential gravity law is better for estimating commuting distances which are below a certain threshold equal to $50$ km in France and $150$ km in United States. After this threshold, it fails at estimating the commuting flows as it is the case for the Schneider's intervening opportunities law. On the contrary, the radiation laws and the gravity law with a power distance decay function are able to estimate commuting flows at large distances. However, we have to keep in mind that the proportion of commuters traveling such long distances are less than $6\%$ in France and $5\%$ in United States. Besides, one can legitimately wonder whether these long travels are repeated twice per day or if they may be an artifact of the way in which the census information is collected. \subsection*{Robustness against changes in the inputs} In Equations \ref{grav} and \ref{IO}, the population is used as input instead of the outflows $O_i$ and the inflows $D_j$, which are usually preferred since they are a more faithful reflection of the job demand and offer. The job demand and offer are considered to be related to the population but the proportion is rarely direct (it needs to be adjusted with an exponent) and according to the case study, the fit can be bad. In order to assess the robustness of the results to changes in the input data, we consider the results obtained with the gravity law (Equation \ref{gravOi}) and the general intervening opportunities law (Equation \ref{IOOi}) based on the in and out flows. In the case of the intervening opportunities laws, $s_{ij}$ is the number of in-commuters in a circle of radius $d_{ij}$ centered in $i$ (excluding the source and destination) and the role of the populations in the gravity law is taken by $O_i$ and $D_j$. To be more specific, the gravity law becomes: \begin{figure*} \begin{center} \includegraphics[scale=0.6]{Fig8} \caption{\textbf{Parameter value as a function of the average unit surface.} (a) Normalized gravity laws with an exponential distance decay function. (b) Normalized gravity laws with a power distance decay function. (c) Schneider's intervening opportunities law. (d) Extended radiation law. \label{Fig8}} \end{center} \end{figure*} \begin{equation} p_{ij} \propto O_i \frac{D_j f(d_{ij})}{\sum_{k=1}^n D_k f(d_{ik})},\,\,\,\,\,\,i\ne j \label{gravOi} \end{equation} while the intervening opportunities law can be written as \begin{equation} p_{ij} \propto O_i \frac{\mathbb{P}(1|D_i,D_j,s_{ij})}{\sum_{k=1}^n\mathbb{P}(1|D_i,D_k,s_{ik})},\,\,\,\,\,\,i\ne j \label{IOOi} \end{equation} Figure \ref{Fig7} displays the average CPC, CPL and CPC$_d$ obtained with the four models according to the laws averaged over the eight case studies. As it can be seen on these plots the results observed in Figure \ref{Fig4} are quite stable to changes in the input data. \begin{figure*} \begin{center} \includegraphics[width=12cm]{Fig9} \end{center} \caption{\textbf{Observed commuting distance distributions.} (a) Probability density function of the commuting distance distribution according to the case study. (b) Average commuting distance as a function of the average unit surface. (c) Pearson's measure of Kurtosis as a function of the average unit surface. \label{Fig9}} \end{figure*} \subsection*{Parameter calibration in the absence of detailed data} An important issue with the estimation of commuting flows is the calibration of the parameters. Indeed, how to calibrate the parameters $\beta$, $\gamma$ and $\alpha$ in the absence of detailed data? This problem has already been tackled in previous studies \cite{Balcan2009,Lenormand2012,Yang2014}. In \cite{Lenormand2012}, the authors have shown that, in the case of the exponential form of the gravity law, the value of $\beta$ can be directly inferred from the average census unit surface with the relationship $\beta=0.3\,<S>^{-0.18}$. Similarly, \cite{Yang2014} proposed to estimate the value of $\alpha$ in the extended radiation law with the average spatial scale $l=\sqrt{<S>}$ using the functional relationship $\alpha=0.0085\,l^{1.33}$. In Figure \ref{Fig8}, we plot the calibrated value of $\beta$, $\gamma$ and $\alpha$ obtained with the laws based on the population as a function of the average census unit surface $<S>$ for the four constrained models. Figure \ref{Fig8}a shows the relationship obtained with the gravity law with an exponential distance decay function. We observe that the coefficients of the relationship are the same than the ones obtained in \cite{Lenormand2012}. This is not surprising since three datasets out of the six used here coincide. In this case, the value of $\beta$ decreases with larger spatial scales. This can be explained by the fact that $\beta$ in the exponential form of the gravity law is inversely proportional to the average commuting distance and such distance increases with the average unit surface since the shorter distance trips are excluded (Figure \ref{Fig9}b). Figure \ref{Fig8}b displays the same relationship for the power form of the gravity law, in this case the value of $\beta$ increases with the scale to fit the tail of the commuting distance distribution. In fact, we observe in the data that, globally, the steepness of the curve (measured with the Pearson's Kurtosis) increases with the scale (Figure \ref{Fig9}c). Figure \ref{Fig8}c shows the results obtained with the parameter $\gamma$ of the Schneider intervening opportunities law. The value of $\gamma$ seems to decrease slightly with the scale but the existence of a relationship between the two variables is not significant. Finally, we plot in Figure \ref{Fig8}d the relationship between the parameter $\alpha$ of the extended radiation law and the average unit surface, the exponent obtained is similar to the one reported in \cite{Yang2014}. In the extended version of the radiation law, the parameter $\alpha$ controls the effect of the number of job opportunities between home and work on the job selection. In particular, for a given number of job opportunities, higher the value of $\alpha$, higher the probability of not accepting a job among these opportunities. This implies that $\alpha$ is directly proportional to the average commuting distance and, by extension, to the average unit surface (Figure \ref{Fig9}b). As mentioned in \cite{Yang2014}, the value of $\alpha$ is also influenced by the heterogeneity of the distribution of opportunities. As it can be seen in Figure \ref{Fig8}d, the three case studies presenting the largest deviation from the regression line are also the most heterogeneous ones (Paris, Spain and Italy which have the second, fourth and fifth smallest average unit surface, respectively). As in \cite{Lenormand2012}, it is possible to assess the quality of the parameter estimation by measuring its impact on the CPC. The idea is to measure for each law, model and case study, the difference between the CPC obtained with the calibrated value of the parameter and the CPC obtained with the estimated one. The parameter value is estimated with the regression model obtained with the laws based on the population and the difference between the original CPC and the ``estimated'' one is measured with the absolute percentage error (i.e. absolute error as a percent of the original CPC value). In order to assess the robustness of the estimation in changes in the input we have also measured the CPC percentage error obtained with an estimation of the parameters for the laws based on the in/out flows. Note that in this case the parameters' estimation come also from regression models obtained with the laws based on the population. The results are presented in Figure \ref{Fig10}. The CPC percentage errors obtained with the gravity laws are globally small and robust to the change of inputs. They vary at most by $4\%$ of the original CPC values for the exponential form and $10\%$ for the power form. Similar results are obtained for the extended radiation law where the majority of the errors are below $10\%$ and vary at most by $22\%$ of the original CPC values. This means that for these laws the parameter value can be directly inferred from the scale, and thus, commuting networks at different scales can be generated without requiring detailed data for calibration. The situation is different for the Schneider's intervening opportunities law very sensible to change in inputs. For the law based on the population, the errors obtained for the CPC are reasonable, the majority of them are below $10\%$. However when we try to estimate the value of $\gamma$ for the law based on in/out flows with a regression model obtained with the law based on the population the CPC percentage error increases dramatically, meaning that the value of $\gamma$ is highly dependent on the variable uses as a surrogate measure of the number of ``real'' opportunities. \begin{figure} \begin{center} \includegraphics[width=\linewidth]{Fig10} \end{center} \caption{\textbf{CPC absolute percentage error.} Boxplots of the absolute percentage error between the CPC obtained with a calibrated value of the parameters and the CPC obtained with values estimated with the regression models obtained with the laws based on the population. The notched and classic boxplots represent the percentage error obtained with the laws based on the population and the number of in/out flows, respectively. The boxplot is composed of the first decile, the lower hinge, the median, the upper hinge and the last decile. \label{Fig10}} \end{figure} \section*{DISCUSSION} In summary, we have compared different versions of the gravity and the intervening opportunities laws. These two approaches have already been compared in the past but using different inputs, number of parameters and/or type of constraints. For this reason, the aim of this work has been to bring some light into the discussion by systematically comparing the intervening opportunities and the gravity laws taking care of dissociating the probabilistic laws and the constrained models used to generate the trip networks. We have shown that, globally, the gravity approach outperforms the intervening opportunities approach to estimate the commuting flows but also to preserve the commuting network structure and to fit of the commuting distance distribution. More particularly the gravity law with the exponential distance decay function give better results than the other laws even if it fails at estimating commuting flows at large distances. The reason for this is that most of the travels are short-range, which are better capture by the gravity law with exponential decay in the distance. The large distance commuting trips are few and probably associated with weekly rather than daily commuting. To handle these different types of mobility, it may be necessary to investigate further the nature of the trips and to consider even mixed models for different displacement lengths. The superiority of the gravity law is very robust to the choice of goodness-of-fit measure and to the change of input. Regarding a more practical issue which is the calibration of the parameters without detailed data, we shown that the parameter values can be estimated with the average unit surface. We also demonstrated that, except for the Schneider's intervening opportunities law, this estimation is robust to changes in input data. This allows for a direct estimation of the commuting flows even in the absence of detailed data for calibration. Although more research is needed to investigate the link between mobility, distances and intervening opportunities for other types of movements such as migrations, tourism or freight distribution, the distance seems to play a more important role than the number of intervening opportunities in work location choices. More specifically, the superiority of the gravity approach seems to be due to its flexibility, and, what was considered as a weakness by \cite{Simini2012}, the lack of theoretical guidance to choose the distance-decay function, emerges as a strength. Indeed, people do not choose their place of work as they choose their new place of residence, therefore, having the possibility of adjusting the effect of the distance in the decision process is clearly an advantage which does not apply to the intervening opportunities approach in its present form. The objective of this work has been to establish the basis for a fair and systematic comparison separating probabilistic laws and different degrees of constraint trip generation models. Our results emphasize the importance of identifying and separating the different processes involved in the estimation of flows between locations for the comparison of spatial interaction models. Indeed, the use of these models in contexts such as urban and infrastructure planning, where large investments are at stake, imposes the need for the selection of the aptest model before taking decisions based on its results. The software package to generate spatial networks using the approach described in the paper can be downloaded from \url{https://github.com/maximelenormand/Trip-distribution-laws-and-models}. \section*{ACKNOWLEDGEMENTS} Partial financial support has been received from the Spanish Ministry of Economy (MINECO) and FEDER (EU) under the project INTENSE@COSYP (FIS2012-30634), and from the EU Commission through projects INSIGHT. The work of ML has been funded under the PD/004/2013 project, from the Conselleria de Educación, Cultura y Universidades of the Government of the Balearic Islands and from the European Social Fund through the Balearic Islands ESF operational program for 2013-2017. JJR acknowledges funding from the Ram\'on y Cajal program of MINECO. \bibliographystyle{unsrt}
{ "attr-fineweb-edu": 2.951172, "attr-cc_en_topic": 0, "domain": "arxiv" }
BkiUbZ04ubnjoqgDU4BY
\section{Introduction} The Pythagorean Win/Loss Formula, also known as the Pythagorean formula or Pythagorean expectation, was invented by Bill James in the late 1970s to use a team's observed runs scored and allowed to predict their winning percentage. Originally given by \be {\rm Won-Loss\ Percentage} \ = \ \frac{{\rm RS}^2}{{\rm RS}^2+{\rm RA}^2}, \ee with ${\rm RS}$ the runs scored and ${\rm RA}$ the runs allowed, it earned its name from the similarity of the denominator to the sums of squares in the Pythagorean formula from geometry.\footnote{Though of course the more natural shape in baseball is the diamond, save for some interesting stadium features, such as the triangle in Fenway Park.} Later versions found better agreement by replacing the exponent 2 with numbers near 1.83, leading to an average error of about three to four games per season. The formula is remarkably simple, requiring only the runs scored and allowed by a team in a season, and the calculation (even with the improved exponent) is easily done on any calculator or phone. It is one of the most commonly listed expanded statistics on websites. One reason for its prominence is its accuracy in predicting a team's future performance through a simple calculation and not through computationally intense simulations. Additionally, it allows sabermetricians and fans to assess a manager's impact on a team, and estimate the value of new signings by seeing how their presence would change the predictions. Because of its widespread use and utility, it is very desirable to have improvements. In his senior thesis, the first named author, supervised by the second named author, explored various attempted improvements to the Pythagorean formula. These included replacing the observed runs scored and allowed each game with adjusted numbers, with the adjustments coming from a variety of sources (such as ballpark effects, game state\footnote{For example, if a team is up by a large amount late in a game, they frequently use weaker relief pitchers and rest some starters, while the trailing team makes similar moves; thus the offensive productions from this point onward may not be indicative of the team's true abilities and a case can be made to ignore such data.}, WHIP, ERA+, and WAR of the pitcher, ...). As these led to only minor improvements\footnote{The only adjusted formula that was at least on par or very near the accuracy of the original Pythagorean W/L Formula was that of ballpark factor.} (see \cite{Luo} for a detailed analysis of these and other adjustments), we turned our attention to the successful theoretical model used by Miller and his colleagues \cite{DaMil1,DaMil2,Mil,MCGLP}, where it was assumed runs scored and allowed were independently drawn from Weibull distributions with the same shape parameter. Recall the three parameter Weibull density is given by \be \twocase{f(x;\alpha,\beta,\gamma) \ = \ }{\frac{\gamma}{\alpha}\ ((x-\beta)/\alpha)^{\gamma-1}\ e^{- ((x-\beta)/\alpha)^{\gamma}}}{if $x \ge \beta$}{0}{otherwise.} \ee The effect of $\alpha$ is to control the spread of the output, while $\beta$ translates the distribution. The most important parameter is $\gamma$, which controls the shape. See Figure \ref{fig:weibullplots} for some plots. \begin{center} \begin{figure} \includegraphics[scale=0.65]{weibulldist-eps-converted-to} \caption{\label{fig:weibullplots} The varying distributions of the Weibull family with $\alpha=1$ and $\beta=0$.} \end{figure} \end{center} Their success is due to the fact that the three parameter Weibull is a very flexible family of distributions, capable of fitting many one hump distributions, including to a statistically significant degree the observed runs scored and allowed data. Miller chose to use Weibulls for two reasons. First, they lead to double integrals for the probabilities that can be evaluated in closed form. This is extremely important if we desire a simple expression such as the one posited by James (see \cite{HJM} for alternative simple formulas). Second, in addition to being flexible, special values of the Weibulls correspond to well-known distributions ($\gamma = 1$ is an exponential, while $\gamma = 2$ is the Rayleigh distribution). The goal of this paper is to show that one can significantly improve the predictive power if instead of modeling runs scored and allowed as being drawn from independent Weibulls, we instead model them as being drawn from linear combinations of independent Weibulls. The advantage of this approach is that we are still able to obtain tractable double integrals which can be done in closed form. There is a cost, however, as now more analysis is needed to find the parameters and the correct linear combinations. While this results in a more complicated formula than the standard variant of James' formula, it is well worth the cost as on average it is better by one game per season (thus a typical error is 3 games per team per year, as opposed to 4 which is the typical result in the current formula). Comparing it to \url{baseball-reference.com}'s calculated Expected WL from 1979 to 2013, we find that the linear combination of Weibulls is approximately .06 of a game better, a difference is not statistically significant. However, there are noticeable trends that appear in certain eras. \section{Theoretical Calculations} \label{sec:theoretical} \subsection{Preliminaries} It is important to note that we assume that runs scored and allowed are taken from continuous, not discrete, distributions. This allows us to deal with continuous integrals rather than discrete sums, which most of the time leads to easier calculations. While a discrete distribution would probably more effectively map runs in baseball, the assumption of drawing runs from a continuous distribution allows for more manageable calculations, and is a very sensible estimate of those runs observed. It also will lead to closed form expressions, which are much easier to work with and allow us to avoid having to resort to simulations. The Weibulls lead to significantly easier calculations because if we have a random variable $X$ chosen from a Weibull distribution with parameters $\alpha$, $\beta$, and $\gamma$, then $X^{1/\gamma}$ is exponentially distributed with parameter $\alpha^\gamma$; thus, a change of variables yields a simpler integral of exponentials, which can be done in closed form (see Appendix 9.1 in \cite{MCGLP} for details). In all arguments below we always take $\beta=-1/2$, though we often write $\beta$ to keep the discussion more general for applications to other sports. The reason we do this is that we use procedures such as the Method of Least Squares to find the best fit parameters, and this requires binning the observed runs scored and allowed data. As baseball scores are discrete, there are issues if these values occur at the boundary of bins; it is much better if they are at the center. By taking $\beta=-1/2$ we break the data into bins \begin{equation} \label{bins} \left[-\frac{1}{2},\ \frac{1}{2}\right),\ \ \ \left[\frac{1}{2},\ \frac{3}{2}\right),\ \ \ \left[\frac{3}{2},\ \frac{5}{2}\right),\ \ \ \cdots. \end{equation} Our final assumption is that runs scored and runs allowed are independent. This obviously cannot be true, as a baseball game never ends in a tie. For example, if the Orioles and the Red Sox are playing and the O's score 5 runs, then the Sox cannot score 5. Statistical analyses support this hypothesis; see the independence tests with structural zeros in \cite{Mil} or Appendix 9.2 in \cite{MCGLP} for details. We end this subsection with the mean and the variance of the Weibull. The calculation follows from standard integration (see the expanded version of \cite{Mil} or Appendix 9.1 in \cite{MCGLP} for a proof of the formula for the mean; the derivation of the variance follows in a similar fashion). \begin{lemma} \label{weibullmv} Consider a Weibull with parameters $\alpha, \beta, \gamma$. The mean, $\mu_{\alpha,\beta,\gamma}$, equals \be \mu_{\alpha,\beta,\gamma}\ = \ \alpha \Gamma (1+\gamma^{-1})+\beta \label{weibullmean} \ee while the variance, $\sigma^2_{\alpha,\beta,\gamma}$, is \be \sigma^2_{\alpha,\beta,\gamma}\ = \ \alpha^2 \Gamma(1+2\gamma^{-1})-\alpha^2\Gamma(1+\gamma^{-1})^2, \ee where $\Gamma(x)$ is the Gamma function, defined by $\Gamma(x)=\int_0^\infty e^{-u}u^{x-1}du$. \end{lemma} \subsection{Linear Combination of Weibulls} We now state and prove our main result for a linear combination of two Weibulls, and leave the straightforward generalization to combinations of more Weibulls to the reader. The reason such an expansion is advantageous and natural is that, following \cite{Mil}, we can integrate pairs of Weibulls in the regions needed and obtain simple closed form expressions. The theorem below also holds if $\gamma < 0$; however, in that situation the more your runs scored exceeds your runs allowed, the worse your predicted record due to the different shape of the Weibull (in all applications of Weibulls in survival analysis, the shape parameter $\gamma$ must be positive). \begin{theorem} \label{combweib} Let the runs scored and allowed per game be two independent random variables drawn from linear combinations of independent Weibull distributions with the same $\beta$'s and $\gamma$'s. Specifically, if $W(t;\alpha,\beta,\gamma)$ represents a Weibull distribution with parameters $(\alpha,\beta,\gamma)$, and we choose non-negative weights\footnote{If we had more terms in the linear combination, we would simply choose non-negative weights summing to 1.} $0\le c_i, c_j' \le 1$ (so $c_1 + c_2 = 1$ and $c_1'+c_2' = 1$), then the density of runs scored, $X$ is \be f(x;\alpha_{{\rm RS}_1},\alpha_{{\rm RS}_2}, \beta, \gamma,c_1, c_2) \ = \ c_1 W(x;\alpha_{{\rm RS}_1},\beta,\gamma)+c_2 W(\alpha_{{\rm RS}_2},\beta,\gamma)\ee and runs allowed, $Y$, is \be f(y;\alpha_{{\rm RA}_1}, \alpha_{{\rm RA}_2}, \beta, \gamma,c_1', c_2') \ = \ c_1' W(y;\alpha_{{\rm RA}_1},\beta,\gamma)+c_2'W(\alpha_{{\rm RA}_2},\beta,\gamma).\ee In addition, we choose $\alpha_{{\rm RS}_1}$ and $\alpha_{{\rm RS}_2}$ so that the mean of $X$ is ${\rm RS_{\rm obs}}$ and choose $\alpha_{{\rm RA}_1}$ and $\alpha_{{\rm RA}_2}$ such that the mean of $Y$ is ${\rm RA_{\rm obs}}$. For $\gamma>0$, we have \bea & & {\rm Won-Loss\ Percentage}(\alpha_{{\rm RS}_1}, \alpha_{{\rm RS}_2}, \alpha_{{\rm RA}_1}, \alpha_{{\rm RA}_2},\beta,\gamma,c_1, c_2, c_1',c_2') \nonumber\\ & & \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ =\ c_1 c_1' \frac{\alpha_{{\rm RS}_1}^\gamma}{\alpha_{{\rm RS}_1}^\gamma+\alpha_{{\rm RA}_1}^\gamma} +c_1 c_2'\frac{\alpha_{{\rm RS}_1}^\gamma}{\alpha_{{\rm RS}_1}^\gamma+\alpha_{{\rm RA}_2}^\gamma} \nonumber\\ & & \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ +c_2 c_1' \frac{\alpha_{{\rm RS}_2}^\gamma}{\alpha_{{\rm RS}_2}^\gamma +\alpha_{{\rm RA}_1}^\gamma} +c_2 c_2'\frac{\alpha_{{\rm RS}_2}^\gamma}{\alpha_{{\rm RS}_2}^\gamma+\alpha_{{\rm RA}_2}^\gamma} \nonumber\\ & & \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ = \ \sum_{i=1}^2 \sum_{j=1}^2 c_i c_j' \frac{\alpha_{{\rm RS}_i}^\gamma}{\alpha_{{\rm RS}_i}^\gamma+\alpha_{{\rm RA}_j}^\gamma}. \eea \end{theorem} \begin{proof} As the means of $X$ (runs scored) and $Y$ (runs allowed) are ${\rm RS_{\rm obs}}$ and ${\rm RA_{\rm obs}}$, respectively, and the random variables are drawn from linear combinations of independent Weibulls, by Lemma \ref{weibullmv} \begin{align} {\rm RS_{\rm obs}} &\ =\ c_1(\alpha_{{\rm RS}_1}\Gamma(1+\gamma^{-1})+\beta)+(1-c_1)(\alpha_{{\rm RS}_2}\Gamma(1+\gamma^{-1})+\beta) \nonumber \\ {\rm RA_{\rm obs}} &\ =\ c_1'(\alpha_{{\rm RA}_1}\Gamma(1+\gamma^{-1})+\beta)+(1-c_1')(\alpha_{{\rm RA}_2}\Gamma(1+\gamma^{-1})+\beta). \end{align} We now calculate the probability that $X$ exceeds $Y$. We constantly use the fact that the integral of a probability density is 1. We need the two $\beta$ and the two $\gamma$'s to be equal in order to obtain closed form expressions.\footnote{If the $\beta$'s are differen then in the integration below we might have issues with the bounds of integration, while if the $\gamma$'s are unequal we get incomplete Gamma functions, though for certain rational ratios of the $\gamma$'s these can be done in closed form.} We find \begin{align} & {\rm Prob}(X>Y) = \int_{x=\beta}^\infty\int_{y=\beta}^x f(x; \alpha_{{\rm RS}_1},\alpha_{{\rm RS}_2},\beta,\gamma, c_1, c_2)f(y; \alpha_{{\rm RA}_1},\alpha_{{\rm RA}_2},\beta,\gamma, c_1',c_2')dydx \nonumber \\ &=\ \sum_{i=1}^2\sum_{j=1}^2 \int_{x=0}^\infty \int_{y=0}^x c_ic_j' \frac{\gamma}{\alpha_{{\rm RS}_i}}\left(\frac{x}{\alpha_{{\rm RS}_i}}\right)^{\gamma-1}e^{-(\frac{x}{\alpha_{{\rm RS}_i}})^\gamma}\frac{\gamma}{\alpha_{{\rm RA}_j}}\left(\frac{x}{\alpha_{{\rm RA}_j}}\right)^{\gamma-1}e^{-(\frac{x}{\alpha_{{\rm RA}_j}})^\gamma}dydx \nonumber \\ & = \sum_{i=1}^2\sum_{j=1}^2 c_ic_j' \int_{x=0}^\infty \frac{\gamma}{\alpha_{{\rm RS}_i}} \left( \frac{x}{\alpha_{{\rm RS}_i}} \right)^{\gamma-1}e^{-(\frac{x}{\alpha_{{\rm RS}_i}})^\gamma}\left[ \int_{y=0}^x \frac{\gamma}{\alpha_{{\rm RA}_j}} \left( \frac{y}{\alpha_{{\rm RA}_j}} \right)^{\gamma-1} e^{-(\frac{y}{\alpha_{{\rm RA}_j}})^\gamma} dy \right] dx \nonumber \\ &=\ \sum_{i=1}^2\sum_{j=1}^2 c_ic_j' \int_{x=0}^\infty \frac{\gamma}{\alpha_{{\rm RS}_i}} \left( \frac{x}{\alpha_{{\rm RS}_i}} \right)^{\gamma-1}e^{-(\frac{x}{\alpha_{{\rm RS}_i}})^\gamma} \ast \left[ 1- e^{-(\frac{x}{\alpha_{{\rm RA}_j}})^\gamma} \right] dx \nonumber \\ &=\ \sum_{i=1}^2\sum_{j=1}^2 c_ic_j' \left[ 1- \int_{x=0}^\infty \frac{\gamma}{\alpha_{{\rm RS}_i}} \left( \frac{x}{\alpha_{{\rm RS}_i}}\right)^{\gamma-1} e^{-(\frac{x}{\alpha_{{\rm RS}_i}})^\gamma-(\frac{x}{\alpha_{{\rm RA}_j}})^\gamma} dx\right]. \label{eqqq} \end{align} We set \begin{align} \frac{1}{\alpha_{ij}^\gamma}\ = \ \frac{1}{\alpha_{{\rm RS}_i}^\gamma}+\frac{1}{\alpha_{{\rm RA}_j}^\gamma}\ = \ \frac{\alpha_{{\rm RS}_i}^\gamma+\alpha_{{\rm RA}_j}^\gamma}{\alpha_{{\rm RS}_i}^\gamma \alpha_{{\rm RA}_j}^\gamma} \nonumber \\ \nonumber \end{align} for $1\leq i,j\leq 2$, so that \eqref{eqqq} becomes \begin{align} & \sum_{i=1}^2\sum_{j=1}^2 c_ic_j' \left[ 1- \int_{x=0}^\infty \frac{\gamma}{\alpha_{{\rm RS}_i}}\left( \frac{x}{\alpha_{{\rm RS}_i}}\right)^{\gamma-1}e^{-(\frac{x}{\alpha_{ij}})^\gamma} dx \right] \nonumber \\ &=\ \sum_{i=1}^2\sum_{j=1}^2 c_ic_j' \left[ 1- \frac{\alpha_{ij}^\gamma}{\alpha_{{\rm RS}_i}^\gamma} \int_{x=0}^\infty \frac{\gamma}{\alpha_{ij}} \left( \frac{x}{\alpha_{ij}} \right)^{\gamma-1}e^{-(\frac{x}{\alpha_{ij}})^\gamma} dx \right] \nonumber \\ &=\ \sum_{i=1}^2 \sum_{j=1}^2 c_ic_j' \left[1-\frac{\alpha_{ij}^\gamma}{\alpha_{{\rm RS}_i}^\gamma} \right] \nonumber \\ &=\ \sum_{i=1}^2\sum_{j=1}^2 c_i c_j'\left[1-\frac{1}{\alpha_{{\rm RS}_i}^\gamma}\ast \frac{\alpha_{{\rm RS}_i}^\gamma\alpha_{{\rm RA}_j}^\gamma}{\alpha_{{\rm RS}_i}^\gamma+\alpha_{{\rm RA}_j}^\gamma}\right] \nonumber \\ &=\ \sum_{i=1}^2\sum_{j=1}^2 c_i c_j'\left[\frac{\alpha_{{\rm RS}_i}^\gamma}{\alpha_{{\rm RS}_i}^\gamma+\alpha_{{\rm RA}_j}^\gamma}\right] \nonumber \\ &=\ c_1c_1' \frac{\alpha_{{\rm RS}_1}^\gamma}{\alpha_{{\rm RS}_1}^\gamma+\alpha_{{\rm RA}_1}^\gamma} +c_1c_2' \frac{\alpha_{{\rm RS}_1}^\gamma}{\alpha_{{\rm RS}_1}^\gamma+\alpha_{{\rm RA}_2}^\gamma} \nonumber \\ &\indent +c_2c_1' \frac{\alpha_{{\rm RS}_2}^\gamma}{\alpha_{{\rm RS}_2}^\gamma +\alpha_{{\rm RA}_1}^\gamma} +c_2c_2' \frac{\alpha_{{\rm RS}_2}^\gamma}{\alpha_{{\rm RS}_2}^\gamma+\alpha_{{\rm RA}_2}^\gamma}, \end{align} completing the proof of Theorem \ref{combweib}.\end{proof} \section{Curve Fitting} \subsection{Theory} We now turn to finding the values of the parameters leading to the best fit. We require $\beta = -1/2$ (for binning purposes), but otherwise the parameters ($\alpha_{{\rm RS}_1}, \alpha_{{\rm RS}_2}$, $\alpha_{{\rm RA}_1}$, $\alpha_{{\rm RA}_2}$, $\gamma$, $c_1$, $c_2$, $c_1'$, $c_2'$) are free.\footnote{Subject to, of course, $0 \le c_i, c_j' \le 1$ and $c_1+c_1=c_1'+c_2'=1$.} Our first approach was to use the Method of Moments, where we compute the number of moments equal to the number of parameters. Unfortunately the resulting equations were too involved to permit simple solutions for them in terms of the observed data; for completeness they are given in Appendix \ref{sec:AppMoment} (or see \cite{Luo}). We thus turned to the Method of Least Squares (though one could also do an analysis through the Method of Maximum Likelihood). We looked at the 30 teams of the entire league from the 2004 to 2012 season. We display results from the 2011, but the results from any other season are similar readily available (see \cite{Luo}). We implemented the Method of Least Squares using the bins in \eqref{bins}, which involved minimizing the sum of squares of the error of the runs scored data plus the sum of squares of the error of the runs allowed data. There were seven free parameters: $\alpha_{{\rm RS}_1}$, $\alpha_{{\rm RS}_2}$, $\alpha_{{\rm RA}_1}$, $\alpha_{{\rm RA}_2}$, $\gamma$, $c_1$, and $c_1'$. Letting Bin$(k)$ be the $k$\textsuperscript{th} bin of $\eqref{bins}$, ${\rm RS_{\rm obs}}(k)$ and ${\rm RA_{\rm obs}}(k)$ represent the observed number of games with number of runs scored and allowed in Bin$(k)$, and $A(\alpha_1,\alpha_2,\beta,\gamma,c_1,k)$ denote the area under the linear combination of two Weibulls with parameters $(\alpha_1,\alpha_2,\beta,\gamma,c_1)$ in Bin$(k)$, then for each team we found the values of $(\alpha_{{\rm RS}_1},\alpha_{{\rm RS}_2},\alpha_{{\rm RA}_1},\alpha_{{\rm RA}_2}, \gamma, c_1, c_1')$ that minimized \begin{align} & \sum_{k=1}^{\textnormal{Num. Bins}} ({\rm RS_{\rm obs}}(k)-\#\textnormal{Games}\ast A(\alpha_{{\rm RS}_1},\alpha_{{\rm RS}_2},-.5,\gamma,c_1,k))^2 \nonumber \\ &\indent +\sum_{k=1}^{\textnormal{Num. Bins}} ({\rm RA_{\rm obs}}(k)-\#\textnormal{Games}\ast A(\alpha_{{\rm RA}_1},\alpha_{{\rm RA}_2},-.5,\gamma,c_1',k))^2 . \end{align} \subsection{Results} For each team, we found the best fit linear combination of Weibulls. In Figure \ref{leastsquarestable}, we compared the predicted wins, losses, and won-loss percentage with the observed ones. \begin{figure}[h!] \includegraphics[scale=0.5]{leastsquarestable-eps-converted-to} \caption{Results for the 2011 season using Method of Least Squares. \label{leastsquarestable}} \end{figure} The code used is available in \cite{Luo}. Using the Method of Least Squares, the mean $\gamma$ over all 30 teams is 1.83 with a standard deviation of 0.18 (the median is 1.79). We can see that the exponent 1.83, considered as the best exponent, is clearly within the region of one standard deviation from the mean $\gamma$. Considering the absolute value of the difference between observed and predicted wins, we have a mean of 2.89 with a standard deviation of 2.34 (median is 2.68). Without considering the absolute value, the mean is 0.104 with a standard deviation of 3.75 (and a median of 0.39). We only concern ourselves with the absolute value of the difference, as this really tells how accurate our predicted values are. These values are significant improvements on those obtained when using a single Weibull distribution to predict runs (which essentially reproduces James' original formula, though with a slightly different exponent), which produces a mean number of games off of 4.43 with standard deviation 3.23 (and median 3.54) in the absolute value case. We display the results over seasons from 2004 to 2012 in Figure \ref{sidebyside}. It is apparent that the linear combination of Weibulls better estimates teams' win/loss percentage; in fact, it is over one game better at estimating than the single Weibull! The mean number of games off for a single Weibull from 2004 to 2012 was 4.22 (with a standard deviation of 3.03), while that of the linear combination of Weibulls was 3.11 (with a standard deviation of 2.33). In addition, there is less standard deviation in the estimates. Thus, it appears that the linear combination of Weibulls provides a much tighter, better estimate than the single Weibull does. \begin{figure}[h!] \includegraphics[scale=0.4]{sidebyside-eps-converted-to} \caption{Mean number of games off (with standard deviation) for single Weibull and linear combination of Weibulls from 2004-2012. \label{sidebyside}} \end{figure} To further demonstrate how accurate the quality of the fit is, we compare the best fit linear combination of Weibulls of runs scored and allowed with those observed of the 2011 Seattle Mariners in Figure \ref{mariners}; we can see that the fit is visually very good. Of course, the fit \emph{cannot} be worse, as we can always set $c_1 = 0=c_1'$; however, we can see the linear combination of Weibulls does a better job tracking the shape of the runs scored. \begin{figure} \centering \begin{subfigure}{.53\textwidth} \centering \includegraphics[width=.8\linewidth]{singleseattle-eps-converted-to} \caption{Single Weibull mapping runs scored and \\ allowed.} \label{fig:sub1} \end{subfigure}% \begin{subfigure}{.53\textwidth} \centering \includegraphics[width=.8\linewidth]{doubleseattle-eps-converted-to} \caption{Linear Combination of Weibulls mapping runs \\ scored and allowed.} \label{fig:sub2} \end{subfigure} \caption{Comparison of best fit linear combination of Weibulls versus single Weibull for runs scored (top) and allowed (bottom) for the 2011 Seattle Mariners against the observed distribution of scores.} \label{mariners} \end{figure} We then performed an independent two-sample t-test with unequal variances in $R$ using the t.test command to see if the difference between the games off determined by the single Weibull and those by linear combinations of Weibulls is statistically significant in Figure \ref{test}. With a $p$-value less than 0.01 and a 95\% confidence interval that does not contain 0, we can see that the difference is in fact statistically significant. \begin{figure}[h!] \includegraphics[scale=0.7]{ttest-eps-converted-to} \caption{t-test to determine whether the difference between the games off determined by the single Weibull and those by linear combinations of Weibulls is statistically significant. \label{test}} \end{figure} In addition, we compared the mean number of games off of \url{baseball-reference.com}'s Pythagorean Win-Loss statistic (pythWL) and those of the linear combination of Weibulls from 1979 to 2013. Originally, we used ESPN's ExWL statistic\footnote{See the bottom of the page \url{http://espn.go.com/mlb/stats/rpi/_/year/2011}.} which used an exponent of 2; however, ESPN only went down to the year 2002, and it has been shown that using the exponent 1.83 is more accurate than using ESPN's exponent of 2. Using \url{baseball-reference.com}'s Pythagorean Win-Loss statistic (pythWL) to obtain more data (\url{baseball-reference.com} allowed us to go all the way down to 1979, rather than just 2002, which ESPN gives), the pythWL statistic\footnote{At \url{http://www.sports-reference.com/blog/baseball-reference-faqs/} see the section ``What is Pythagorean Winning Percentage?".} is calculated as \begin{center} $({\text{runs scored}^{1.83}})/({\text{runs scored}^{1.83}+\text{runs allowed}^{1.83}})$. \end{center} We display the results of our comparisons in Figure \ref{espnplot}. The mean number of games off for the pythWL statistic was 3.09 with a standard deviation of 2.26, numbers only slightly worse than those of the linear combination of Weibulls (mean of 3.03 with standard deviation of 2.21). So, we can see that the linear combination of Weibulls is doing, on average, about .06 of a game better than the pythWL statistic. We performed an independent two-sample t-test with unequal variances in $R$ using the t.test command to see if the difference between the games off determined by the pythWL statistic and the linear combinations of Weibulls is statistically significant in Figure \ref{ttest2}. With a very large $p$-value, we fail to reject the null hypothesis, suggesting that the difference in mean number of games off is not in fact statistically significant. We also display a plot (Figure \ref{difference}) that models the difference in the number of games between the pythWL statistic and the linear combination of Weibulls; it seems to be a constant positive value for the most part, suggesting that the linear combination of Weibulls is doing slightly better than the pythWL statistic. \begin{figure}[h!] \includegraphics[scale=0.46]{espnplot-eps-converted-to} \caption{Mean number of games off (with standard deviation) for \url{baseball-reference.com}'s pythWL statistic and linear combination of Weibulls from 1979-2013. \label{espnplot}} \end{figure} Looking at Figure \ref{difference} more closely, we can see that there are parts/eras of the graph in which the pythWL statistic does better, and parts where the linear combination of Weibulls does better. In the era from 1979-1989, the pythWL statistic is more accurate, beating the linear combination of Weibulls in 7 out of the 11 years. However, from 1990 to 2013, the linear combination of Weibulls wins in 15 out of the 24 years, and does so by around 0.3 games in those years. Furthermore, when the pythWL statistic does beat the linear combination of Weibulls in the years from 1990 to 2013, it does so by around 0.25 games, including the point at 2004, which seems very out of the ordinary; without this point, the pythWL statistic wins by about .2 games in the years between 1990 and 2013 that it does beat the linear combination of Weibulls. Thus, in more recent years, it may make more sense to use the linear combination of Weibulls. In addition, with respect to the standard deviation of number of games off of the pythWL statistic (2.26) and the linear combination of Weibulls (2.21), we can see that the linear combination of Weibulls provides on average a tighter fit, i.e., there is less fluctuation in the mean number of games off for each team in each year (from 1990 to 2013, the pythWL statistic standard deviation in games off is 2.34 while that of the linear combination of Weibulls is 2.22, so we again see that the linear combination does noticeably better in recent years). It is important to note that the pythWL statistic just takes the functional form of the Pythagorean Win/Loss Formula with an exponent ($\gamma$) of 1.83, while we give theoretical justification for our formula. \begin{figure}[h!] \includegraphics[scale=0.6]{ttest2-eps-converted-to} \caption{t-test to determine whether the difference between the games off determined by ESPN ExWL and those by linear combinations of Weibulls is statistically significant. \label{ttest2}} \end{figure} \begin{figure}[h!] \includegraphics[scale=0.4]{difference-eps-converted-to} \caption{Difference in mean number of games off for \url{baseball-reference.com}'s pythWL statistic and linear combination of Weibulls from 1979-2013. \label{difference}} \end{figure} We also performed $\chi^2$ tests to determine the goodness of fit to see how well the linear combination of Weibulls maps the observed data, and whether runs scored and allowed are independent. We used the bins as in \eqref{bins} and test statistic \begin{align} &\sum_{k=1}^{\textnormal{\# Bins}} \frac{({\rm RS_{\rm obs}}(k)-\# \textnormal{Games} \ast A(\alpha_{{\rm RS}_1},\alpha_{{\rm RS}_2}, -.5,\gamma,c_1,k))^2}{\# \textnormal{Games} \ast A(\alpha_{{\rm RS}_1},\alpha_{{\rm RS}_2},-.5,\gamma,c_1,k)} \nonumber \\ &\indent + \sum_{k=1}^{\textnormal{\# Bins}} \frac{({\rm RA_{\rm obs}}(k)-\# \textnormal{Games} \ast A(\alpha_{{\rm RA}_1},\alpha_{{\rm RA}_2}, -.5,\gamma,c_1',k))^2}{\# \textnormal{Games} \ast A(\alpha_{{\rm RA}_1},\alpha_{RS_A},-.5,\gamma,c_1',k)} \end{align} for the goodness of fit tests, with $2\ast (\# \textnormal{Bins} -1)-1-7 = 16$ degrees of freedom, the factor of 7 coming from estimating 7 parameters, namely $\alpha_{{\rm RS}_1}$, $\alpha_{{\rm RS}_2}$, $\alpha_{{\rm RA}_1}$, $\alpha_{{\rm RA}_2}$, $\gamma$, $c_1$, and $c_1'$. We did not estimate $\beta$, as we took it to be -.5. Having 16 degrees of freedom gives critical threshold values of 26.3 (at the 95\% level) and 32.0 (at the 99\% level). However, since there are multiple comparisons being done (namely 30 for the different teams), we use a Bonferroni adjustment and obtain critical thresholds of 37.7 (95\%) and 42.5 (99\%). From the first column of Figure \ref{independence}, all the teams fall within the unadjusted 99\% threshold, with the exception of the Texas Rangers (just barely!), who easily fall into the Bonferroni adjusted 95\% threshold. Therefore, the observed data closely follows a linear combination of Weibulls with the proper estimated parameters. Since the test for independence of runs scored and allowed requires that the row and column of the contingency table have at least one non-zero entry, the bins used to bin the runs score and allowed were \be [0,1)\ \cup\ [1,2) \ \cup\ \cdots\ \cup\ [9,10)\ \cup\ [11,\infty). \ee We use integer endpoints because we are using the observed runs from games. We have a 12 by 12 contingency table with zeroes along the diagonal, since runs scored and allowed can never be equal. This leads to an incomplete 12 by 12 contingency table with $(12-1)^2-12=109$ degrees of freedom; constructing a test requires the use of structural zeroes. The theory behind tests using structural zeroes can be seen in \cite{Mil} or Appendix 9.2 of \cite{MCGLP}. We observe that 109 degrees of freedom give critical threshold values of 134.37 (at the 95\% level) and 146.26 (at the 99\% level). Again, since we are doing multiple comparisons, we use a Bonferroni adjustment, obtaining critical thresholds of 157.68 (95\%) and 166.45 (99\%). From the second column of Figure \ref {independence}, all the teams fall within the 99\% threshold, with the exception of the Los Angeles Angels (just barely!), who easily fall into the Bonferroni adjusted 95\% threshold. Thus, runs scored and allowed are acting as though they are statistically independent. \begin{figure}[h!] \includegraphics[scale=0.51]{independence-eps-converted-to} \caption{$\chi^2$ test results of the 2011 season from least squares of goodness of fit and independence of runs score and allowed. \label{independence}} \end{figure} A more in depth discussion of the justification behind the tests can be found in \cite{Mil}. \section{Future Work and Conclusions} While a one game improvement in prediction is very promising, as our formula requires us to fit the runs scored and allowed distributions we explored simplifications. We tried to simplify the formula, even giving up some accuracy, in order to devise a formula that could be easily implemented using just a team's runs scored and allowed (and the variance of each of these) in order to determine the team's winning percentage. Unfortunately, the weight parameters $c_1$ and $c_1'$ plays too much of a factor; in 2011, the mean of the parameter $c_1$ is 0.21 with a standard deviation of 0.39 (and a median of 0.21). With such large fluctuations in the weight parameters from team to team, the task of finding a simpler formula was almost impossible, as creating a uniform formula that every team could use was not feasible when two of the key parameters were so volatile. Taking this into account, we tried fixing the $\gamma$, $c_1$, and $c_1'$ parameters, allowing for us to just solve a quartic involving the first and second moments to find the other parameters ($\alpha_{{\rm RS}_1}$, $\alpha_{{\rm RS}_2}$, $\alpha_{{\rm RA}_1}$, and $\alpha_{{\rm RA}_2}$). However, while we were able to solve for the other parameters, plugging these values of the parameters gave us a significantly worse prediction of teams' win-loss percentage compared to linear combination of Weibulls and \url{baseball-reference.com}'s Pythagorean Win-Loss statistic. One of the great attractions of James' Pythagorean formula is its ease of use; we hope to return to other simplifications and approximations in a later paper. Our hope is to find a linearization or approximation of our main result, similar to how Dayaratna and Miller \cite{DaMil1} showed the linear predictor of Jones and Tappin \cite{JT} follows from a linearization of the Pythagorean formula. To summarize our results, using a linear combination of Weibulls rather than a single Weibull increases the prediction accuracy of a team's W/L percentage. More specifically, we saw that the single Weibull's predictions for a team's wins were on average 4.22 games off (with a standard deviation of 3.03), while the linear combination of Weibull's predictions for a team's wins were from 2004-2012 were on average 3.11 games off (with a standard deviation of 2.33), producing about a $25\%$ increase in prediction accuracy. We also performed $\chi^2$ goodness of fit tests for the linear combination of Weibulls and tested the statistical independence of runs scored and allowed (a necessary assumption), and see that in fact the linear combination of Weibulls with properly estimated parameters obtained from least squares analysis closely maps the observed runs and that runs scored and allowed are in fact statistically independent. In addition, when compared against \url{baseball-reference.com}'s Pythagorean Win-Loss statistic, the linear combination of Weibulls does .06 of a game better in the years from 1979 to 2013, but this improvement cannot be considered statistically significant. However, in more recent years, it is worth noting that it does appear that the linear combination of Weibulls is doing better than \url{baseball-reference.com}'s Pythagorean Win-Loss statistic.
{ "attr-fineweb-edu": 2.544922, "attr-cc_en_topic": 0, "domain": "arxiv" }
BkiUa0A5jDKDyDAe7hBk
\section{Introduction} {\em The Economist} \cite{the.economist} is a well known weekly newspaper of economics and economics related subjects. Frequently, it publishes interesting quantitative information in paper and on the web, especially in the {\em Daily chart} section. In occasion of the 2014 World Cup it collected the amount of goals scored during all the World Cup competitions, from 1930 to 2014, minute by minute. All the data is presented as interactive plot on the web \cite{gooaall}. We give in Fig.\ref{fig:economist-reduction} a reduction of {\em The Economist} original graphic. In our plot {\em additional time} (called {\em allowance for time lost} in FIFA documentation \cite{fifa7} ) and {\em extra time} are not represented. No distinction is made between {\em goals}, {\em penalties} and {\em own goals}. In the original graphic it possible to see in which match each goal was scored and is possible to filter matches according to some categories. The reader interested in these facets should definitely refer to the web. All the data used for this paper and the analysis performed in {\em R} \cite{r} software are available at {\em mingotti.uc3m.es/floor1/paper-goals}. \begin{figure}[ht] \centering \includegraphics[scale=0.35]{my-similar-economist-bn.png} \caption{Goals per minute. Reduction of {\em The Economist} original plot \cite{gooaall} displaying all goals scored in all World Cup matches until July 2014, minute by minute. {\em Extra time} and {\em additional time} have been removed. } \label{fig:economist-reduction} \end{figure} Just under the original plot the authors summarize the graphical information with the following sentence: {\em "... One can expect a rush of goals in the last ten minutes of normal time, but the 18th and 75th minutes have proved fertile"}. We consider such conclusions hurried and possibly lacking in significance. In this paper we will propose a simple and reasonable model that describes the minute by minute goal scoring frequency. After that, we reconsider how much of the original analysis can be confidently accepted. The goal scoring frequency has been already analyzed in literature for the {\em National Scottish League 1991-1992} in \cite{reilly1996}, the {\em Australian Soccer League} in \cite{abt2002} and the {\em European Champions League} in \cite{michailidis2004}. The 1986 World Cup in \cite{jinshan1986}, 1990 World Cup in \cite{jinshan1993} and jointly 1998, 2002 and 2006 {\em World Cups} in \cite{armatas2007}. As far as we could establish, a comprehensive study about goal scoring frequency in all {\em World Cups} till present has never been approached. Article \cite{armatas2007} is the most similar to our work. It focuses on World Cup scoring frequency and it considers more than a single tournament. The authors concluded that dividing the match in two 45-min parts, most goals were scored in the second half. Dividing the match in 15-min parts, most of the goals were scored in the last period (76-90) and there was a trend towards more goals scored as time progressed. In this paper we will see all these conclusion can be confirmed and refined using the richer dataset of all World Cups scores. \section{Analysis} From the {\em The Economist} original data set we consider only goals scored in {\em regular time}, that is in minutes between $[1,45]$ and $[46,90]$. We ignore {\em additional time} because its occurrence is decided match per match by the referee \cite{fifa7} and {\em extra time} because it depends on the scoring situation at the end of regular time. Their goal scoring rate can not be directly compared with {\em regular time}. \begin{figure} \centering \includegraphics[scale=0.4]{my-goals-per-minute-1-bn.png} \caption{Smoothed goals per minute. Goals per minute with a {\em loess} smoother (thick line). Numbers near each circle represent minutes. For example, in the first half, the highest number of goals was scored on minute 18th, the least on minute 41th. Dashed line represent half match goal averages. The numerical value of each average is just under each dashed line, on its leftmost side. } \label{fig:loess} \end{figure} We redraw the dataset in Fig.\ref{fig:economist-reduction} with a {\em loess} smoother. In Fig.\ref{fig:loess} the thick continuous line is the smooth {\em loess} non parametric fit for the relation between goals and minute. The dashed lines represent the average number of goals in the first and second half respectively. Just under the dashed lines, on their left side, do lay the numerical values of each mean. The parameters used for the {\em loess} are the ones default with {\rm R} \cite{ venables2002, faraway2005, r} From Fig.\ref{fig:loess} we can see the average number of goals in the second half (28.16) is larger than in the first half (22.04). The difference between these averages $D := \bar{x}_2 - \bar{x}_1$ is statistically greater than zero. In this work {\em significance level} is always $\alpha = 0.05$. Bootstrapping 10000 times we get $D$ is normally distributed as $N(\mu = 6.10, \sigma = 1.32)$. Observing the {\em loess} smooth fit in Fig.\ref{fig:loess} we set the following hypothesis up. In the first half of the game the scoring rate is constant, in the second it grows linearly with time. We also notice that the first minutes could have an inferior scoring rate respect to the other minutes in the first half. That is reasonable, the game starts always in the middle of the field, far from a comfortable scoring position. On the first half, our probabilistic model is that goals are distributed according to a $\mbox{Multinomial}(n,(p_{1}, \dots, p_{45}))$ where $p_{i} = p_{j}$ for all $i,j$ in $1 \dots 45$. Performing a $\chi^2$ {\em goodness of fit} test we get $pval=0.0061$, homogeneity would be rejected. But, if we remove just minute 18th, we get $pval=0.1296$ and homogeneity would be far from rejected. We don't want to be too much lighthearted in dropping observations to satisfy our model so we reshape the dataset and repeat the test. Instead of considering a match as composed of a sequence of single minutes, we consider it as a sequence of 2, 3 and 5 minutes blocks\footnote{In order to have an even splitting of the dataset, when dividing a match in blocks of two minutes we ignore one observation, minute 45th.}. For each minutes block we set the associate number of goals as the average number goals scored in each minute making the block. Performing again the $\chi^2$ {\em goodness of fit} test on the {\em first half}, now seen as sequence of 2,3,5 minutes blocks, we get {\em pvalue} respectively $(0.23, 0.49, 0.94)$. With a minimal smoothing reshape we see homogeneity test in far from rejected, without dropping any observation. In conclusion, we are confident that the {\em first half} can be considered homogeneous and minute 18th an outlier. In the {\em second half} our working hypothesis is a linear relation between minutes elapsed and goals scored. We begin fitting a simple linear regression model with {\em minute} as independent variable. By the {\em t-test}, {\em minute} is a significant variable, $pval=0.00368$. Residuals can be considered normally distributed according to {\em Kolmogorov-Smirnov} and {\em Shaipiro-Wilk} normality test, which give respectively {\em pvalues} $(0.73, 0.60)$. $R^2 = 0.18$ is little and not much interesting for us. The fitted model appears in Eq.\ref{eq:model-1} \begin{equation} \label{eq:model-1} goals = 13.346 + 0.0708 * minute \end{equation} From Eq.\ref{eq:model-1} we can get the expected number of goals in minute $m \in (46,\dots,90)$. We will call these values $\hat{\mathbf{g}}_{2nd} := (\hat{g}_{46}, \dots, \hat{g}_{90})$. Normalizing $\hat{\mathbf{g}}$ we get $ {\hat{\mathbf{p}}}_{2nd} := \hat{\mathbf{g}}_{2nd} * (\sum_{i=46}^{90}{\hat{g_i}})^{-1} $. And now we can apply again the $\chi^2$ to see if the model for the {\em second half} is appropriate. Our null hypothesis is that goals realized in the second half ($\mathbf{g}_{2nd}$) are multinomially distributed with probability vector $\hat{\mathbf{p}}_{2nd}$. The $\chi^2$ test returns a pvalue of $0.09$ so the null hypothesis can not be rejected in the second half game. Let us join the half time models and check hypotheses against the full length match data. According to our models, the expected goal sequence is $(\tilde{g}_{1}, \dots, \tilde{g}_{90})$. Where $\tilde{g}_{i}$ for $i \in [1,45]$ is the average number of goals scored in the first half and $\tilde{g}_{i}$ for $i \in [46, 90]$ is set by Eq.\ref{eq:model-1}. Normalizing $\mathbf{\tilde{g}}$ we get a probabilities vector $\mathbf{\tilde{p}}$, a graphical representation of it appears in Fig.\ref{fig:model-scoring}. Using again a $\chi^2$, we test if goals scored in all Word Cups matches minute per minute can be a realization of a {\em multinomial distribution} with probabilities vector $\mathbf{\tilde{p}}$. We get a $pvalue=0.0034$. Removing observation 18th we get $pval = 0.0524$ and we can not reject. Removing also the first three minutes improves the fit giving $pval=0.1245$. To avoid removing observation we reshape the dataset in blocks of 2,3,5 minutes as done previously. The resulting {\em pvalues} are respectively $(0.86, 0.95, 0.99)$ so, the null hypothesis is never rejected. Fig.\ref{fig:model-per-blocks} illustrates goals smoothing on reshaping minute variable. \begin{figure}[ht] \centering \includegraphics[scale=0.4]{modeled-score-probability.png} \caption{The simple scoring model. If there is a goal in a match of the World Cup it will happen at some minute with the probability here depicted. In the first half game the probability is constant, in the second it grows linearly with time.} \label{fig:model-scoring} \end{figure} \begin{figure}[ht] \centering \includegraphics[scale=0.4]{model-per-blocks.png} \caption{This figure represents how reshaping variable {\em minute} in blocks of 2,3,5 units affects the goals distribution.} \label{fig:model-per-blocks} \end{figure} \begin{figure}[ht] \centering \includegraphics[scale=0.35]{simulated-max-distrib.png} \caption{Simulated maxima distribution for the first and second half. These two histograms tell us that 42 goals, as a maximum in the first half, is an unexpected result. On the contrary, 44 goals as a maximum for on the second half is not surprising. } \label{fig:simulated-max} \end{figure} An analytic description of the probability distribution can be easily obtained once we move to a time continuous description. Rescaling the match to a $[0,1]$ duration interval, if there will be a goal, the probability it will happen before time $x$ will be given by $F(x)$. $F$ is defined in Eq.\ref{eq:cdf}. \begin{equation} \label{eq:cdf} F(x) = \begin{cases} 0.878264\ x, & \quad 0 \le x < 0.5 \\ 0.381889\ x^2 + 0.548938\ x + 0.069173, & \quad 0.5 \le x < 1 \\ 1 & \quad x \ge 1 \\ \end{cases} \end{equation} Minutes 18th and 75th were called in the original document ``fertile'' minutes. Using our model we have a different interpretation for each one of them. In Fig.\ref{fig:simulated-max} we see the simulated distribution of the maximum\footnote{10000 maxima were simulated.} in the first and second half according to the model. For the first half we see that an observation larger than the 18th (42 goals) is improbable, its probability is approximately 0.0015. On the contrary, one observation larger than the 75th (44 goals) in the second half it is not improbable at all ($p \approx 0.217 $). \section{Conclusions} There is enough evidence to state that the goal scoring distribution in the World Cup matches can be considered constant in the first half game and growing linearly in the second half. Dividing the game in blocks of 2, 3 or 5 minutes confirms the model is valid and makes is more robust to extreme values. An analytic cumulative distribution function for goal scoring is presented in Eq.\ref{eq:cdf}. Previous finding in article \cite{armatas2007} are confirmed, there are more goals in the second half respect to the first one. According to our model, if there is a goal in a match, the probability that it will be in the first half is $44\%$. The last part of the game is most probable for a goal, and there is a trend toward more goals as time passes, but only in the second half. {\em The Economist} conclusions about its dataset are only partially acceptable. Indeed, it is true that in on the last part of the game there are expected more goals. It is not true that minute 75th is an especially ``fertile'' minute, a maximum of 44 goals can be easily realized by the variability of the second half game. Minute 18th is more interesting, according to our model the probability of observing a maximum of 42 goals in the first half is about $2 \cdot 10^{-3}$. We guess it has to be considered an outlier and removed because it is an isolated burst, it has no connections with its neighboring values. Performing a mild smooth, as grouping each match as a sequence of 2 minutes removes all of its importance. \bibliographystyle{plain}
{ "attr-fineweb-edu": 2.210938, "attr-cc_en_topic": 0, "domain": "arxiv" }
BkiUbmg4uzqh_DvIk2WX
\section{Introduction} \label{sec:introduction} Consider we want to plan a trip to a distant city using a dialogue agent. The agent must make choices at each leg, e.g., whether to fly or to drive, whether to book a hotel. Each of these steps in turn involves making a sequence of decisions all the way down to \emph{lower}-level actions. For example, to book a hotel involves identifying the location, specifying the check-in date and time, and negotiating the price etc. The above process of the agent has a natural hierarchy: a top-level process selects which subgoal to complete, and a low-level process chooses primitive actions to accomplish the selected subgoal. Within the reinforcement learning (RL) paradigm, such a hierarchical decision making process can be formulated in the \emph{options} framework \cite{sutton1999between}, where subgoals with their own reward functions are used to learn policies for achieving these subgoals. These learned policies are then used as temporally extended actions, or options, for solving the entire task. Based on the options framework, researchers have developed dialogue agents for complex tasks, such as travel planning, using hierarchical reinforcement learning (HRL)~\citep{cuayahuitl10evaluation}. Recently, \citet{peng2017composite} showed that the use of subgoals mitigates the reward sparsity and leads to more effective exploration for dialogue policy learning. However, these subgoals need to be human-defined which limits the applicability of the approach in practice because the domain knowledge required to properly define subgoals is often not available in many cases. In this paper, we propose a simple yet effective \emph{Subgoal Discovery Network} (SDN) that discovers useful subgoals automatically for an RL-based dialogue agent. The SDN takes as input a collection of successful conversations, and identifies ``hub'' states as subgoals. Intuitively, a hub state is a region in the agent's state space that the agent tends to visit frequently on successful paths to a goal but not on unsuccessful paths. Given the discovered subgoals, HRL can be applied to learn a hierarchical dialogue policy which consists of (1) a top-level policy that selects among subgoals, and (2) a low-level policy that chooses primitive actions to achieve selected subgoals. We present the first study of learning dialogue agents with automatically discovered subgoals. We demonstrate the effectiveness of our approach by building a composite task-completion dialogue agent for travel planning. Experiments with both simulated and real users show that an agent learned with discovered subgoals performs competitively against an agent learned using expert-defined subgoals, and significantly outperforms an agent learned without subgoals. We also find that the subgoals discovered by SDN are often human comprehensible. \section{Background} \label{sec:backgroun} A goal-oriented dialogue can be formulated as a Markov decision process, or MDP~\cite{levin00stochastic}, in which the agent interacts with its environment over a sequence of discrete steps. At each step $t\in\{0,1,\ldots\}$, the agent observes the current state $s_t$ of the conversation~\cite{henderson15machine,mrksic17neural,li2017end}, and chooses action $a_t$ according to a policy $\pi$. Here, the action may be a natural-language sentence or a speech act, among others. Then, the agent receives a numerical reward $r_t$ and switches to next state $s_{t+1}$. The process repeats until the dialogue terminates. The agent is to learn to choose \emph{optimal} actions $\{a_t\}_{t=1,2,\ldots}$ so as to maximize the total \emph{discounted} reward $r_0+\gamma r_1+\gamma^2 r_2 + \cdots $, where $\gamma \in [0,1]$ is a discount factor. This learning paradigm is known as reinforcement learning, or RL~\cite{Sutton98Reinforcement}. When facing a complex task, it is often more efficient to divide it into multiple simpler sub-tasks, solve them, and combine the partial solutions into a full solution for the original task. Such an approach may be formalized as hierarchical RL (HRL) in the options framework~\cite{sutton1999between}. An option can be understood as a subgoal, which consists of an initiation condition (when the subgoal can be triggered), an option policy to solve the subgoal, and a termination condition (when the subgoal is considered finished). When subgoals are given, there exist effective RL algorithms to learn a hierarchical policy. A major open challenge is the automatic discovery of subgoals from data, the main innovation of this work is covered in the next section. \section{Subgoal Discovery for HRL} Figure~\ref{fig:dialogue_flows} shows the overall workflow of our proposed method of using automatic subgoal discovery for HRL. First a dialogue session is divided into several segments. Then at the end of those segments (subgoals), we equip an intrinsic or extrinsic reward for the HRL algorithm to learn a hierarchical dialogue policy. Note that only the last segment has an extrinsic reward. The details of the segmentation algorithm and how to use subgoals for HRL are presented in Section~\ref{sec:subgoal_discovery} and Section~\ref{sec:HRL_SDN}. \begin{figure}[ht!] \centering \includegraphics[clip=true, trim=0.0cm 7.9cm 7.5cm 0.0cm, width=1\linewidth, scale=1.0]{./workflow.pdf} \vspace{-3mm} \caption{The workflow for HRL with subgoal discovery. In addition to the extrinsic reward at the end of the dialogue session, HRL also uses intrinsic rewards induced by the subgoals (or the ends of dialogue segments). Section~\ref{sec:policy_learning} details the reward design for HRL.} \label{fig:dialogue_flows} \end{figure} \subsection{Subgoal Discovery Network} \label{sec:subgoal_discovery} Assume that we have collected a set of successful state trajectories of a task, as shown in Figure~\ref{fig:paths}. We want to find subgoal states, such as the three red states $s_4$, $s_9$ and $s_{13}$, which form the ``hubs'' of these trajectories. These hub states indicate the subgoals, and thus divide a state trajectory into several segments, each for an option\footnote{There are many ways of creating a new option $\langle I,\pi,\beta \rangle$ for a discovered subgoal state. For example, when a subgoal state is identified at time step $t$, we add to $I$ the set of states visited by the agent from time $t-n$ to $t$, where $n$ is a pre-set parameter. $I$ is therefore the union of all such states over all the state trajectories. The termination condition $\beta$ is set to 1 when the subgoal is reached or when the agent is no longer in $I$, and to $0$ otherwise. In the deep RL setting where states are represented by continuous vectors, $\beta$ is a probability whose value is proportional to the vector distance e.g., between current state and subgoal state.}. \begin{figure}[ht!] \centering \includegraphics[clip=true, trim=0.2cm 7.0cm 1cm 5cm, width=1\linewidth, scale=1.0]{./paths_v2.pdf} \vspace{-5mm} \caption{Illustration of ``subgoals''. Assuming that there are three state trajectories $(s_0, s_1, s_4, s_6, s_9, s_{10}, s_{13})$, $(s_0, s_2, s_4, s_7, s_9, s_{11}, s_{13})$ and $(s_0, s_3, s_4, s_8, s_9, s_{12}, s_{13})$. Then red states $s_4$, $s_9$, $s_{13}$ could be good candidates for ``subgoals''. \label{fig:paths}} \vspace{-1mm} \end{figure} Thus, discovering subgoals by identifying hubs in state trajectories is equivalent to segmenting state trajectories into options. In this work, we formulate subgoal discovery as a state trajectory segmentation problem, and address it using the Subgoal Discovery Network (SDN), inspired by the sequence segmentation model~\cite{wang2017sequence}. \paragraph{The SDN architecture.} SDN repeats a two-stage process of generating a state trajectory segment, until a trajectory termination symbol is generated: first it uses an initial segment hidden state to start a new segment, or a trajectory termination symbol to terminate the trajectory, given all previous states; if the trajectory is not terminated, then keep generating the next state in this trajectory segment given previous states\nop{in this segment} until a segment termination symbol is generated. We illustrated this process in Figure~\ref{fig:rnn}. \begin{figure}[t] \centering \includegraphics[clip=true, trim=1cm 3.0cm 2cm 0cm, width=1\linewidth, scale=1.0]{./rnn.pdf} \vspace{-5mm} \caption{Illustration of SDN for state trajectory $(s_0,\ldots,s_5)$ with $s_2$, $s_4$ and $s_5$ as subgoals. Symbol \# is the termination. The top-level RNN (RNN1) models segments and the low-level RNN (RNN2) provides information about previous states from RNN1. The embedding matrix $M$ maps the outputs of RNN2 to low dimensional representations so as to be consistent with the input dimensionality of RNN1. Note that state $s_5$ is associated with two termination symbols \#; one is for the termination of the last segment and the other is for the termination of the entire trajectory.} \label{fig:rnn} \end{figure} We model the likelihood of each segment using an RNN, denoted as RNN1. During the training, at each time step, RNN1 predicts the next state with the current state as input, until it reaches the option termination symbol \#. Since different options are under different conditions, it is not plausible to apply a fixed initial input to each segment. Therefore, we use another RNN (RNN2) to encode all previous states to provide relevant information and we transform these information to low dimensional representations as the initial inputs for the RNN1 instances. This is based on the \emph{causality} assumption of the options framework~\cite{sutton1999between} --- the agent should be able to determine the next option given all previous information, and this should not depend on information related to any later state. The low dimensional representations are obtained via a global subgoal embedding matrix $M\in\mathbb R^{d\times D}$, where $d$ and $D$ are the dimensionality of RNN1's input layer and RNN2's output layer, respectively. Mathematically, if the output of RNN2 at time step $t$ is $o_t$, then from time $t$ the RNN1 instance has $M\cdot {\mathrm{softmax}}(o_t)$ as its initial input\footnote{${\mathrm{softmax}}( o_t)_i=\exp(o_{t,i})/\sum\limits_{i'=1}^D\exp(o_{t,i'})\in\mathbb R^D$ for $o_t=(o_{t,1},\ldots,o_{t,D})$. }. $D$ is the number of subgoals we aim to learn. Ideally, the vector ${\mathrm{softmax}}(o_t)$ in a well-trained SDN is close to an one-hot vector. Therefore, $M\cdot {\mathrm{softmax}}(o_t)$ should be close to one column in $M$ and we can view that $M$ provides at most $D$ different ``embedding vectors'' for RNN1 as inputs, \nop{This design makes most of the RNN1 initial inputs, which control the generative process of the state transition sequences outputted by the corresponding RNN1 instances, fall into at most $D$ small regions,}indicating at most $D$ different subgoals. Even in the case where ${\mathrm{softmax}}(o_t)$ is not close to any one-hot vector, choosing a small $D$ helps avoid overfitting. \paragraph{Segmentation likelihood.} Given the state trajectory $(s_0,\ldots,s_5)$, assuming that $s_2$, $s_4$ and $s_5$ are the discovered subgoal states, we model the conditional likelihood of a proposed segmentation $\sigma=((s_0,s_1,s_2),(s_2,s_3,s_4),(s_4,s_5))$ as $p(\sigma|s_0)=p((s_0,s_1,s_2)|s_0)\cdot p((s_2,s_3,s_4)|s_{0:2})\cdot p((s_4,s_5)|s_{0:4})$, where each probability term $p(\cdot|s_{0:i})$ is based on an RNN1 instance. And for the whole trajectory $(s_0,\ldots,s_5)$, its likelihood is the sum over all possible segmentations. Generally, for state trajectory $\bm{s}=(s_0,\ldots,s_T)$, we model its likelihood as follows\footnote{For notation convenience, we include $s_0$ into the observational sequence, though $s_0$ is always conditioned upon.}: \begin{equation} L_S(\bm s)=\sum\limits_{\sigma\subseteq\mathcal S(\bm s), \text{length}(\sigma)\le S}\prod\limits_{i=1}^{\text{length}(\sigma)}p(\sigma_i|\tau(\sigma_{1:i})), \label{eqn:likeli} \end{equation} where $\mathcal S(\bm s)$ is the set of all possible segmentations for the trajectory $\bm s$, $\sigma_i$ denotes the $i^\text{th}$ segment in the segmentation $\sigma$, and $\tau$ is the concatenation operator. $S$ is an upper limit on the maximal number of segments. This parameter is important for learning subgoals in our setting since we usually prefer a small number of subgoals. This is different from~\citet{wang2017sequence}, where a maximum segment length is enforced. We use maximum likelihood estimation with Eq.~\eqref{eqn:likeli} for training. However, the number of possible segmentations is exponential in $\mathcal S(\bm s)$ and the naive enumeration is intractable. Here, dynamic programming is employed to compute the likelihood in Eq.~\eqref{eqn:likeli} efficiently: for a trajectory $\bm s=(s_0,\ldots,s_T)$, if we denote the sub-trajectory $(s_i,\ldots,s_t)$ of $\bm s$ as $\bm s_{i:t}$, then its likelihood follows the below recursion: \begin{equation*} L_m(\bm s_{0:t})= \begin{cases} \sum\limits_{i=0}^{t-1}L_{m-1}(\bm s_{0:i})p(\bm s_{i:t}|\bm s_{0:i}),&m>0,\\ I[t=0],&m=0. \end{cases} \end{equation*} Here, $L_m(\bm s_{0:t})$ denotes the likelihood of sub-trajectory $\bm s_{0:t}$ with no more than $m$ segments and $I[\cdot]$ is an indicator function. $p(\bm s_{i:t}|\bm s_{0:i})$ is the likelihood segment $\bm s_{i:t}$ given the previous history, where RNN1 models the segment and RNN2 models the history as shown in Figure~\ref{fig:rnn}. With this recursion, we can compute the likelihood $L_S(\bm s)$ for the trajectory $\bm s=(s_0,\ldots,s_T)$ in $O(ST^2)$ time. \paragraph{Learning algorithm.} We denote $\theta^s$ as the model parameter including the parameters of the embedding matrix $M$, RNN1 and RNN2. We then parameterize the segment likelihood function as $p(\bm s_{i:t}|\bm s_{0:i})=p(\bm s_{i:t}|\bm s_{0:i};\theta^s)$, and the trajectory likelihood function as $L_m(\bm s_{0:t})=L_m(\bm s_{0:t};\theta^s)$. Given a set of $N$ state trajectories $(\bm s^{(1)}, \ldots, \bm s^{(N)})$, we optimize $\theta^s$ by minimizing the negative mean log-likelihood with $L_2$ regularization term $\frac12\lambda||\theta^s||^2$ where $\lambda>0$, using stochastic gradient descent: \begin{equation} \small \begin{aligned} \mathcal L_S(\theta^s,\lambda)=-\frac1N\sum\limits_{i=1}^N\log L_S(\bm s^{(i)},\theta^s)+\frac12\lambda||\theta^s||^2. \end{aligned} \label{eqn:obj} \end{equation} Algorithm~\ref{algo:subgoal_discovery} outlines the training procedure for SDN using stochastic gradient descent. \begin{algorithm}[htbp] \small \caption{Learning SDN} \begin{algorithmic}[1] \REQUIRE A set of state trajectories $(\bm s_1, \ldots \bm s_N)$, the number of segments limit $S$, initial learning rate $\eta>0$. \STATE Initialize the SDN parameter $\theta^s$. \WHILE {not converged} \STATE Compute the gradient $\nabla_{\theta^s}\mathcal L_S(\theta^s,\lambda)$ of the loss $\mathcal L_S(\theta^s,\lambda)$ as in Eq. \eqref{eqn:obj}. \STATE Update $\theta^s\leftarrow\theta^s-\eta\nabla_{\theta^s}\mathcal L_S(\theta^s,\lambda)$. \STATE Update the learning rate $\eta$. \ENDWHILE \end{algorithmic} \label{algo:subgoal_discovery} \end{algorithm} \input{section_hrl} \subsection{Hierarchical Policy Learning with SDN} \label{sec:HRL_SDN} We use a trained SDN in HRL as follows. The agent starts from the initial state $s_0$, keeps sampling the output from the distribution related to the top-level RNN (RNN1) until a termination symbol \# is generated, which indicates the agent reaches a subgoal. In this process, intrinsic rewards are generated as specified in the previous subsection. After \# is generated, the agent selects a new option, and repeats this process. This type of naive sampling may allow the option to terminate at some places with a low probability. To stabilize the HRL training, we introduce a threshold $p \in (0, 1)$, which directs the agent to terminate an option if and only if the probability of outputting \# is at least $p$. We found this modification leads to better behavior of the HRL agent than the naive sampling method, since it normally has a smaller variance. In the HRL training, the agent only uses the probability of outputting \# to decide subgoal termination. Algorithm~\ref{algo:hrl_training_subgoals} outlines the full procedure of one episode for hierarchical dialogue policies with a trained SDN in the composite task-completion dialogue system. \section{Experiments and Results} \label{sec:experiment} We evaluate the proposed model on a travel planning scenario for composite task-oriented dialogues~\cite{peng2017composite}. Over the exchange of a conversation, the agent gathers information about the user's intent before booking a trip. The environment then assesses a binary outcome (success or failure) at the end of the conversation, based on (1) whether a trip is booked, and (2) whether the trip satisfies the user's constraints. \begin{algorithm}[ht!] \small \caption{HRL episode with a trained SDN} \begin{algorithmic}[1] \REQUIRE A trained SDN $\mathcal M$, initial state $s_0$ of an episode, threshold $p$, the HRL agent $\mathcal A$. \STATE Initialize an RNN2 instance $R_2$ with parameters from $\mathcal M$ and $s_0$ as the initial input. \STATE Initialize an RNN1 instance $R_1$ with parameters from $\mathcal M$ and $M\cdot\text{softmax}(o^{\text{RNN2}}_0)$ as the initial input, where $M$ is the embedding matrix (from $\mathcal M$) and $o^{\text{RNN2}}_0$ is the initial output of $R_2$. \STATE Current state $s\leftarrow s_0$. \STATE Select an option $o$ using the agent $\mathcal A$. \WHILE {Not reached the final goal} \STATE Select an action $a$ according to $s$ and $o$ using the agent $\mathcal A$. Get the reward $r$ and the next state $s'$ from the environment. \STATE Place $s'$ to $R_2$, denote $o^{\text{RNN2}}_t$ as $R_2$'s latest output and take $M\cdot\text{softmax}(o^{\text{RNN2}}_t)$ as the $R_1$'s new input. Let $p_{s'}$ be the probability of outputting the termination symbol \#. \IF {$p_{s'}\ge p$} \STATE Select a new option $o$ using the agent $\mathcal A$. \STATE Re-initialize $R_1$ using the latest output from $R_2$ and the embedding matrix $M$. \ENDIF \ENDWHILE \end{algorithmic} \label{algo:hrl_training_subgoals} \end{algorithm} \paragraph{Dataset.} The raw dataset in our experiments is from a publicly available multi-domain dialogue corpus~\cite{elframes}. Following~\citet{peng2017composite}, a few changes were made to introduce dependencies among subtasks. For example, the hotel check-in date should be the same with the departure flight arrival date. The data was mainly used to create simulated users, and to build the knowledge bases for the subtasks of booking flights and reserving hotels. \paragraph{User Simulator.} In order to learn good policies, RL algorithms typically need an environment to interact with. In the dialogue research community, it is common to use simulated users for this purpose~\cite{schatzmann2007agenda,li2017end,liu2017iterative}. In this work, we adapted a publicly available user simulator~\cite{li2016user} to the composite task-completion dialogue setting with the dataset described above. During training, the simulator provides the agent with an (extrinsic) reward signal at the end of the dialogue. A dialogue is considered to be successful only when a travel plan is booked successfully, and the information provided by the agent satisfies user's constraints. \paragraph{Baseline Agents.} We benchmarked the proposed agent (referred to as the \textit{m-HRL Agent}) against three baseline agents: \begin{itemize}[noitemsep,leftmargin=*,topsep=0pt] \item A \textit{Rule Agent} uses a sophisticated, hand-crafted dialogue policy, which requests and informs a hand-picked subset of necessary slots, and then confirms with the user about the reserved trip before booking the flight and hotel. \item A \textit{flat RL Agent} is trained with a standard deep reinforcement learning method, DQN~\cite{mnih2015human}, which learns a flat dialogue policy using extrinsic rewards only. \item A \textit{h-HRL Agent} is trained with hierarchical deep reinforcement learning (HDQN), which learns a hierarchical dialogue policy based on human-defined subgoals~\cite{peng2017composite}. \end{itemize} \paragraph{Collecting State Trajectories.} Recall that our subgoal discovery approach takes as input a set of state trajectories which lead to successful outcomes. In practice, one can collect a large set of successful state trajectories, either by asking human experts to demonstrate (e.g., in a call center), or by rolling out a reasonably good policy (e.g., a policy designed by human experts). In this paper, we obtain dialogue state trajectories from a rule-based agent which is handcrafted by a domain expert, the performance of this rule-based agent can achieve success rate of 32.2\% as shown in Figure~\ref{fig:sim_results} and Table~\ref{tab:results}. We only collect the successful dialogue sessions from the roll-outs of the rule-based agent, and try to learn the subgoals from these dialogue state trajectories. \paragraph{Experiment Settings.} To train SDN, we use RMSProp~\cite{tieleman2012lecture} to optimize the model parameters. For both RNN1 and RNN2, we use LSTM~\cite{hochreiter1997long} as hidden units and set the hidden size to $50$. We set embedding matrix $M$ with $D=4$ columns. As we discussed in Section~\ref{sec:subgoal_discovery}, $D$ captures the maximum number of subgoals that the model is expected to learn. Again, to avoid SDN from learning many unnecessary subgoals, we only allow segmentation with at most $S=4$ segments during subgoal training. The values for $D$ and $S$ are usually set to be a little bit larger than the expected number of subgoals (e.g., $2$ or $3$ for this task) since we expect a great proportion of the subgoals that SDN learns are useful, but not necessary for all of them. As long as SDN discovers useful subgoals that guide the agent to learn policies faster, it is beneficial for HRL training, even if some non-perfect subgoals are found. During the HRL training, we use the learned SDN to propose subgoal-completion queries. In our experiment, we set the maximum turn $L=60$. We collected $N=1634$ successful, but imperfect, dialogue episodes from the rule-based agent in Table~\ref{tab:results} and randomly choose $80\%$ of these dialogue state trajectories for training SDN. The remaining $20\%$ were used as a validation set As illustrated in Section~\ref{sec:HRL_SDN}, SDN starts a new RNN1 instance and issues a subgoal-completion query when the probability of outputting the termination symbol \# is above a certain threshold $p$ (as in Algorithm~\ref{algo:hrl_training_subgoals}). In our experiment, $p$ is set to be 0.2, which was manually picked according to the termination probability during SDN training. In dialogue policy learning, for the baseline RL agent, we set the size of the hidden layer to $80$. For the HRL agents, both top-level and low-level dialogue policies have a hidden layer size of $80$. RMSprop was applied to optimize the parameters. We set the batch size to be $16$. During training, we used $\epsilon$-greedy strategy for exploration with annealing and set $\gamma=0.95$. For each simulation epoch, we simulated $100$ dialogues and stored these state transition tuples in the experience replay buffers. At the end of each simulation epoch, the model was updated with all the transition tuples in the buffers in a batch manner. \begin{figure}[t!] \centering \includegraphics[width=\linewidth]{./success_rate_simulation_curves_v2.pdf} \vspace{-5mm} \caption{Learning curves of agents under simulation.} \label{fig:sim_results} \end{figure} \begin{table}[t!] \begin{center} \begin{tabular}{cccc} \\ \hline Agent & Success Rate & Turns & Reward \\ \hline Rule & .3220 & 46.23 & -24.02 \\ RL & .4440 & 45.50 & -1.834 \\ h-HRL & .6485 & 44.23 & 35.32 \\ m-HRL & .6455 & 44.85 & 34.77 \\ \hline \end{tabular} \end{center} \caption{Performance of agents with simulated user.} \label{tab:results} \end{table} \subsection{Simulated User Evaluation} \label{sec:sim_user_eval} In the composite task-completion dialogue scenario, we compared the proposed \textit{m-HRL} agent with three baseline agents in terms of three metrics: success rate\footnote{Success rate is the fraction of dialogues which accomplished the task successfully within the maximum turns.}, average rewards and average turns per dialogue session. Figure~\ref{fig:sim_results} shows the learning curves of all four agents trained against the simulated user. Each learning curve was averaged over $5$ runs. Table~\ref{tab:results} shows the test performance where each number was averaged over $5$ runs and each run generated $2000$ simulated dialogues. We find that the HRL agents generated higher success rates and needed fewer conversation turns to achieve the users' goals than the rule-based agent and the flat RL agent. The performance of the m-HRL agent is tied with that of the h-HRL agent, even though the latter requires high-quality subgoals designed by human experts. \begin{figure}[t!] \centering \includegraphics[width=\linewidth]{./user_study_success_rate.pdf} \vspace{-5mm} \caption{Performance of three agents tested with real users: success rate, number of dialogues and p-value are indicated on each bar (difference in mean is significant with $p <$ 0.05).} \label{fig:user_success_rate} \end{figure} \subsection{Human Evaluation} We further evaluated the agents that were trained on simulated users against real users, who were recruited from the authors' organization. We conducted a study using the one RL agent and two HRL agents \{RL, h-HRL, m-HRL\}, and compared two pairs: \{RL, m-HRL\} and \{h-HRL, m-HRL\}. In each dialogue session, one agent was randomly selected from the pool to interact with a user. The user was \emph{not} aware of which agent was selected to avoid systematic bias. The user was presented with a goal sampled from a user-goal corpus, then was instructed to converse with the agent to complete the given task. At the end of each dialogue session, the user was asked to give a rating on a scale from $1$ to $5$ based on the naturalness and coherence of the dialogue; here, $1$ is the worst rating and $5$ the best. In total, we collected $196$ dialogue sessions from $10$ human users. Figure~\ref{fig:user_success_rate} summarizes the performances of these agents against real users in terms of success rate. Figure~\ref{fig:user_rating} shows the distribution of user ratings for each agent. For these two metrics, both HRL agents were significantly better than the flat RL agent. Another interesting observation is that the m-HRL agent performs similarly to the h-HRL agent in terms of success rate in the real user study as shown on Figure~\ref{fig:user_success_rate}. Meanwhile in Figure~\ref{fig:user_rating}, the h-HRL agent is significantly better than m-HRL agent in terms of real user ratings. This may be caused by the probabilistic termination of subgoals: we used a threshold strategy to decide whether to terminate a subgoal. This could introduce variance so the agent might not behave reasonably compared with human-defined subgoals which terminate deterministically. \begin{figure}[t!] \centering \includegraphics[width=\linewidth]{./user_study_rating.pdf} \vspace{-5mm} \caption{Distribution of user ratings for three agents in human evaluation} \label{fig:user_rating} \end{figure} \subsection{Subgoal Visualization} \label{sec:subgoal_vis} Table~\ref{tab:dialog_vis} shows the subgoals discovered by SDN in a sample dialogue by a rule-based agent interacting with the simulated user. The rule-based agent is equipped with a human-defined subtask structure, which always solves subtask \emph{flight} (turns 1--15) before \emph{hotel} (turns 16--23), as shown in the first column. At turn 10, the user starts to talk about \emph{hotel} while the rule-based agent is still working on the pre-defined, unfinished \emph{flight} subtask until subtask \emph{flight} is finished at turn 15. At turn 16, the user switches to \emph{hotel}, and so does the rule-based agent until the end of the dialogue. For this rule-based agent, the human-defined subgoal (\emph{flight}) terminates at turn 15. Meanwhile, our SDN model detected two subgoals (except for the final goal): one terminating at turn 9 (Subgoal 1), and another terminating at turn 15 (Subgoal 2). Subgoal 2 is consistent with the human-defined subgoal. Subgoal 1 is also reasonable since the user tries to switch to \emph{hotel} at turn 10. In Appendix~\ref{sec:app_samples}, Table~\ref{tab:sample_dialogue} shows a sample dialogue session by m-HRL agent interacting with a real user. \begin{table}[t!] \scriptsize \centering \begin{tabular}[t]{l|l|l} \hline h-Task & ID & \multicolumn{1}{c}{Text} \\ \hline \multirow{16}{*}{\begin{sideways}Flight\end{sideways}} & 1 & \textit{agt}: Hi, may I know your departure city? \\ & 2 & \textit{usr}: I depart from Fort Lauderdale. \\ & 3 & \textit{agt}: And what is your destination city? \\ & 4 & \textit{usr}: I need economy seat to Pittsburgh. \\ & 5 & \textit{agt}: What is your departure date? \\ & 6 & \textit{usr}: 2016-09-12, what time is available for the flight? \\ & 7 & \textit{agt}: There is one flight at 21:00:00. \\ & 8 & \textit{usr}: What is the price for that flight? \\ & 9 & \textit{agt}: \$423. \\ & \multicolumn{2}{c}{\textbox{blue} {Subgoal 1: A detected subgoal terminates!} } \\ & 10 & \textit{usr}: I need to reserve a hotel for five people. \\ & 11 & \textit{agt}: What is your return date for the trip? \\ & 12 & \textit{usr}: 2016-09-18, I need to book 5 tickets for the flight. \\ & 13 & \textit{agt}: What time do you prefer for return flight? \\ & 14 & \textit{usr}: What time is available that day? \\ & 15 & \textit{agt}: There is one flight departing at 11:00:00AM. \\ \cline{1-1} & \multicolumn{2}{c}{\textbox{blue} {Subgoal 2: A detected subgoal terminates!} } \\ \multirow{7}{*}{\begin{sideways}Hotel\end{sideways}} & 16 & \textit{usr}: What hotel is available? \\ & 17 & \textit{agt}: There is one hotel called Winter Hill Inn available. \\ & 18 & \textit{usr}: What is the price for hotel? \\ & 19 & \textit{agt}: \$410. \\ & 20 & \textit{usr}: Great, could you book the trip for me? \\ & 21 & \textit{agt}: Okay. Trip tickets are issued. \\ & 22 & \textit{usr}: Thanks very much! \\ & 23 & \textit{agt}: You are welcome! \\ \hline \end{tabular} \caption{Discovered subgoals (except for the final goal) in a sample dialogue by a rule-based agent interacting with user simulator. The left column (h-Task) shows the human-defined subtasks for the rule-based agent. SDN detects two subgoals that terminate at turn 9 and 15 respectively. (h-Task: human-defined subtask, ID: turn ID, \textit{agt}: Agent, \textit{usr}: User)} \label{tab:dialog_vis} \end{table} \section{Related Work} \label{sec:related_work} Task-completion dialogue systems have attracted numerous research efforts, and there is growing interest in leveraging reinforcement learning for policy learning. One line of research is on single-domain task-completion dialogues with flat deep reinforcement learning algorithms such as DQN~\cite{zhao2016towards,li2017end,peng2018integrating}, actor-critic~\cite{peng2017adversarial,liu2017iterative} and policy gradients~\cite{williams2017hybrid,liu2017end}. Another line of research addresses multi-domain dialogues where each domain is handled by a separate agent~\cite{gavsic2015policy,gavsic2015distributed,cuayahuitl2016deep}. Recently, \citet{peng2017composite} presented a composite task-completion dialogue system. Unlike multi-domain dialogue systems, composite tasks introduce inter-subtask constraints. As a result, the completion of a set of individual subtasks does \textit{not} guarantee the solution of the entire task. \citet{cuayahuitl10evaluation} applied HRL to dialogue policy learning, although they focus on problems with a small state space. Later, \citet{budzianowski2017sub} used HRL in multi-domain dialogue systems. \citet{peng2017composite} first presented an HRL agent with a global state tracker to learn the dialogue policy in the composite task-completion dialogue systems. All these works are built based on subgoals that were pre-defined with human domain knowledge for the specific tasks. The only job of the policy learner is to learn a hierarchical dialogue policy, which leaves the subgoal discovery problem unsolved. In addition to the applications in dialogue systems, subgoal is also widely studied in the linguistics research community~\cite{allwood2000activity,linell2009rethinking}. In the literature, researchers have proposed algorithms to automatically discovery subgoals for hierarchical RL. One large body of work is based on analyzing the spatial structure of the state transition graphs, by identifying bottleneck states or clusters, among others~\cite{stolle2002learning,mcgovern2001automatic,mannor2004dynamic,simsek05identifying,entezari2011subgoal,bacon2013bottleneck}. Another family of algorithms identifies commonalities of policies and extracts these partial policies as useful skills~\citep{thrun94finding,pickett02policyblocks,brunskill14pac}. While similar in spirit to ours, these methods do not easily scale to continuous problems as in dialogue systems. More recently, researchers have proposed deep learning models to discover subgoals in continuous-state MDPs~\citep{bacon17option,machado17laplacian,vezhnevets17feudal}. It would be interesting to see how effective they are for dialogue management. Segmental structures are common in human languages. In the NLP community, some related research on segmentation includes word segmentation~\cite{gao2005chinese,zhang2016transition} to divide the words into meaningful units. Alternatively, topic detection and tracking~\cite{allan1998topic,sun2007topic} segment a stream of data and identify stories or events in news or social text. In this work, we formulate subgoal discovery as a trajectory segmentation problem. Section~\ref{sec:subgoal_discovery} presents our approach to subgoal discovery which is inspired by a probabilistic sequence segmentation model~\cite{wang2017sequence}. \section{Discussion and Conclusion} \label{sec:conclusion} We have proposed the Subgoal Discovery Network to learn subgoals automatically in an unsupervised fashion without human domain knowledge. Based on the discovered subgoals, we learn the dialogue policy for complex task-completion dialogue agents using HRL. Our experiments with both simulated and real users on a composite task of travel planning, show that an agent trained with automatically discovered subgoals performs competitively against an agent with human-defined subgoals, and significantly outperforms an agent without subgoals. Through visualization, we find that SDN discovers reasonable, comprehensible subgoals given only a small amount of suboptimal but successful dialogue state trajectories. These promising results suggest several directions for future research. First, we want to integrate subgoal discovery into dialogue policy learning rather than treat them as two separate processes. Second, we would like to extend SDN to identify multi-level hierarchical structures among subgoals so that we can handle more complex tasks than those studied in this paper. Third, we would like to generalize SDN to a wide range of complex goal-oriented tasks beyond dialogue, such as the particularly challenging Atari game of Montezuma's Revenge~\cite{kulkarni2016hierarchical}. \section*{Acknowledgments} We would like to thank the anonymous reviewers, members of the xlab at the University of Washington, and Chris Brockett, Michel Galley for their insightful comments on the work. Most of this work was done while DT, CW \& LL were with Microsoft. \subsection{Hierarchical Dialogue Policy Learning} \label{sec:policy_learning} Before describing how we use a trained SDN model for HRL, we first present a short review of HRL for a task-oriented dialogue system. \iffalse Suppose we have a dialogue session of $T$ turns: $\tau=(s_0,a_0,r_0,\ldots,s_{T-1},a_{T-1},r_{T-1},s_T)$. The Q-learning algorithm is inspired by the Bellman equation, which essentially asserts that \[ Q^*(s_t,a_t) \approx r_t + \gamma \max_a Q^*(s_{t+1},a)\,, \] where $Q^*(s,a)$ measures the maximum discounted reward received on average by taking action $a$ in state $s$ and then following an optimal policy thereafter. \fi Following the \emph{options} framework~\cite{sutton1999between}, assume that we have a state set $\mathcal S$, an option set $\mathcal G$ and a finite primitive action set $\mathcal A$. The HRL approach we take learns two Q-functions~\cite{peng2017composite}, parameterized by $\theta_e$ and $\theta_i$, respectively: \begin{itemize}[noitemsep,leftmargin=*,topsep=0pt] \item{The top-level $Q^*(s,g;\theta_e)$ measures the maximum total discounted \emph{extrinsic} reward received by choosing subgoal $g$ in state $s$ and then following an optimal policy. These extrinsic rewards are the objective to be maximized by the entire dialogue policy.} \item{The low-level $Q^*(s,a,g;\theta_i)$ measures the maximum total discounted intrinsic reward received to achieve a \emph{given} subgoal $g$, by choosing action $a$ in state $s$ and then following an optimal option policy. These intrinsic rewards are used to learn an option policy to achieve a given subgoal.} \end{itemize} \vspace{2mm} Suppose we have a dialogue session of $T$ turns: $\tau=(s_0,a_0,r_0,\ldots,s_T)$, which is segmented into a sequence of subgoals $g_0, g_1, \ldots \in \mathcal G$. Consider one of these subgoals $g$ which starts and ends in steps $t_0$ and $t_1$, respectively. The top-level Q-function is learned using Q-learning, by treating subgoals as temporally extended actions: \[ \theta_e \leftarrow \theta_e + \alpha \cdot \left(q - Q(s_t,g;\theta_e)\right) \cdot \nabla_{\theta_e} Q(s_t,g;\theta_e)\,, \] where \[ q = \sum_{t=t_0}^{t_1-1} \gamma^{t-t_0} r_t^e + \gamma^{t_1-t_0} \max_{g'\in\mathcal G} Q(s_{t_1},g';\theta_e)\,, \] and $\alpha$ is the step-size parameter, $\gamma\in[0, 1]$ is a discount factor. In the above expression of $q$, the first term refers to the total discounted reward during fulfillment of subgoal $g$, and the second to the maximum total discounted after $g$ is fulfilled. The low-level Q-function is learned in a similar way, and follows the standard Q-learning update, except that intrinsic rewards for subgoal $g$ are used. Specifically, for $t = t_0, t_0+1,\ldots, t_1-1$: \vspace{-2mm} {\small \[ \theta_i \leftarrow \theta_i + \alpha \cdot \left(q_t - Q(s_t,a_t,g;\theta_e)\right) \cdot \nabla_{\theta_i} Q(s_t,a_t,g;\theta_i)\,, \]} where \[ q_t = r_t^i + \gamma \max_{a' \in \mathcal A} Q(s_{t+1},a',g;\theta_i)\,. \] Here, the intrinsic reward $r_t^i$ is provided by the internal critic of dialogue manager. More details are in Appendix~\ref{app:hrl}. In hierarchical policy learning, the combination of the extrinsic and intrinsic rewards is expected to help the agent to successfully accomplish a composite task as fast as possible while trying to avoid unnecessary subtask switches. Hence, we define the extrinsic and intrinsic rewards as follows: \paragraph{Extrinsic Reward.} Let $L$ be the maximum number of turns of a dialogue, and $K$ the number of subgoals. At the end of a dialogue, the agent receives a positive extrinsic reward of $2L$ for a success dialogue, or $-L$ for a failure dialogue; for each turn, the agent receives an extrinsic reward of $-1$ to encourage shorter dialogues. \paragraph{Intrinsic Reward.} When a subgoal terminates, the agent receives a positive intrinsic reward of $2L/K$ if a subgoal is completed successfully, or a negative intrinsic reward of $-1$ otherwise; for each turn, the agent receives an intrinsic reward $-1$ to encourage shorter dialogues.
{ "attr-fineweb-edu": 2.0625, "attr-cc_en_topic": 0, "domain": "arxiv" }
BkiUeO84dbghfTKFPEBS
\section{Introduction} \begin{table}[t] \centering\scalebox{0.71}{ \begin{tabular}{|p{2.9cm}@{\hskip3pt}|@{\hskip3pt}p{7.cm}|} \hline Context & [FEMALE] was on a long road trip . \\ \hline {\small \citet{guan2020knowledge}} & she had just driven into town . she had been driving for hours . she was so tired she decided to take a nap . when she woke up , she still had n't slept well . \\ \hline {\textcolor{black}{\textsc{Megatron-Cntrl-124m}}} {\color{blue}{keyword: driving}} & {she was on her way to see a strange country. she had never seen anything like that before. finally she saw a small town that was so small. it was a great little town with a nice park !} \\ \hline {\textcolor{black}{\textsc{Megatron-Cntrl-8b}}} {\ \ \ \ \ \ \ \ \ \ \ \ \ } {\color{blue}keyword: driving} & she was driving in the middle of [PLACE]. all of a sudden the tire pressure light came on. she got out to check her tire. it was flat so she used the roadside assistance. \\ \hline {\textcolor{black}{\textsc{Megatron-Cntrl-8b}}}-{\textsc{Ant}} {\color{blue}keyword: attract} & she really wanted to see a few attractions. the first one she saw was a giant water park. it was amazing. it ended up being a fun experience. \\ \hline \end{tabular}} \caption{\label{tab:example_0} Stories generated by models of increasing capacity and controllability. As the model size grows, story quality becomes increasingly coherent, fluent, and logically consistent. The last row demonstrates how {\textcolor{black}{\textsc{Megatron-Cntrl-8b}}-{\textsc{Ant}}} model controls the story generation with a new keyword, ``attract". Note that {[MALE] and [FEMALE] denote names and [PLACE] denotes locations.} } \vspace{-4mm} \end{table} Text generation has recently attracted significant attention from the research community as large pre-trained language models, such as {\textsc{Gpt-2}} \cite{radford2018improving, radford2019language} demonstrated promising results for generating long, grammatically correct, and fluent text. Finetuning these models has shown significant improvements in downstream tasks, such as persona chat~\cite{wolf2019transfertransfo}. However, one non-negligible drawback of these large models is the lack of knowledge which humans use to produce natural text. For example, {\textsc{Gpt-2}} based models produce degraded generations that are illogical and ungrammatical for knowledge-driven generation tasks, such as story generation. \citet{guan2020knowledge} therefore introduced commonsense knowledge to the pre-trained language model by further finetuning on commonsense datasets. Although implicit encoding of knowledge is helpful for knowledge incorporation, there is still a lack of training mechanism to teach the model when and what to incorporate from external knowledge. \begin{figure*}[t] \begin{center} \scalebox{0.8}{ \includegraphics[width=\linewidth]{figures/framework.pdf} } \end{center} \vspace{-2mm} \caption{Overview of our generation process. Based on an input context, we generate keywords for future context, use the keywords to retrieve the relevant knowledge from an external knowledge-base, filter them based on their relevance to the context, and use the top scored knowledge sentences to guide the generation.} \label{fig:sys_diagram} \vspace{-3mm} \end{figure*} In addition, these large pre-trained language models are hard to control. Recently, plug-and-play language models ~\citet{dathathri2019plug} addressed whole document controllability by adding a linear classifier on top of {\textsc{Gpt-2}} to predict whether generated text observes a particular style or property. \citet{keskar2019ctrl} controlled a 1.2B parameter language model generation via the use of control codes prepended to the model input. \citet{boyd2020large} controlled the personality of a dialogue agent by conditioning it on prior conversations of a target actor. However, these controlling conditions are predefined, limited in their capability, and are only used once at the beginning to condition the generation of the rest of the document. They do not provide control granularity at either a sentence or sub-document level. In this work, we address these shortcomings and develop an efficient controllable text generation framework that we apply to the story generation task. In order to provide manual control to users through a set of interpretable keywords, we first develop a keyword \textcolor{black}{predictor} model for the next sentence. These keywords are then used to retrieve knowledge sentences from an external knowledge base. Not all the retrieved knowledge is relevant to the story context and often it is noisy. To this end, we introduce a novel contextual ranker that ranks knowledge sentences based on the relevance to the context. As we do not have access to ground-truth supervision for this contextual knowledge ranker, we make use of sentence embedding for weak supervision. The top-ranked knowledge sentences from the knowledge ranker are then fed to the conditional text generator to guide generation. By giving the knowledge in addition to the context, we provide rich information for the generator to attend to and help the model better understand the rationale between sentences. Table \ref{tab:example_0} shows an example of controllable story generation with increasing model capacity. \paragraph{\bf Summary of Contributions:} \begin{itemize}[leftmargin=*] \item We propose a novel generation framework that allows dynamical incorporation of external knowledge into language model as well as control for text generation. \item Using both automatic metrics as well as human evaluations, we demonstrate that our model generates more fluent, consistent, and coherent stories with lower repetition rate and higher diversities compared to the previous state-of-the-art on {\textsc{ Roc}} story datasets \cite{mostafazadeh2016story}. \item We showcase the controllability of our model by replacing the keywords used to generate stories. Human evaluation results show that up to 91.5\% of the generated stories are successfully controlled by the new keywords . \item We scale our model from 124 million to 8.3 billion parameters and demonstrate that both qualities, as well as controllability of the generations, improve as the model size increases. \end{itemize} \section{Framework} \label{sec:framework} In our problem setup, we complete a story using the first sentence as input, similar to \citet{guan2020knowledge}. We augment the generation process with an external knowledge-base and develop a methodology that can guide and control the story generation. Our approach consists of the following four steps connected together as shown in Figure \ref{fig:sys_diagram}: \begin{enumerate} \item Given the story context, a keyword \textcolor{black}{predictor} model first predicts a set of keywords for the next sentence yet to be generated. \item A knowledge retriever then takes the generated keywords and queries an external knowledge-base where each knowledge triple is converted into natural language ``knowledge sentences" using templates. \item A contextual knowledge ranker then ranks the external knowledge sentences based on their relevance to the story context. \item Finally, a generator takes both the story context as well as the top-ranked knowledge sentences as input and generates the next sentence in the story. The output sentence is appended to the story context and steps 1-4 are repeated. \end{enumerate} This formulation naturally allows controllability by replacing the keyword \textcolor{black}{prediction} process with manual external keywords. \textcolor{black}{This work uses dynamic planning of the keywords and knowledge at each generation step. This allows the users to participate and control the generation on the go. As a result, they don't need to pre-specify the keywords explicitly. We also note that it is challenging to statically plan all the knowledge needed for generation at the beginning. This issue becomes severe for long generations.} To formalize this method, we start by introducing notation used throughout the paper and then detail each aforementioned four steps in the following subsections. {\bf Notation:} A knowledge-base, ${\KNOWBASE}$ is defined as a set of knowledge triples $t=(\text{subject}, \text{relation}, \text{object})$. A knowledge sentence, $r$ is defined as $r=T(t)$ by mapping $t$ using predefined templates $T$. For example, \textit{(eiffel tower, AtLocation, paris)} is transformed into {\it eiffel tower is at paris}. \textcolor{black}{We should highlight that since our framework transforms the triple knowledge database into natural language sentences, any knowledge base in natural language format can be readily incorporated into our framework.} We use superscripts to index \textit{story sentences} and define a story $S$ of length $l$ as a sequence of individual story sentences $s^i$ where $S={\{s^1, s^2, \cdots, s^l}\}$. We use $K^i=\{k_1^i, \cdots, k_q^i\}$ to denote the keywords associated with story sentence $s^i$. A keyword $k_q^i$ is made up of subword tokens from our language model's vocabulary. Note that the number of keywords $q$ per sentence varies and can be zero. We define $R^i=\{r_1^i, \cdots, r_v^i\}$ as the knowledge associated with $s^i$, where $r_j^i$ denotes the $j$-th \textit{knowledge sentence} associated $s^i$. The number of knowledge sentences $v$ varies per sentence and can be zero. Note that $v\neq q$ because a keyword can have multiple knowledge triples associated with it. Given this notation, we define the story context $X^i=\{x^1, \cdots, x^i\}$ where $x^i=[R^i, s^i]$. The goal of this work is to generate $x^i$ given $X^{i-1}$, that is to first predict the knowledge $R^i$ contained in $s^i$ and then predict $s^i$ itself. \subsection{Keyword \textcolor{black}{Predictor} Model} \label{sec:key-gen} To provide manual control to users, we first develop a keyword \textcolor{black}{predictor} model. Given the current story context $X^{i-1}$, the model predicts a set of keywords $K^i$ for the next sentence yet to be generated. The prediction of keywords instead of directly predicting knowledge triples not only allows us to control the generation in an interpretable manner, but it also helps to greatly reduce the search space for the knowledge triples. We formulate this keyword \textcolor{black}{prediction} problem similar to a left-to-right language model where the goal is to predict the string of concatenated keywords: \begin{equation} p(K^i | X^{i-1}) = \prod_{j=1}^q p(k_j^i | X^{i-1}, K^i_{<j}), \end{equation} where $K_{<j}$ denotes all the predicted keywords up to the $j$th keyword and $p$ is the probability distribution. We use a {\textsc{Gpt-2}} \cite{radford2019language} transformer to model this probability distribution. We optimize the keyword \textcolor{black}{predictor} with maximum likelihood training and a next token prediction loss. Following \citet{yao2019plan}, we provide the labels for $K^i$ by extracting keywords from a ground truth training sentence $s^i$ using the {\textsc{Rake}} algorithm \cite{rose2010automatic} to train our keyword \textcolor{black}{predictor. Note that our model allows generation of multiple keywords and thus provides the flexibility to choose a subset of them as the control signal to fit in the generation.} \subsection{Knowledge Retrieval} \label{sec:know-ret} In this step, we use the generated keywords $K^i$ in Section \ref{sec:key-gen} and retrieve all the related knowledge triples from our knowledge base $G$. This is simply done by converting all knowledge triples into knowledge sentences using predefined templates and then matching keywords against the knowledge sentences. This results in the knowledge set $\hat{R}^i=\{\hat{r}_1^i, \cdots, \hat{r}_z^i\}$ with size $z$. Future work will focus on replacing this simple retrieval with a learnable module similar to \citet{REALM}. \begin{algorithm}[h] {\fontsize{10.1pt}{10.1pt}\selectfont \caption{Building Pseudo Label of $R^i$} \label{alg:kb_retrieval} \textbf{Input:} Story sentence $s^i$ and its preceding sentence $s^{i-1}$, {\textsc{Use}} encoder $U$, {\textsc{Rake}} keywords extractor, and knowledge base ${\KNOWBASE}$ \\ \textbf{Output:} Pseudo Label of $R^i$ \begin{algorithmic}[1] \State Extract keywords $K^i$ from $s^i$ using {\textsc{Rake}} \State Find $\bar{R}=\{T(t) | \KNOWTRIP \in \KNOWBASE$ and $\exists \KEYWORD^i_j \in K^i$, s.t. $\KEYWORD^i_j \in \KNOWTRIP\}$ \State Encode each $\bar{r}_j \in \bar{R}$ to $U_j^r$ using {\textsc{Use}} \State Encode $[s_{i-1}, s_i]$ to $U^s$ \State Compute cosine similarity $score$ between each $U_j^r$ and $U^s$ \State \textbf{return} $\bar{r}_j$s with the top $N$ highest $score$ \end{algorithmic} } \end{algorithm} \subsection{Building Pseudo Label of $R^i$} \label{sec:pseduolabel} The main challenge for controlling generation with knowledge is that we have no explicit access to the hidden, latent controlling knowledge humans use to supervise their story writing. That means $R^{i}$, the knowledge associated with $s^{i}$ is not available. We, therefore, propose to use a weakly supervised signal to build the pseudo labels of $R^i$ from $s^i$. We hypothesize that $R^i$ should 1) overlap with $s^{i}$ in terms of keywords and 2) have strong connections to both the preceding sentence $s^{i-1}$ and $s^i$. \textcolor{black}{We include $s^{i-1}$ along with $s^i$ because it is hard to retrieve appropriate knowledge using only $s^{i}$ due to the ambiguity of natural language. We also did not include other previous context beyond $s^{i-1}$ as additional context overwhelms the information contained in $s^{i}$.} Following our hypothesis, we first extract keywords $K^i$ from $s^i$ using {\textsc{Rake}} \cite{rose2010automatic} and then match $K^i$ with all knowledge triples in ${\KNOWBASE}$. Transforming the retrieved triples into knowledge sentences gives us our set of $\Bar{R}^i$. We then take the sentence $s^i$ and $s^{i-1}$, concatenate them, and encode them using the Universal Sentence Encoder ({\textsc{Use}}) \cite{cer2018universal}, a widely-used toolkit for semantic similarity, $U^s=U([s^{i-1}, s^i])$, where we denote the encoder of {\textsc{Use}} as $U$. For each $\bar{r}_j^i\in\bar{R}^i$, we then calculate the cosine similarity between $U^s$ and $U^r_j=U(\bar{r}_j)$ and sort $\bar{R}^i$ based on this score. We take the top $N$ highest scores $\bar{r}_j^i$ as a pseudo label of $R^i$. Algorithm \ref{alg:kb_retrieval} describes this process. During the training phase of each following model, we use this pseudo label of $R^i$ to represent $R^i$. \subsection{Contextual Knowledge Ranker} \label{sec:contextual_ranker} While knowledge retrieval with keywords reduces the controlling knowledge candidate space from the knowledge base $\KNOWBASE$ to the subset $\hat{R}^i$, this set is still large and noisy since words are ambiguous and can have multiple senses. We, therefore, contextualize the knowledge sentences in $\hat{R}^i$ to obtain relevant and useful ones under $X^{i-1}$. To do this, we develop a contextual knowledge ranker. The model is trained with pseudo-labels extracted with access to the future sentence $s^i$ as described in Sec. \ref{sec:pseduolabel}. We use a {\textsc{ Bert}} model to encode both the context $X^{i-1}$ and each knowledge sentence $\hat{r}_j^i\in\hat{R}^i$. To adapt to the format of {\textsc{ Bert}}, we append a [SEP] token to each $R^j$ and $s^j$ inside the context $X^{i-1}$. A [CLS] token is then added to the beginning of $X^{i-1}$. For segment ids, we mark the tokens from the knowledge base as 0 and those from the story as 1. The representation of $X^{i-1}$ and $\hat{r}^i_j$ are then obtained after applying a linear layer on top of the embedding of the [CLS] token: \begin{align*} V_x &= W_1 \, \text{\textsc{ Bert}}_{\text{CLS}}(X^{i-1}), \\ V_j &= W_2 \, \text{\textsc{ Bert}}_{\text{CLS}}(\hat{r}_j^i), \end{align*} where $W_1$ and $W_2$ are learnable weights. We then calculate the relevance {\it score} $C$ between $X^{i-1}$ and $\hat{r}^i_j$ using the inner product between $V_x$ and $V_j$ as : \begin{equation} C_j^i = C(X^{i-1}, \hat{r}^i_j) = V_x V_j \end{equation} We take $R^i$ (Sec. \ref{sec:pseduolabel}) as positive samples and $\hat{R}^i \backslash R^i$ as negative samples to train our ranker. Given a positive and a negative knowledge sentence $r_p$ and $r_n$, we define the ranking loss as {\small \begin{equation} \label{eq:loss} L = \max\{0, M-C(X^{i-1}, r_p) + C(X^{i-1}, r_n)\} \end{equation}} where $M$ is a margin and determined empirically. Algorithms \ref{alg:knowledge_ranking} describe the ranker training process. At inference time, we simply calculate $C_j$ for all $\hat{r}_j^i\in \hat{R}^i$, sort them based on $C_j^i$ score and pick the top $N$ most relevant knowledge sentences as $R^i$. \begin{algorithm}[h] {\fontsize{10.1pt}{10.1pt}\selectfont \caption{Knowledge Ranker Training} \label{alg:knowledge_ranking} \textbf{Parameters:} {\textsc{ Bert}} model parameters $\Theta$ and ranker model parameters $W_1$ and $W_2$ \\ \textbf{Input:} A story $S^l$ with $l$ sentences and a knowledge base ${\KNOWBASE}$ \begin{algorithmic}[1] \State Initialize $\Theta$ using a pre-trained {\textsc{ Bert}} model and $W_1, W_2$ randomly. \State Dataset $D = \emptyset$ \State Call Algorithm \ref{alg:kb_retrieval} to retrieve $\RetKnowTrip^{1}$ from ${\KNOWBASE}$ using $s^1$. \For{$i \in \{2, \ldots, l\}$ } \State Call Algorithm \ref{alg:kb_retrieval} to retrieve $\RetKnowTrip^{i}$ using $s^i$. \State Get $\hat{R}^i$ using knowledge retrieval (Section \ref{sec:know-ret}) \For{$j \in \{1, \ldots, N\}$ } \State Sample $r_p$ from $R^i$ and $r_n$ from $\hat{R}^i \backslash R^i$ \State $D = D \cup (X^{i-1}, r_p, r_n)$ \EndFor \EndFor \For{ $(X, r_p, r_n) \in D$ } \State Calculate loss $L$ using Equation \ref{eq:loss} \State Optimize $\text{\textsc{ Bert}}, W_1, W_2$ \EndFor \State \textbf{return} $\text{\textsc{ Bert}}, W_1, W_2$ \end{algorithmic} } \end{algorithm} \subsection{Conditional Generator} \label{sec:cond-gen} The conditional generator is a language model that incorporates the controlling knowledge and generates the following sentences. It concatenates the story context $X^{i-1}$ and controlling knowledge $R^i$ as input and generates $s^i$. A {\textsc{Gpt-2}} transformer is used to model this conditional probability distribution. We describe the concatenated input representation in the Appendix \ref{subsec:input-representations}. \section{Experimental Setup} \subsection{Datasets} \label{subsec:datasets} We use the {\textsc{ Roc}} story dataset \cite{mostafazadeh2016story} for our experiments. It consists of 98,161 stories, where each story contains five sentences. 88,344/4,908/4,909 stories are used for train/validation/test sets, respectively. Following \citet{guan2020knowledge}, for each sentence, delexicalization is performed by replacing all the names and entities in stories with special placeholders, [{\it MALE}], [{\it FEMALE}], and [{\it NEUTRAL}] for male, female and unknown names and entities, respectively. Given the first sentence of each story, our model's task is to generate the rest of the story. For our external knowledge base, we use ConceptNet \cite{speer2012representing}, consists of 600k knowledge triples. \subsection{Models} We used Megatron-LM \cite{Megatron} for pre-trained {\textsc{ Bert}} and {\textsc{Gpt-2}} models to initialize our contextual knowledge ranker and generative models, respectively. For the model configurations, hidden size, number of layers, and attention heads, we used the configurations of {\textsc{ Bert}} and {\textsc{Gpt-2}} as in Megatron-LM. For generation with our {\textsc{Gpt-2}} models, we used a top-$k$ sampling scheme \cite{fan2018hierarchical} with $k=40$ and a softmax temperature of 0.7. We detail the training hyperparameters and the input representations for {\textsc{Gpt-2}} and {\textsc{ Bert}} in Appendix \ref{subsec:gpt2-hyperparams} \& \ref{subsec:bert-hyperparams} . Both the keyword \textcolor{black}{predictor} and the conditional sentence generator follow the same settings. To train our contextual knowledge ranker, we set the margin to 5.0. We set the number of knowledge sentences in $R^i$ to 10. Therefore, for a given story context, the top 10 retrieved knowledge sentences from ConceptNet according to {\textsc{Use}} are chosen as the positive samples. We further select 40 negative samples to compute our margin loss. We then randomly sample 50 (positive, negative) pairs for each story context to train our contextual knowledge ranker. In total, we used $\sim$15 million pairs for training and $\sim$1 million pairs for validation. After training our ranker, we achieve a validation accuracy of 0.9. \subsection{Controllability Experiment Setup} \label{sec:controllability_setup} To test the controllability of our model, we perform experiments where we change keywords to their antonyms. With antonyms, we expect maximal change to the story generation. To do that, we first use {\textcolor{black}{\textsc{Megatron-Cntrl-124m}}} to generate keywords $K$ and corresponding full story $S$. Then we identify the first keyword $k^i_a\in K^i$ from $K$ whose antonym is available at WordNet \cite{miller1995wordnet}. If multiple antonyms for $k^i_a$ are available we sample one with a uniform random probability. Afterwards, we provide the start of story $\{s^1, s^2, \cdots, s^{i-1}\}$, the keywords shared with our original story $\{K^1, K^2, \cdots, K^{i-1}\}$, and the antonym of $k^i_a$ to either {\textcolor{black}{\textsc{Megatron-Cntrl-124m}}} or larger models (e.g. {\textcolor{black}{\textsc{Megatron-Cntrl-355m}}}). We then let the model finish the generation. We refer to these generations as $\textcolor{black}{\textsc{Megatron-Cntrl-Ant}}$, for example, we call the antonym generations from {\textcolor{black}{\textsc{Megatron-Cntrl-355m}}} model as \textcolor{black}{\textsc{Megatron-Cntrl-355m}}-\textsc{Ant}. \subsection{Baselines} We compare our model with the following state-of-the-art story generation models. (1) \textbf{Plan and write \cite{yao2019plan}:} The authors use an LSTM-based model to first generate a sequence of keywords for planning the story. These keywords are then used to condition the generation. (2) \textbf{Knowledge enhanced {\textsc{Gpt-2}} \cite{guan2020knowledge}:} This work is currently the SOTA for ROC story generation. It finetunes a pre-trained {\textsc{Gpt-2}} model with knowledge triples from commonsense datasets. Similar to our method, the knowledge triples are converted to sentences with templates. A multitask learning framework is then developed to further finetune the story generation task and classify corrupted stories from real ones. We do not compare to \citet{fan2019strategies} because \citet{guan2020knowledge} has already shown their model significantly outperforms \citet{fan2019strategies} and in this work, we compare to \citet{guan2020knowledge}. (3) \textbf{$\textsc{Gpt-2}$-124M:} This baseline finetunes a {\textsc{Gpt-2}} model with a next token prediction loss on the story. \subsection{Evaluation} \label{sec:eval_setup} We conduct both automatic as well as human evaluations to assess our generation. \subsubsection{Automatic Evaluation} We use the following metrics to compare different models: \textbf{Repeat:} measures the redundancy of the generated story by reporting the percentage of the stories that contain at least one repeated 4-gram \cite{shao2019long}. \textbf{Distinct:} measures the diversity of generated stories by reporting the ratio between distinct 4-grams to all generated 4-grams. \textbf{Perplexity:} In the inference phase, our models involve two steps of generation: (i) generate set of knowledge sentences, $R^i$ from story context $X^{i-1}$, (ii) generate story sentence, $s^i$ from $X^{i-1}$ and $R^i$. To report the perplexity of the conditional generator we sample $R^i$ sequentially before generating each story sentence $s^{i}$ and report the total perplexity of all sentences $s^i$ for $i\in[2,l]$ where $l$ is the number of sentences in the story. \subsubsection{Human Evaluation on Quality} We conduct human evaluations on Amazon Mechanical Turk\footnote{https://www.mturk.com/} (AMT) to analyze the quality of our generations on three aspects: {\bf Fluency}, {\bf Coherence}, and {\bf Consistency}. To evaluate fluency, we show the annotators a pair of generated stories from two models. We ask them to evaluate each sentence independently and choose the story with better overall fluency. Fluency of a story is defined as a measure of intra-sentence linguistic quality and grammatical correctness taken over all sentences of the story. For coherence, we provide the same stories as in fluency but ask to choose the one with better inter-sentence causal and temporal dependencies. We let the annotators choose {\it tie} for both fluency and coherence. Different from the settings of fluency and coherence, we only show one generated story to annotators to evaluate consistency. They are required to choose whether the story is logically consistent, based on whether the story self contradicts or not. We set up these three evaluations as independent AMT tasks to make sure the tasks do not influence each other and introduce spurious correlations between labels. To reduce noise in our labeling process, we only accepted workers with an approval rating over 90\% and have over 1k accepted jobs. We further limited the location of the annotators to the United States. For each example, we explicitly ask them to spend at least 15 seconds to evaluate coherency and 10 seconds to evaluate the other two properties. In total, we randomly sample 200 stories and assign five annotators for each story. We adopted majority voting to make final decisions among the five annotators. \subsubsection{Human Evaluation on Controllability} To evaluate how controllable our model is, we conduct another human evaluation just for controllability. We show the annotators the start of a story, original keywords, and the corresponding generation. We then show the antonyms of the keywords, along with the corresponding generated story, and ask the annotators if the new story has changed compared to the original story in accordance with the meaning of the keyword's antonyms. The rest of the AMT settings for these experiments are the same as our consistency experiments. \section{Results} In this section, we first perform automatic and human evaluations with different model sizes and compare our framework to the existing baselines. We then evaluate the controllability of our model and finally show ablation study varying {\textsc{Gpt-2}} and {\textsc{ Bert}} model sizes. The detailed configuration of the model sizes are shown in Table \ref{table:configure}. We provide several generated stories in Appendix \ref{subsec:generation-examples} varying the length of the given context. \textcolor{black}{We use {\textsc{M-Cntrl}} to denote {\textcolor{black}{\textsc{Megatron-Cntrl}}} in the tables due to the limited space.} \begin{table}[!htb] \centering\scalebox{0.8}{ \begin{tabular}{c@{\hskip3pt}cccc} \hline & Conditional & Keyword & Knowledge \\ Model Name\ \ \ \ & Generator & Generator & Ranker \\ & (\textsc{Gpt-2}) & (\textsc{Gpt-2}) & (\textsc{ Bert}) \\ \hline \textsc{M-Cntrl-124m} & 124M & 124M & 336M \\ \textsc{M-Cntrl-355m} & 355M & 355M & 336M \\ \textsc{M-Cntrl-774m} & 774M & 774M & 336M \\ \textsc{M-Cntrl-2b} & 2.5B & 2.5B & 336M \\ \textsc{M-Cntrl-8b} & 8.3B & 2.5B & 336M \\ \hline \end{tabular} } \vspace{-2mm} \caption{\label{table:configure} Number of parameters of our models ({\textsc{M-Cntrl}} is the short form for {\textcolor{black}{\textsc{Megatron-Cntrl}}}).} \vspace{-2mm} \end{table} \subsection{Automatic and Human Evaluations} \begin{table*}[!htb] \centering\scalebox{0.85}{ \begin{tabular}{lcc@{\hskip3pt}r} \hline Source A & Coherence $\uparrow$ & Fluency $\uparrow$ & Source B \\ \hline\hline { \textsc{M-Cntrl-124m}} & {\textbf {78.5\%}} - 13.0\% & {\textbf {66.5\%}} - 22.5\% & { {\citet{YaoPlanAndWrite2018}}} \\ { \textsc{M-Cntrl-124m}} & {\textbf {46.0}}\% - 39.0\% & {\textbf {44.5}}\% - 43.5\% & { \citet{guan2020knowledge}} \\ { \textsc{M-Cntrl-355m}} & {\textbf {56.0}}\% - 30.5\% & {\textbf {46.5}}\% - 30.5\% & { \citet{guan2020knowledge}} \\ \hline { \textsc{M-Cntrl-355m}} & {\textbf {52.0}}\% - 31.5\% & {\textbf {46.5}}\% - 39.0\% & { \textsc{M-Cntrl-124m}} \\ { \textsc{M-Cntrl-774m}} & {\textbf {44.5}}\% - 41.5\% & {\textbf {56.0}}\% - 33.5\% & { \textsc{M-Cntrl-355m}} \\ { \textsc{M-Cntrl-2b}} & {\textbf {50.5}}\% - 30.5\% & {\textbf {53.0}}\% - 39.0\% & { \textsc{M-Cntrl-774m}} \\ { \textsc{M-Cntrl-8b}} & \textbf {46.0}\% - 39.5\% & 46.5\% - {46.5}\% & { \textsc{M-Cntrl-2b}} \\ \hline \end{tabular} } \caption{\label{table:all-allgorithms-human-evaluations} Pairwise comparison between our models and baselines. Percentages in the format "A\% - B\%" indicate how often annotators rank the samples from source A better than from source B for a given category, and vice versa. Percentage pairs do not sum to 100\% as the annotators were allowed to choose ``tie" as being of equal quality. $\textcolor{black}{\textsc{Megatron-Cntrl-124m}}$ achieves better results than all baselines. Scaling the models shows better coherence and fluency. } \vspace{-3mm} \end{table*} \begin{table}[!htb] \centering\scalebox{0.72}{ \begin{tabular}{l@{\hskip3pt}ccc|c} \hline \multirow{2}{*}{Name} & \multirow{2}{*}{PPL $\downarrow$} & \multirow{2}{*}{Repeat $\downarrow$} & \multirow{2}{*}{Distinct $\uparrow$} & Consistency $\uparrow$ \\ & & & & (Human Eval) \\ \hline\hline {\textsc{Gpt-2}-124M} & 6.98 & 27.2 & 74.1 & 69.5 \\ \hline {\small \citet{YaoPlanAndWrite2018}} & NA & {\textbf{13.3}} & 63.7 & 49.0 \\ {\small \citet{guan2020knowledge}} & 7.04 & 22.1 & 77.1 & 67.0\\ \hline \textsc{M-Cntrl-124m} & 9.37 & 20.0 & 80.1 & 74.5 \\ \textsc{M-Cntrl-355m} & 8.02 & 19.9 & 81.6 & 75.5\\ \textsc{M-Cntrl-774m} & 6.58 & 21.3 & 81.6 & 80.5\\ \textsc{M-Cntrl-2b} & 6.31 & 21.2 & 82.6 & 89.0\\ \textsc{M-Cntrl-8b} & {\textbf{6.21}} & 21.2 & {\textbf{82.8}} & {\textbf{93.0}}\\ \hline \end{tabular} } \caption{\label{table:all-allgorithms-baseline-metrics} Evaluation results for the previous state-of-the-art models as well as our algorithm at different sizes. Perplexity, repeat, and distinct are evaluated automatically whereas consistency is obtained using human evaluations. Our smallest model with 124M parameters achieves better distinct and consistency score compared to prior work. Increasing model size up to 8B improves perplexity, distinct, and consistency scores. For reference, the ground truth human writing gives 7.6 score for repeat and 88.9 for distinct.} \end{table} Table \ref{table:all-allgorithms-baseline-metrics} shows that our smallest model, {\textcolor{black}{\textsc{Megatron-Cntrl-124m}}} achieves better distinct and consistency scores compared to previous work. For repetition, our model is worse than \citet{yao2019plan} which was also observed in \citet{guan2020knowledge}. The reason could be their small 8M model is better at learning short term statistics (e.g. 4-grams), while large models are better at learning long term dependencies. Compared to other {\textsc{Gpt-2}} based models (i.e. {\textsc{Gpt-2}-124M} and \citet{guan2020knowledge}), {\textcolor{black}{\textsc{Megatron-Cntrl-124m}}} achieves lower repeat and higher distinct scores, hence our model generates less repetitive stories. We notice from Table \ref{table:all-allgorithms-baseline-metrics} that our perplexity (PPL) score is much higher than other \textsc{Gpt-2}-based models. Our hypothesis for why this occurs is that other {\textsc{Gpt-2}}-based methods directly model and report $P(s^i|s^1, s^2, \cdots, s^{i-1})$ while our conditional generator models and reports $P(s^i| X^{i-1}, R^i)$. When computing perplexity, $[s^1, s^2, \cdots, s^{i-1}]$ are given ground truth tokens, but $R^i$ and all $R$ in $X^{i-1}$ must be sampled from a distribution that is learned with weak supervision. This sampling introduces noise and non-determinism that results in higher perplexity. This discrepancy is not an issue when analyzing automatic evaluation metrics within our model family. When scaling our model from 124M up to 8B parameters we see a consistent drop in PPL and increase in distinct. This shows larger models can generate better stories with more diversity. Human evaluation results are presented in last column of Table \ref{table:all-allgorithms-baseline-metrics} (consistency) and in Table \ref{table:all-allgorithms-human-evaluations}. Comparing {\textcolor{black}{\textsc{Megatron-Cntrl-124m}}} to \citet{yao2019plan}, we achieve much better coherence, fluency, and consistency scores, which shows the benefit of large pre-trained transformer models. Comparing {\textcolor{black}{\textsc{Megatron-Cntrl-124m}}} to \citet{guan2020knowledge} which uses a similar base model, we find that fluency is similar, however we should note that \citet{guan2020knowledge} is not controllable and our model has significantly better coherence (+7.0\%) in Table \ref{table:all-allgorithms-human-evaluations} and consistency (+7.5\%) in Table \ref{table:all-allgorithms-baseline-metrics}. We attribute this to the use of the retrieved knowledge, $R^i$. By explicitly providing facts pertinent to the next sentence, the conditional generative model can focus on just generating text. By comparison, a standard autoregressive {\textsc{Gpt-2}} model is tasked with predicting both the topics and the text of the next sentence. Scaling this up, and comparing {\textcolor{black}{\textsc{Megatron-Cntrl-355m}}} to \citet{guan2020knowledge}, our model significantly outperforms in all aspects. Furthermore, a thorough comparison among {\textcolor{black}{\textsc{Megatron-Cntrl-355m}}}, {\textcolor{black}{\textsc{Megatron-Cntrl-774m}}}, {\textcolor{black}{\textsc{Megatron-Cntrl-2b}}}, {\textcolor{black}{\textsc{Megatron-Cntrl-8b}}} shows that scaling the model size further almost always improves the quality of generation in terms of fluency, coherence and consistency. For consistency, our best model at 8B parameters achieves a score of 93\%. \subsection{Controllability Evaluation} We evaluate the controllability by changing keywords to their antonyms as detailed in Section \ref{sec:controllability_setup} \& \ref{sec:eval_setup}. Table \ref{table:all-allgorithms-control-metrics} shows repeat and distinct for {\textcolor{black}{\textsc{Megatron-Cntrl-124m}}} as well as the controlled versions at three different sizes. Altering control with antonym keywords gives lower repeat and higher distinct scores than the original generation. As the model size increases, the repeat stays almost constant while distinct improves. These results show that changing keywords manually results in distinct and not repeated text. \begin{table}[!htb] \centering\scalebox{0.8}{ \begin{tabular}{l@{\hskip3pt}cccccc} \hline Name & Repeat $\downarrow$ & Distinct $\uparrow$ \\ \hline\hline \hline {\textsc{M-Cntrl-124m} } & 20.0 & 80.1 \\ \hline {\textsc{M-Cntrl-124m}-\textsc{Ant}} & 17.8 & 80.9 \\ {\textsc{M-Cntrl-355m}-\textsc{Ant}} & 18.0 & 81.6 \\ {\textsc{M-Cntrl-8b}-\textsc{Ant}} & 18.5 & 82.8 \\ \hline \end{tabular} } \caption{\label{table:all-allgorithms-control-metrics} Comparing controllability of the models by changing the keywords to their antonyms. Controlled generations show less repetition and higher diversity compared to the original one.} \end{table} Further supporting this hypothesis, evaluation of controllability in Table \ref{table:all-allgorithms-human-evaluations-control} shows that {\textcolor{black}{\textsc{Megatron-Cntrl-124m}}-\textsc{Ant}} achieves a high controllability score of 77.5\%. This means that by changing the keywords to their antonym, 77.5\% of newly generated stories change their semantic content to follow the new antonym keywords. We also show that larger models are better able to leverage keyword control. Scaling the model size from 124M to 355M and 8B model further improves the controllability score to 84.5\% and 91.5\%, respectively. We again observe the quality (e.g. coherence) of our controlled generation improves as the model size scales to 8B. \begin{table}[!htb] \centering\scalebox{0.8}{ \begin{tabular}{l@{\hskip3pt}c@{\hskip3pt}r} \hline Name & Controllability $\uparrow$ \\ \hline\hline {\textsc{M-Cntrl-124m}-\textsc{Ant}} & 77.5\%\\ {\textsc{M-Cntrl-355m}-\textsc{Ant}} & 84.5\% \\ {\textsc{M-Cntrl-8b}-\textsc{Ant}} & \textbf{91.5}\% \\ \hline \end{tabular} } \caption{\label{table:all-allgorithms-human-evaluations-control} Human evaluation for controllability by changing keywords to their antonyms. Over 77\% of our generation changes according to the keywords.} \vspace{-3mm} \end{table} \subsection{Ablation Studies} In this section, we conduct the ablation study on the planning strategy and external knowledge. The study of model size can be found in the Appendix \ref{subsec:ablation_model_size}. \subsubsection{Planning Strategy} \begin{table}[t] \centering\scalebox{0.8}{ \begin{tabular}{l@{\hskip3pt}c@{\hskip3pt}c@{\hskip3pt}c} \hline Name & Repeat $\downarrow$ & Distinct $\uparrow$ \\ \hline\hline {\textsc{M-Cntrl-124m}} (D) & 20.04 & 80.14 \\ {\textsc{M-Cntrl-124m}} w/o knowledge (D) & 23.59 & 79.39 \\ {\textsc{M-Cntrl-124m}} (S) & 23.87 & 79.45 \\ {\textsc{M-Cntrl-124m}} w/o knowledge (S) & 23.98 & 79.61 \\ \hline \end{tabular} } \caption{\label{table:ablation_strategy} \textcolor{black}{Ablation studies of static (S) vs dynamic (D) planning strategy, with and without knowledge.}} \vspace{-3mm} \end{table} \textcolor{black}{ In this section, we investigate the effects of planning strategy in our framework. \citet{yao2019plan} showed that static planning works better than dynamic planning for LSTM-based models. To introduce the static planning in our model, we predicted all the keywords and relevant knowledge sentences from the starting sentence and generated the entire stories. When we compare these generations with the stories generated by dynamic planning, we see in Table \ref{table:ablation_strategy} (first and third rows) that dynamic planning outperforms the static planning strategy with higher distinction (+0.7\%) and lower repetition (-3.8\%) scores. This is due to direct guidance over each sentence provided by the retrieved knowledge from dynamic planning . In contrast, in static planning, the retrieved knowledge sentences are all predicted together at the beginning using only the starting sentence, which makes the supervision for each story sentence weaker and noisier. } \subsubsection{External Knowledge} \textcolor{black}{ In this section, we investigate the importance of retrieved knowledge. Table \ref{table:ablation_strategy} (first and second rows) shows that, when excluding the knowledge from our framework (i.e. {\textcolor{black}{\textsc{Megatron-Cntrl-124m}}} w/o knowledge), distinction score decreases by 0.8\% and repetition increases by 3.6\%, highlighting the importance of external knowledge in our approach. Unlike dynamic planning, we observe that in static planning, the external knowledge does not play an important role in the quality of the generations and using or not using the knowledge leads to similar qualities. This observation also confirms that knowledge needs to be planned dynamically.} \section{Future Work} \textcolor{black}{ Short story sentences in {\textsc{ Roc}} story dataset limits our exploration from several potential research directions. For example, how long the control signal would propagate for longer generations? Investigating this issue using longer story datasets (e.g. {\sc{WritingPrompts}} \cite{fan2018hierarchical}) is a subject for future work. Other interesting direction may include incorporating the structure-level controllability by adding it as either an extra input for the conditional generator or a multitask learning supervision for each sequence.} \textcolor{black}{We also observed that in some cases during the generation, our model simply mentions the given word in the sentence, and talks about things that are not strictly related to or restricted by the given word. For example, the generated story of {\textcolor{black}{\textsc{Megatron-Cntrl-8b}}} in Table \ref{tab: example_15} only mentions the keyword ``realize" instead of centering around it. This is caused by the RAKE keywords extractor, which does not always extract the keywords that represent the sentence well. One way to mitigate this issue is to leverage longer context information to identify better keywords which is subject of the future work. } \section{Related Work} \paragraph{Knowledge} Incorporation of knowledge into language models has shown promising results for downstream tasks, such as factual correct generation \cite{logan2019barack} , commonsense knowledge graph construction \cite{bosselut2019comet}, entity typing \cite{zhang2019ernie} and etc. More recently, several works have shown that inclusion of learned mechanisms for explicit or implicit knowledge can lead to the state-of-the-art results in Question Answering \cite{REALM,DPR,ORQA,RAGS} and dialogue modeling \cite{Blendr}. \paragraph{Storytelling} There are several different storytelling tasks described throughout the literature. Storytelling can be classified into story completion ~\cite{chen2019incorporating}, story ending generation ~\cite{guan2019story}, story generation from prompts~\cite{fan2018hierarchical} or titles ~\cite{yao2019plan}, and story generation from a given sentence ~\cite{guan2020knowledge}. Different approaches have been developed to model the structure of stories with storylines ~\cite{yao2019plan}, skeletons ~\cite{xu2018skeleton}, Conditional Variational AutoEncoders ~\cite{wang2019t} and a coarse-to-fine framework ~\cite{fan2019strategies}. Other works focus on incorporating commonsense knowledge into story generation with attention-based models \cite{guan2019story, chen2019incorporating}. Recently, pre-trained language models have been used to finetune on both story completion datasets and commonsense knowledge to further improve the quality of story completion \cite{guan2020knowledge}. However, few works concern the controllability of language model generation, especially for the large pre-trained models that are common in today's literature. \paragraph{Controllable Generation} Controllable text generation has a wide range of applications, including controlling through persona \cite{zhang2018personalizing,boyd2020large}, politeness \cite{niu2018polite}, etc. \textcolor{black}{\citet{wiseman2018learning} presented controlling generations by learning latent, discrete templates from data. \citet{fu2019rethinking} discovered the importance of pivot words that determines the sentence attributes and presented a lexical analysis framework. To control large pre-trained models,} \citet{keskar2019ctrl} demonstrated the ability to control text generation through a wide range of aspects, such as domains and links. Plug-and-play language models ~\citet{dathathri2019plug} also address whole document controllability by adding a linear classifier on top of {\textsc{Gpt-2}} to predict whether generated text observes a particular style or property. \citet{prabhumoye2020exploring} provides a good survey of five modules for control. Differing from these works, we control the generation through keywords backed by external knowledge. \section{Conclusion} In this paper, we proposed a novel framework that adds control to text generation with external knowledge. Our model first generates a set of keywords and a knowledge retriever then queries an external knowledge base for triples related to the keywords. Based on the relevance to the story context, a contextual knowledge ranker ranks the retrieved knowledge sentences and feeds the top ones to a conditional generator to generate the next story sentence. Experimental results on the {\textsc{ Roc}} story dataset showed that our model outperforms state-of-the-art models by generating less repetitive, more diverse and logically consistent stories. Human evaluation of the controllability of our model shows that 91.5\% of generated stories are successfully controlled by changing keywords to their antonym. In line with current trends, we also demonstrate that using larger pre-trained language models consistently improves both the quality of the generated stories and controllability.
{ "attr-fineweb-edu": 2.09375, "attr-cc_en_topic": 0, "domain": "arxiv" }
BkiUbFs4c3aisGX5ZOGV
\section{Introduction} Suppose that $k$ runners run laps on a unit-length circular track. They all start together from the same point and run in the same direction with pairwise different constant speeds $d_{1},d_{2},\ldots ,d_{k}$. At a given time $t$, a runner is said to be \emph{lonely} if no other runner is within a distance of $1/k$, both in front and rear. The Lonely Runner Conjecture states that for every runner there is a time at which he is lonely. For instance if $k=2$, one can imagine easily that at some time or other, the two runners will find themselves on antipodal points of the circle, both becoming lonely at that moment. To give a precise statement, let $\mathbb{T}=[0,1)$ denote the \emph{circle} (the one-dimensional torus). For a real number $x$, let $\{x\}$ be the fractional part of $x$ (the position of $x$ on the circle), and let $% \left\Vert x\right\Vert $ denote the distance of $x$ to the nearest integer (the circular distance from $\{x\}$ to zero). Notice that $\left\Vert x-y\right\Vert $ is just the length of the shortest circular arc determined by the points $\{x\}$ and $\{y\}$ on the circle. It is not difficult to see that the following statement is equivalent to the Lonely Runner Conjecture. \begin{conjecture} \label{Wills}For every integer $k\geqslant 1$ and for every set of positive integers $\{d_{1},d_{2},\ldots ,d_{k}\}$ there exists a real number $t$ such that \begin{equation*} \left\Vert td_{i}\right\Vert \geqslant \frac{1}{k+1} \end{equation*}% for all $i=1,2,\ldots ,k$. \end{conjecture} The above bound is sharp as is seen for the sets $\{1,2,\ldots ,k\}$. The paper of Goddyn and Wong \cite{GoddynWong} contains items of interesting exemplars of such extremal sets. The problem was posed for the first time by \mbox{Wills \cite{Wills}} in connection to Diophantine approximation. Cusick \cite{Cusick} raised the same question independently, as a view obstruction problem in Discrete Geometry (cf. \cite% {BrassMoserPach}). Together with Pomerance \cite{CusicPomerance}, he confirmed the validity of the conjecture for $k\leqslant 4$. Bienia et al. \cite% {Bienia} gave a simpler proof for $k=4$ and found interesting application to flows in graphs and matroids. Next the conjecture was proved for $k=5$ by Bohman et al. \cite{BohmanHolzmanKleitman}. A simpler proof for that case was provided by Renault \cite{Renault}. Recently the case $k=6$ was established by Barajas and Serra \cite{BarajasSerra}, using a new promising idea. Let $D=\{d_{1},d_{2},\ldots ,d_{k}\}$ be a set of $k$ positive integers. Consider the quantity \begin{equation*} \kappa (D)=\sup_{x\in \mathbb{T}}\min_{d_{i}\in D}\left\Vert xd_{i}\right\Vert \end{equation*}% and the related function $\kappa (k)=\inf \kappa (D)$, where the infimum is taken over all $k$-element sets of positive integers. So, the Lonely Runner Conjecture states that $\kappa (k)\geqslant \frac{1}{k+1}$. The trivial bound is $\kappa (k)\geqslant \frac{1}{2k}$, as the sets $\{x\in \mathbb{T}% :\left\Vert xd_{i}\right\Vert <\frac{1}{2k}\}$ simply cannot cover the whole circle (since each of them is a union of $d_{i}$ open arcs of length $\frac{1% }{kd_{i}}$ each). Surprisingly, nothing much better was proved so far. Currently the best general bound is \begin{equation*} \kappa (k)\geqslant \frac{1}{2k-1+\frac{1}{2k-3}} \end{equation*}% for every $k\geqslant 5$ \cite{Chen}. A slightly improved inequality $\kappa (k)\geqslant \frac{1}{2k-3}$ holds when $k\geqslant 4$ and $2k-3$ is prime \cite{ChenCusic}. Using the probabilistic argument we proved in \cite% {CzerwinGrytczuk} that every set $D$ contains an element $d$ such that \begin{equation*} \kappa (D\setminus \{d\})\geqslant \frac{1}{k}. \end{equation*}% In this paper we prove another general result supporting the Lonely Runner Conjecture. \begin{theorem} Let $k$ be a fixed positive integer and let $\varepsilon >0$ be fixed real number. Let $D\subseteq \{1,2,\ldots ,n\}$ be a $k$-element subset chosen uniformly at random. Then the probability that $\kappa (D)\geqslant \frac{1}{% 2}-\varepsilon $ tends to $1$ with $n\rightarrow \infty $. \end{theorem} The proof uses elementary Fourier analytic technique for subsets of $\mathbb{% Z}_{p}$. We give it in the next section. In the last section we point to a striking consequence of our result for colouring of integer distance graphs. \section{Proof of the main result} Let $k$ be a fixed positive integer and let $p\geqslant k$ be a prime number. For $a\in \mathbb{Z}_{p}$, let $\left\Vert a\right\Vert _{p}=\min \{a,p-a\}$ be the circular distance from $a$ to zero in $\mathbb{Z}_{p}$. We will need the following notion introduced by \cite{Ruzsa}. Let $L$ be a fixed positive integer. A set $D=\{d_{1},\cdots,d_{k}\}\subseteq \mathbb{Z}_{p}$ is called $L$\emph{-independent} in $\mathbb{Z}_{p}$ if equation \begin{equation*} d_{1}x_{1}+d_{2}x_{2}+\ldots +d_{k}x_{k}=0 \end{equation*}% has no solutions satisfying \begin{equation*} 0<\sum\limits_{i=1}^{k}\left\Vert x_{i}\right\Vert _{p}\leqslant L. \end{equation*}% We will show that for appropriately chosen $L$, any $L$-independent set can be pushed away arbitrarily far from zero. Then we will demonstrate that for such $L$, almost every set in $\mathbb{Z}_{p}$ is $L$-independent. Let $f:\mathbb{Z}_{p}\rightarrow \mathbb{C}$ be any function and let $\hat{f}% :\mathbb{Z}_{p}\rightarrow \mathbb{C}$ denote its Fourier transform, that is \begin{equation*} \hat{f}(r)=\sum_{x\in \mathbb{Z}_{p}}f(x)\omega ^{rx}, \end{equation*}% where $\omega =e^{\frac{2\pi }{p}i}$. For a set $A\subseteq \mathbb{Z}_{p}$, by $A(x)$ we denote its characteristic function. We will make use of the following basic properties of the Fourier transform: \begin{description} \item[(F1)] $\left\vert \hat{f}(r)\right\vert =\left\vert \hat{f}% (-r)\right\vert $ for every $r\in \mathbb{Z}_{p}$. \item[(F2)] $f(x)=\frac{1}{p}\sum_{r\in \mathbb{Z}_{p}}\hat{f}(r)\omega ^{-rx}$ for every $x\in \mathbb{Z}_{p}$. \item[(F3)] $\hat{A}(0)=\left\vert A\right\vert $ for every subset of $% \mathbb{Z}_{p}$. \end{description} In the lemma below we give a bound for the Fourier coefficient $\hat{A}(r)$ for the sets of the form \begin{equation} A=\left\{ s,s+1,\ldots ,l\right\} , \tag{*} \end{equation}% where $l$ and $s$ are elements of $\mathbb{Z}_{p}$, such that $s<l$. This bound does not depend on $l$ and $s$. The following lemma can be easily proved, as for instance in \cite{Book}(p. 39). We proved this for a reader convenience. \begin{lemma}\label{lem} \label{wsp Fouriera}If $0<r<\frac{p}{2}$, then \begin{equation*} \left\vert \hat{A}(r)\right\vert \leqslant \frac{p}{2r}. \end{equation*} \end{lemma} \begin{proof} By simple calculations we have \begin{align*} |\hat{A}(r)|=\Big|\sum_{x=s}^{l}\omega^{rx}\Big|=% \Big|\frac{\omega^{r(l+1)}-\omega^{rs}}{\omega^{r}-1}\Big|=&\\ \Big|\frac{\omega^{\frac{r(l+s+1)}{2}}}{\omega^{\frac{r}{2}}}\cdot \frac{\omega^{\frac{r(l+1-s)}{2}}-\omega^{\frac{-r(l+1-s)}{2}}} {\omega^{\frac{r}{2}}-\omega^{\frac{-r}{2}}}\Big|=& \Big|\frac{\sin(\frac{\pi r}{p})} {\sin(\frac{\pi r}{p})}\Big|. \end{align*}% Using inequality $\sin (x)\geqslant \frac{2x}{\pi }$ for $x<\frac{\pi }{2}$, we get \begin{equation*} \left\vert \hat{A}(r)\right\vert \leqslant \frac{p}{2r}.\qedhere \end{equation*} \end{proof} Now, we state and prove the aforementioned property of $L$-independent sets. \begin{theorem} \label{tw L-niezal}Let $0<\varepsilon <\frac{1}{2}$ be a fixed real number. Let $D$ be a $k$-element, $L$-independent set in $\mathbb{Z}_{p}$, where% \begin{equation*} L> \sqrt{\frac{k^33^{k-1}}{2^{k+1}\varepsilon ^{2k}}}. \end{equation*}% Then% \begin{equation*} \kappa (D)\geqslant 1/2-\varepsilon . \end{equation*} \end{theorem} \begin{proof} Let \[C=\{x\in\mathbb{Z}_p:(\frac{1}{4}-\frac{\varepsilon}{2})p<x<(\frac{1}{4}+\frac{\varepsilon}{2})p\} \] and let $C(x)$ be the characteristic function of the set $C$. Define convolution of two functions $f$ and $g$ by \[(f*h)(x)=\sum_{y\in\mathbb{Z}_p }f(y)\cdot g(x-y).\] Denote by $B(x)=(C*C)(x)$ convolution of function $C$ with itself. It is easy to see that $\hat{B}(r)=\hat{C}(r)\cdot \hat{C}(r)$ for all $r\in\mathbb{Z}_p$. So, if we find $t\in \mathbb{Z}_{p}$ such that $tD\subseteq \mathrm{supp} B$, where $\mathrm{supp} B=\{x\in\mathbb{Z}_p:B(x)\ne 0\}$, then at the same time we push the set $D$ away into the small arc $\left( \frac{1}{2}% -\varepsilon ,\frac{1}{2}+\varepsilon \right) $ on the torus $\mathbb{T}$. Then the expression \begin{equation*} I=\sum_{t\in \mathbb{Z}_{p}}B(td_{1})B(td_{2})\cdots B(td_{k}) \end{equation*}% counts those numbers $t$ which push the set $D$ away to a distance $\frac{1}{2% }-\varepsilon $ from zero. We will show that $I\neq 0$. From properties of the Fourier transform results that \begin{equation*} I=\sum_{t\in \mathbb{Z}_{p}}\left( \frac{1}{p}\sum_{r_{1}\in \mathbb{Z}_{p}}% \hat{B}(r_{1})\omega ^{-td_{1}r_{1}}\right) \cdots \left( \frac{1}{p}% \sum_{r_{k}\in \mathbb{Z}_{p}}\hat{B}(r_{k})\omega ^{-td_{k}r_{k}}\right) . \end{equation*}% Denoting $\overset{\rightarrow}{r}=(r_1,r_2,\cdots,r_{k})$, we get \begin{equation*} p^{k}I=\sum_{\overset{\rightarrow}{r}\in \mathbb{Z}_{p}^k}% \hat{B}(r_{1})\cdots \hat{B}(r_{k})\sum_{t\in \mathbb{Z}_{p}}\omega ^{-t(d_{1}r_{1}+\cdots+d_{k}t_{k})}. \end{equation*}% The expression $\sum_{t}\omega ^{-t(d_{1}r_{1}+\cdots+d_{k}t_{k})}$ is equal to $p$ when \begin{equation} d_{1}r_{1}+\cdots+d_{k}r_{k}\equiv 0 \pmod p, \tag{**} \end{equation} and is equal to zero in the contrary case. As a consequence we may write \begin{equation*} p^{k-1}I=\sum_{\overset{\rightarrow}{r}\in \mathbb{Z}_{p}^k}% \hat{B}(r_{1})\cdots \hat{B}(r_{k})R(\overset{\rightarrow}{r}), \end{equation*} where $R(\overset{\rightarrow}{r})=1$ for $r_{1},\ldots ,r_{k}$ satisfying the equation (**), and $R(\overset{\rightarrow}{r})=0$ in the opposite situation. Since $D$ is $L$-independent, the identity $R(\overset{\rightarrow}{r})=1$ holds only for those $r_{1},\ldots ,r_{k} $ satisfying condition $\sum_{i=1}^{k}\left\Vert r_{i}\right\Vert _{p}>L$, or $r_{1}=r_{2}=\ldots =r_{k}=0$. Hence, \begin{equation*} p^{k-1}I-|C|^{2k}=\sum_{\overset{\rightarrow}{r}\in \mathbb{Z}_{p}^{k},\sum \left\Vert r_{i}\right\Vert _{p}>L}\hat{B}(r_{1})\cdots \hat{B}% (\overset{\rightarrow}{r}), \end{equation*} as for $r_{i}=0$ the Fourier coefficient $\hat{B}(r_{i})$ is equal to square of the size of $C$. So, by showing that \begin{equation*} \left\vert C\right\vert ^{2k}>\sum_{\sum \left\Vert r_{i}\right\Vert _{p}>L}\left\vert \hat{B}(r_{1})\right\vert \cdots \left\vert \hat{B}% (r_{k})\right\vert R(\overset{\rightarrow}{r}), \end{equation*}% we will confirm that $I\neq 0$. The property of $L$-independence of the set $D$ implies that in any nontrivial solution of (**) there is some $r_{i}$ satisfying $\left\Vert r_{i}\right\Vert _{p}>\frac{L}{k}$. The estimates for those $r_{i}$ \begin{equation*} \left\vert \hat{B}(r_{i})\right\vert= \left\vert \hat{C}(r_{i})\right\vert^2 \leqslant \left (\frac{p}{2r_{i}}\right )^2 \leqslant \left (\frac{kp}{2L}\right )^2 \end{equation*}% results from Lemma \ref{wsp Fouriera}. Denote by $\overset{\rightarrow}{r_j}=(r_1,\cdots,r_{j-1},r_{j+1},\cdot,r_k)$, the vector $\overset{\rightarrow}{r}$ with $j^{th}$ coordinate missing . Substituting this to the previous sum we obtain \begin{align*} &\sum_{\sum \left\Vert r_{i}\right\Vert _{p}>L}\left\vert \hat{B}% (r_{1})\right\vert \cdots \left\vert \hat{B}(r_{k})\right\vert R(r_{1},\ldots ,r_{k}) \\ \leqslant&\Big ( \frac{kp}{2L}\Big )^2\sum\limits_{j=1}^{k}\sum_{\overset{\rightarrow}{r_j}\in \mathbb{Z}_{p}^{k-1}}\left\vert \hat{B}% (r_{1})\right\vert \cdots \left\vert \hat{B}(r_{j-1})\right\vert \left\vert \hat{B}(r_{j+1})\right\vert \ldots \left\vert \hat{B}(r_{k})\right\vert \\ \leqslant &k\Big ( \frac{kp}{2L}\Big )^2\sum_{\overset{\rightarrow}{r_k}\in \mathbb{Z}% _{p}^{k-1}}\left\vert \hat{B}(r_{1})\right\vert \cdots \left\vert \hat{B}% (r_{k-1})\right\vert. \end{align*} The last sum may be estimated further. Let $S_p=\{0,1,\cdots,\frac{p-1}{2}\}$ and we get \begin{align*} &\sum_{\overset{\rightarrow}{r_k}\in \mathbb{Z}_{p}^{k-1}}\left\vert \hat{B}% (r_{1})\right\vert \cdots \left\vert \hat{B}(r_{k-1})\right\vert \\ \leqslant &2^{k-1}\sum_{\overset{\rightarrow}{r_k}\in S_p^{k-1}}\left\vert \hat{B}(r_{1})\right\vert \cdots \left\vert \hat{% B}(r_{k-1})\right\vert. \end{align*}% Thus, applying Lemma \ref{wsp Fouriera} again we get% \begin{align*} &\sum_{\sum \left\Vert r_{i}\right\Vert _{p}>L}\left\vert \hat{B}% (r_{1})\right\vert \cdots \left\vert \hat{B}(r_{k})\right\vert R(\overset{\rightarrow}{r}) \\ \leqslant &k\big(\frac{kp}{2L}\Big)^2 \cdot 2^{k-1}\cdot \Big(\frac{p^{k-1}}{2^{k-1}}\Big)^2\cdot \Big (1+\sum_{r\in S_p}\frac{1}{r^2}\Big )^{k-1} \\ \leqslant &k\big(\frac{kp}{2L}\Big)^2 \cdot 2^{k-1}\cdot \Big(\frac{p^{k-1}}{2^{k-1}}\Big)^2\cdot \Big (1+\frac{\pi^2}{2}\Big )^{k-1} \end{align*}% since $1+\frac{\pi^2}{2}\leq3$, we obtain \begin{equation*} \sum_{\sum \left\Vert r_{i}\right\Vert {p}>L}\left\vert \hat{B}% (r_{1})\right\vert \cdots \left\vert \hat{B}(r_{k})\right\vert R(\overset{\rightarrow}{r}) \leqslant \frac{k^3p^{2k}3^{k-1}}{2^{k+1}L^2}. \end{equation*}% So, by the assumption on $L$ we obtain% \begin{equation*} \sum_{\sum \left\Vert r_{i}\right\Vert _{p}>L}\left\vert \hat{B}% (r_{1})\right\vert \cdots \left\vert \hat{B}(r_{k})\right\vert R(\overset{\rightarrow}{r})< (\varepsilon p )^{2k}\leqslant \left\vert C\right\vert ^{2k}. \end{equation*}% This completes the proof. \end{proof} \begin{proof}[Proof of Theorem 1] Let $L$ be a number satisfying inequalities% \begin{equation*} \sqrt{\frac{k^3 3^{k-1}}{2^{k+1}\varepsilon ^{2k}}}<L<\sqrt[k+1]{p}. \end{equation*}% Such numbers $L$ exist provided that $p$ is sufficiently large. By \mbox{Theorem \ref {tw L-niezal}}, $\kappa (D)\geqslant \frac{1}{2}-\varepsilon $ for every $L$% -indepedent set $D$. We show that the second inequality implies that almost every set in $\mathbb{Z}_{p}^{\ast }$ is $L$-independent. Indeed, the number of sets that are not $L$-independent is at most% \begin{equation*} (2L+1)^{k}\binom{{p-1}}{{k-1}}. \end{equation*}% So, the fraction of those sets in $\mathbb{Z}_{p}^{\ast }$ is equal% \begin{equation*} \frac{(2L+1)^{k}\binom{{p-1}}{{k-1}}}{\binom{{p-1}}{{k}}}=\frac{% (2L+1)^{k}k}{p-k}<\frac{(2\sqrt[k+1]{p}+1)^{k}k}{p-k}. \end{equation*}% The last expression tends to zero with $p$ tending to infinity. This completes the proof, as the ratios of two consecutive primes tend to one. \end{proof} \section{Integer distance graphs} We conclude the paper with a remark concerning \emph{integer distance graphs}% . For a given set $D$, consider a graph $G(D)$ whose vertices are positive integers, with two vertices $a$ and $b$ joined by an edge if and only if $% \left\vert a-b\right\vert \in D$. Let $\chi (D)$ denote the chromatic number of this graph. It is not hard to see that $\chi (D)\leqslant \left\vert D\right\vert +1$. To see a connection to parameter $\kappa (D)$, put $N=\left\lceil \kappa (D)^{-1}\right\rceil $ and split the circle into $N$ intervals $% I_{i}=[(i-1)/N,i/N)$, $i=1,2,\ldots ,N$ (cf. \cite{RuszaTuzaVoigt}). Let $t$ be a real number such that $min_{d\in D}\| dt\|=\kappa(D)$. Then define a colouring $c:\mathbb{N}\rightarrow \{1,2,\ldots ,N\}$ by $c(a)=i$ if and only if $\{ta\}\in I_{i}$. If $c(a)=c(b)$ then $\{ta\}$ and $\{tb\}$ are in the same interval $I_{i}$. Hence $\left\Vert ta-tb\right\Vert <1/N\leqslant \kappa (D)$, and therefore $\left\vert a-b\right\vert $ is not in $D$. This means that $c$ is a proper colouring of a graph $G(D)$. So, we have a relation% \begin{equation*} \chi (D)\leqslant \left\lceil \frac{1}{\kappa (D)}\right\rceil . \end{equation*}% Now, by Theorem 1 we get that $\chi (D)\leqslant 3$ for almost every graph $% G(D)$. A different proof of a stronger version of this result has been recently found by Alon \cite{alon}. He also extended the theorem for arbitrary Abelian groups, and posed many intriguing questions for general groups. \begin{acknowledgement} I would like to thank Tomasz Schoen for an inspiring idea of using independent sets, and to Jarek Grytczuk for stimulating discussions and help in preparation of the manuscript. I thank the anonymous referees for valuable suggestions concerning the merit of the paper. I also acknowledge a support from Polish Ministry of Science and Higher Education (MNiSW) (N N201 271335). \end{acknowledgement}
{ "attr-fineweb-edu": 2.132812, "attr-cc_en_topic": 0, "domain": "arxiv" }
BkiUdA85i7PA9O6rS2yY
\section{Introduction} Airbnb, the largest digital peer-to-peer distribution platform for accommodations, operates in about 81,000 cities in 191 countries, offering 4.5 million rooms, apartments, houses, and other types of accommodation~\cite{AirbnbInc2018}. The widespread adoption of Airbnb led to changes in urban life and in particular urban tourism. Visitors of urban destinations increasingly leave inner-city areas that are close to major sights and tourism-related facilities, and venture into residential neighborhoods---a phenomenon called \emph{off the beaten track} or \emph{new urban tourism}~\cite{PappaleporeMaitlandOthers2010, PappaleporeMaitlandOthers2014, FullerMichel2014, StorsKagermeier2017, MaitlandNewman2009}. Short-term rentals seem to foster this development~\cite{FullerMichel2014, FreytagBauder2018, IoannidesRoeslmaierOthers2018}. They enable all kinds of temporary city-users~\cite{Martinotti1999}, such as cultural tourists, exchange students, temporary migrants, or business travelers, to stay in private apartments. However, only a fraction of urban residents participates in the practice of renting out rooms or apartments. Some of them are annoyed by the large influx of visitors to their neighborhood and the consequences for the urban structure resulting from large-scale short term rentals~\cite{Gant2016, ColombNovy2016, NofreGiordanoOthers2017}. The impact of the fast-growing number of Airbnb listings, particularly in residential neighborhoods, has been controversially discussed in many cities around the world. Central topics of this debate were, among others, Airbnb's contribution to the transformation of residential neighborhoods and resulting gentrification processes~\cite{GravariBarbasGuinand2018, Gant2016}, coming along with rent increase~\cite{Mermet2018, SchaferHirsch2017}, changes in the social structure of the neighborhood~\cite{Gant2016, SansQuaglierei2016}, and effects on the hotel industry~\cite{ZervasProserpioOthers2017}. Legal issues, such as the prohibition of Airbnb in some cities, or restrictions of its offer in others, were likewise central points of discussion~\cite{DredgeGyimothy2017, GuttentagSmithOthers2017, QuattroneProserpioOthers2016}. In order to fully understand the Airbnb phenomenon and its consequences for the city, it is important to consider both the tourist and the host perspective. Tourists and their spatial practices have already been considered in academic research from urban geography and the CSCW research community. While Pappalepore, Maitland, and Smith were primarily concerned with visitors' motivation for leaving central tourist areas~\cite{PappaleporeMaitlandOthers2010, PappaleporeMaitlandOthers2014}, Brown and Perry looked into tourists' usage of maps~\cite{BrownPerry2001}. Researchers also discussed the potential of electronic tourist guides~\cite{CheverstDaviesOthers2000, KenterisGavalasOthers2009} and in particular mobile tourism recommendation systems~\cite{GavalasKenteris2011}. Studies on mobility practices provided insights into visitors' spatio-temporal flows throughout the city and highlighted the significance of mobile technologies such as GPS trackers as new tools for engaging in this research field~\cite{BirenboimShoval2016, GrinbergerShovalOthers2014, ShovalAhas2016}. Digital information technologies proved helpful in identifying tourists' mobility practices even outside central tourist areas~\cite{Bauder2015, FreytagBauder2018}. Local communities, in contrast, have long been neglected in urban tourism research. They were mainly described as suffering from urban transformation processes, rent increase, and gentrification~\cite{Novy2011, FullerMichel2014}. However, some researchers pointed to the significance of local people in the co-construction of urban tourism space and atmospheres~\cite{PappaleporeMaitlandOthers2014, RussoRichards2016}. Airbnb hosts are increasingly gaining attention in this context. Studies exist that look into hosts' motivation to participate in online hospitality networks~\cite{StorsKagermeier2017, LampinenCheshire2016}. Such research approaches deal with drivers for renting out~\cite{LampinenCheshire2016, StorsKagermeier2017, Ke2017} and analyze the importance of monetary transactions in hospitality exchange networks~\cite{IkkalaLampinen2015}. Other studies were concerned with hosts' trustworthiness~\cite{ErtFleischerOthers2016, MaHancockOthers2017}. What has so far rarely been taken into account is hosts' potential to contribute to the discursive and performative reframing of residential neighborhoods into urban tourism areas, which is the focus of our work. Generally, user-generated content on information sharing platforms such as TripAdvisor has been identified as an important source of information for tourists~\cite{XiangGretzel2010}. Regarding the implications that online reviews provide for urban space, Zukin et al. found that Yelp reviews produce positive or negative space images and thus may contribute to economic (dis)investment in urban areas~\cite{ZukinLindemanOthers2015}. Similarly, Corbett outlined how space images are affected by descriptions on an online real estate information platform~\cite{Corbett2017}. With regard to the CSCW community, he argues that the concept of place has been widely adapted to inform location-based technologies, whereas implications of a \emph{``discursive place shaping''} on online platforms have not been considered yet. With our research, we try to fill this gap. The overarching goal of our research is to investigate: \begin{quote} \emph{How are online platforms engaging in the co-production of (new) urban tourism space?} \end{quote} In particular, we are interested in how the peer-to-peer accommodation platform Airbnb contributes to shaping tourist places in the city. In Section~\ref{sec:theoretical-background}, we embed this research direction in varying conceptualizations of space and place as applied in the CSCW and tourism geography research communities, and discuss the common ground between their research perspectives. We identify that container-like understandings of urban tourism space~\cite{Framke2002}, as represented in the \emph{tourist-historic city model}~\cite{AshworthHaan1985, AshworthTunbridge1990, AshworthTunbridge2000}, resemble the traditional framing of space as a \emph{``natural fact''}~\cite{HarrisonDourish1996} in the CSCW community. Both approaches, however, are lacking satisfying explanations of how tourism space can emerge and develop in residential neighborhoods. The \emph{tourist-historic city model} solely takes into account tourism-related facilities and infrastructure as defining elements of tourism space. Such structures are often non-existing in new urban tourism areas like \emph{Kreuzkölln}, one of our case-study neighborhoods. Still, this neighborhood without any major sights is becoming a tourism hotspot~\cite{FullerMichel2014, ColombNovy2016}. Against this background, we follow a constructionist understanding of urban tourism space~\cite{Young1999, Framke2002, Iwashita2003}. We argue that tourism space, like tourist sights~\cite{MacCannell1976}, is socially constructed through \emph{representations}~\cite{PritchardMorgan2000, Saarinen2004} and \emph{performances}~\cite{Edensor1998, Edensor2000, Edensor2001, BrenholdtHaldrupOthers2004, Larsen2008}. That means we no longer regard tourism facilities and infrastructure as the central elements defining tourism space. Instead, people are the major agents who transform places and landscapes into tourist destinations. They attach meanings and values to places and objects~\cite{Davis2001, Saarinen2004}, produce written, oral, or pictorial \emph{representations} of them, and thus contribute to the discourse of how places or objects are to be perceived. Finally, following the performative turn in tourism studies~\cite{Larsen2008, Larsen2012}, we argue that places need to be enacted through \emph{``bodily performances''}~\cite{Larsen2012, BrenholdtHaldrupOthers2004}. Practices, such as picture taking or collectively \emph{``gazing''}~\cite{Urry1990} upon a building, are necessary to enact places in a touristic manner~\cite{Larsen2014}. Taking these theoretical considerations into account, we analyze how two different Berlin neighborhoods, \emph{Kreuzkölln} and \emph{City West}, are socially constructed in Airbnb listings. The following three questions guided our research: \begin{itemize}[labelindent=\parindent, labelwidth=\widthof{\textbf{RQ1:}}, label=\textbf{RQ1:}, leftmargin=*, align=parleft, parsep=0pt, partopsep=0pt, topsep=1ex, noitemsep] \item[\textbf{RQ1:}] How are the two neighborhoods \emph{Kreuzkölln} and \emph{City West} constructed as tourism spaces in Airbnb listings? \item[\textbf{RQ2:}] How does the space construction differ between these two neighborhoods? \item[\textbf{RQ3:}] How do the neighborhood descriptions differ between Airbnb hosts and the destination management and marketing organization (DMO)? \end{itemize} We collected the listing descriptions from all Airbnb listings located in our research areas, resulting in a total number of 960 descriptions (see Section~\ref{sec:data-collection}). Afterwards, we randomly selected 100 listings that we qualitatively analyzed, applying grounded theory coding techniques~\cite{Charmaz2014, Saldana2015} (see Section~\ref{sec:data-analysis}). Our goal was to identify which elements of the neighborhoods are mentioned in the listings and how they are described as being touristically significant. Moreover, we analyzed which practices are encouraged in the listings (see Sections~\ref{sec:places-facilities},~\ref{sec:streets-squares},~\ref{sec:sights-parks-markets}) and investigated how places named by Airbnb hosts and \emph{visitBerlin}, the city's destination management and marketing organization (DMO), differ (see Sections~\ref{sec:quantitative-analysis}~and~\ref{sec:comparison-dmo}). This paper provides a twofold contribution: From a theoretical perspective, a container-like notion of physical tourism space is overcome by understanding space as being socially constructed. This theoretical reframing of space is necessary to be able to explain how residential neighborhoods that are lacking any sights can become tourist places. What is particularly new is the empirical focus on the digital and collaborative construction of tourism space in Airbnb listings. Traditionally, the DMO produced promotional space images and steered visitors' attention and practices. After the proliferation of digital technologies and in particular peer-to-peer platforms, Airbnb hosts are now able to participate in the discursive framing of their neighborhood---they gained the ability to (re)interpret space and endow neighborhoods with a touristic meaning. \section{Theoretical Background} \label{sec:theoretical-background} Understanding \emph{space} and \emph{place} on a conceptual level is a central goal of geography. Much empirical work, however, deals with natural or cultural phenomena happening in distinct \emph{places}. Considering the nature of \emph{space} just became popular again after the \emph{``spatial turn''}~\cite{Soja1989} in social sciences from the late 1960s onwards~\cite{Soja2009}. This development induced debates on the nature of space and place in various disciplines, including the CSCW research community~\cite{HarrisonDourish1996, FitzpatrickKaplanOthers1996, Dourish2006}. In 1996, Harrison and Dourish~\cite{HarrisonDourish1996} and in 2006 Dourish~\cite{Dourish2006} broadly discussed differences and similarities between the concepts of space and place and resulting implications for CSCW researchers. An important outcome of their considerations, to which we relate our research, is the assumption that digital technologies, such as online sharing platforms, influence the way people encounter and appropriate urban space. We distinguish traditional understandings of (tourism) space as a defined geographical area from notions of (tourism) space as being socially constructed. Traditional approaches are not satisfying anymore in order to understand current urban tourism phenomena. Therefore, we motivate the constructionist approach we followed in our case study of two Berlin neighborhoods. We argue that digital representations of space, as produced in Airbnb listing descriptions, have implications on the way tourists get to know about, encounter, and appropriate such areas. Finally, we link these theoretical considerations to our research design and motivate our argument that urban tourism space is also digitally constructed, for example in the Airbnb listings that we chose to analyze empirically. \subsection{Traditional Understandings of Urban Tourism Space} Traditionally, tourism space, in the sense of a tourist destination, has been defined as a geographical area~\cite{BurkartMedlik1976, DavidsonMaitland1997} that contains agglomerations of tourism-related attractions, facilities, and services~\cite{Pearce2013, SaraniemiKylanen2010}. Accordingly, the destination is a stable and closed spatial unit, valorized for tourism purposes and filled up with tourism infrastructure. It is the distinct physical space in which tourism, tourism development, politics, and planning happen. In such traditional conceptualizations of tourism space~\cite{Framke2002}, the destination is understood as a static and given entity. Natural and cultural resources, as well as tourism infrastructure, are its defining elements---tourism only happens within it. These definitions entail a container-like notion of space. As a result, the tourist destination is reduced to a confined space to which people are traveling for leisure reasons~\cite{Leiper1995}. Urban tourism space has been framed similarly. First illustrated in the \emph{tourist-historic city model}, designed by Ashworth and de Hahn in 1985, spatial clusters of historical sights, leisure facilities, food or accommodation infrastructure, were designated as tourism space~\cite{AshworthHaan1985}. Like many other urban ecological or land use models that were commonly used in urban geography~\cite{Pacione2009}, the \emph{tourist-historic city model} divided the city into various functional regions---the tourism region being one of them. More specifically, a central section of the city, mostly the area intersecting between the historical center with its traditional buildings and the central business district (CBD) with all the shops, hotels, and restaurants, was considered to be tourism space~\cite{AshworthTunbridge1990, AshworthTunbridge2000}. Such urban ecological models reflect scholars' intention to identify and separate the functional regions of a city, based on their prevalent built environment, that is the buildings, infrastructure, and other physical objects. However, those models rarely capture dynamic aspects such as today's large flows of people, capital, and information~\cite{ShellerUrry2006}---they are static, but tourists' behavior is not. The action space of tourists in the city has long been equated with a particular central urban region dedicated to them. As a result of this spatial differentiation between tourism and non-tourism space, regions of the city lacking corresponding infrastructure cannot be touristically significant, following the inherent logic of the \emph{tourist-historic city model}~\cite{AshworthTunbridge1990, AshworthTunbridge2000}. A spatial concentration of tourism in the city center was likely to be true at the time the \emph{tourist-historic city model} was designed. Until well into the 1970s, tourism was not even regarded as a separate industry or function of the city~\cite{BurtenshawAshworthOthers1991}. Cities were places for the serious tasks of work and government---tourism in contrast, was a function of the periphery~\cite{Christaller1964}. People travelled from the city to the countryside for leisure reasons. Consequently, researchers and urban governments did not perceive tourism as a field for intervention~\cite{Ashworth2003, AshworthTunbridge1990}. They considered tourists to be invisible in all but a few selected districts of a few remarkable cities at specific times. Their economic significance was peripheral~\cite{Ashworth2003} and tourism could hardly be isolated as a separate industry~\cite{AshworthTunbridge2000}. Only 50 years later, the perception of tourism has changed dramatically. Urban tourism is booming in Western Europe since the 1990s~\cite{EurostatStatisticsExplained2016}. This boom is both supply- and demand-side driven~\cite{Law2000, FreytagPopp2009}. On the supply side, governmental urban regeneration schemes were carried out throughout Europe and North America, initiating the re-attractiveness of inner city areas. Governmental promotion of tourism as a regeneration strategy started in the 1980s~\cite{Law1992}, when many cities suffered from de-industrialization, rising unemployment rates, and derelict manufacturing sites. Urban tourism and the erection of respective infrastructure such as entertainment facilities~\cite{JuddFainstein1999, Spirou2010} were intended to initiate a counter-development in order to restructure the local economy, generating jobs and income~\cite{Law1992}. At the same time, increasing affluence, more leisure time, and the ubiquitous proliferation of the car and the rise of the airplane as a means of transportation~\cite{Law1992} enabled people to travel not only to the countryside but also to urban destinations~\cite{FreytagPopp2009}. After the fall of the Berlin Wall, Germany's capital, which we focus on in our case study, became fully accessible for national and international tourists and visitors---their numbers exploded from 2003 onwards~\cite{AmtfurStatistikBerlinBrandenburg2017g}. Today's so-called \emph{``overtourism''}~\cite{Popp2012, PostmaSchmuecker2017} is becoming an emerging issue, not only in traditional tourist cities like Rome and Venice, but also in upcoming urban tourism destinations like Barcelona and Berlin. As a result, residents are protesting against large tourist masses, flooding into their neighborhoods and \emph{``touristifying''} residential areas~\cite{ColombNovy2016, GravariBarbasGuinand2018, Gant2016}. \subsection{Constructing New Urban Tourism Space} Tourism in general and tourist behavior in particular have changed. Tourists are no longer staying within the confines of the central tourist zone with major sights in walking distance and gastronomic infrastructure just around the corner. Instead, many of them are now venturing into residential neighborhoods. Urban tourism scholars discuss this phenomenon as \emph{off the beaten track} or \emph{new urban tourism}~\cite{MaitlandNewman2009, PappaleporeMaitlandOthers2010, PappaleporeMaitlandOthers2014, FullerMichel2014, StorsKagermeier2017}. Reasons for those changing visitor practices are diverse. One important factor is the rise of the mobile internet and the proliferation of information sharing platforms of the social web, such as Instagram, TripAdvisor, and the like. These new information technologies are nowadays shaping the ways we encounter urban space~\cite{Dourish2006}. They enable us to navigate through an unknown city~\cite{BrownPerry2001, Vertesi2008}, recommend us the best restaurant nearby~\cite{HicksCompOthers2012}, and let us stay in an Airbnb accommodation at our favorite location. Thus, mobile technologies are no longer \emph{``simply operating within a spatial environment''}~\cite{BrewerDourish2008}. Instead, they contribute to the \emph{``production of spatiality and spatial experiences''}~\cite{BrewerDourish2008}. Their location-based service influence the way we move through the city and their personalized recommendation systems decide what we should see and where we should eat or sleep. Consequently, mobile technologies direct our perception of a city and sometimes even suggest adequate place performances. Before the widespread usage of mobile technologies and online information sharing platforms, the travel guide and the city's tourist information offices were the most influential information sources for visitors---and for many tourists they still are. Travel guides contain descriptions of people and landscapes and pre-interpret them for visitors. They influence peoples' decision-making process of what should be seen and thus affect visitors' perception of the city~\cite{Edensor1998}. Urban management and marketing organizations play a similar role. They intend to attract and steer tourists by marking and marketing sights and places. Iconic buildings are framed and promoted as must sees in live. The Eiffel tower, for example, is signified as a symbol for love, it is endowed with this particular meaning. Gazing upon the sight thus turns into a romantic experience for many. This example illustrates that iconic buildings, like places, have no \emph{``intrinsic attraction power''}~\cite{Gunn1972, Leiper1990}. Instead, they are formed and fashioned by human beings~\cite{Iwashita2003}. They are marked and signified as iconic architecture worth visiting~\cite{MacCannell1976, Saarinen2004, Edensor1998}, meanings and values are ascribed to them~\cite{Squire1994}, and they are enacted through social practice~\cite{Edensor2000, Edensor2001, Larsen2008, Edensor1998, BrenholdtHaldrupOthers2004}. In many cities around the world, the DMOs are responsible for representing their city. In a process of \emph{``place branding'}'~\cite{MoilanenRainisto2009}, they develop a strategy to promote the urban destination, utilizing tools such as \emph{``policy making, planning, advertisement campaigns, exhibitions, publicity, and the like''}~\cite{ChenChen2016, Geary2013}. As a result, urban representatives create images about cities and places purposefully and strategically in order to succeed in a fierce competition for visitors. Such space images, manifested in flyers, maps, pictures---both materially and digitally---are fragments of the whole discursive framework co-constructing the city ~\cite{Saarinen2004}. As several researchers have pointed out, spatial images contributing to a discourse are never neutral. (Spatial) discourses are socially \emph{``produced coherent meaning systems and practices, which both manifest and are power structures at the same time''}~\cite{Saarinen2004}. Power geometries~\cite{Dourish2006} and ideologies are inherent in spatial discourses~\cite{Lefebvre1991}. Consequently, the destination image produced by a city's DMO is a manifestation of their power. They have the ability to legitimize one space image over the others~\cite{Davis2005}. Due to the rise of mobile technologies and several peer-to-peer (information) sharing platforms, such as Yelp, TripAdvisor, and Airbnb, we argue that the above-mentioned power structures are shifting. Following Dourish's point of view~\cite{Dourish2006}, we establish that such technologies influence the way people move through the city and encounter urban space. Moreover, information on online platforms is no longer solely provided by the city's DMO. Due to social web applications, many people can now produce content online, for example to market their local business or their Airbnb apartment. In doing so, they produce their own spatial representations of the city or neighborhood and thus participate in the spatial discourse. Digital technologies in general and Airbnb listings in particular empower local people to participate in the digital co-production of urban (tourism) space. For urban visitors, mobile technologies open up spaces they would hardly have encountered before. Classic tourist maps, for example, only represent a fraction of the city and visitors tend to stay within this confined area. Digital peer-to-peer platforms, in contrast, also provide information on sites or places outside the central tourist district(s). Thus, visitors get to know about rather unexplored parts of the city and are motivated to leave the central tourist area. Mobile technologies enable visitors to access on-site information about sights and places and help them to navigate their way through the city. Moreover, a rising number of tourists are searching for authentic experiences \emph{off the beaten track} and want to immerse themselves in the local, everyday life~\cite{PappaleporeMaitlandOthers2014}. This perspective challenges traditional conceptualizations of tourism as an escape from work and home~\cite{Urry1990}. Traditional understandings of urban tourism, as represented in the \emph{tourist-historic city model}, associate residential neighborhoods with mundane activities. Tourism, in contrast, has long been regarded as an extraordinary practice that happens at distinct times and places, separating people \emph{``off from everyday experiences''}~\cite{Urry1990}. From this point of view, tourism is unlikely to happen in mundane residential areas. Actual visitor practices challenge such traditional binary differentiations between work-leisure, tourist-resident, and home-away~\cite{CohenCohen2017}. Those conceptual boundaries become increasingly blurred when visitors venture into residential neighborhoods, stay in private Airbnb apartments, and behave like locals. \emph{Off the beaten track} tourists intend to differentiate themselves spatially and ideationally from mass tourism, which is still located in the central areas of the city~\cite{Freytag2010, McCabe2005}. Against this background, the \emph{tourist-historic city model} is losing its explanatory power. Tourism is no longer restricted to certain areas, but can take place all over the city. Most research on \emph{off the beaten track} and \emph{new urban tourism} has focused on the tourist perspective and their desires to leave the central tourist zones. What is still lacking in scholarly research is the spatial perspective and attempts to explain how residential neighborhoods become touristically significant. The intention of our research is to tackle this gap and, in particular, to analyze how residential neighborhoods are transformed into tourist places. To this end, we follow a constructionist understanding of urban tourism space. That means we no longer regard tourism facilities and infrastructure as the central elements defining tourism space. Instead, we assume that tourism space is socially constructed through representations and performances, as illustrated in Figure~\ref{fig:overview}. For us, people are the major agents who transform places and landscapes into tourism destinations. They have the ability to transform space both physically and materially as well as perceptually and symbolically~\cite{Iwashita2003}. This production of space, however, does not only happen in the physical realm. Following some initial research approaches in geography~\cite{ZukinLindemanOthers2015} and CSCW~\cite{Corbett2017}, we analyze representations of space that are produced digitally and collaboratively. Contrary to most research looking into space images produced by the DMO~\cite{ChenChen2016, PritchardMorgan2000}, we take spatial representations into account that are produced by local residents in their Airbnb listing descriptions. We argue that through the means of digital technologies, local people are now empowered to participate in the discourse producing and reproducing urban space. \begin{figure*} \centering \includegraphics[width=0.8\columnwidth, trim=0.0in 0.0in 0.0in 0.0in]{figures/overview} \caption{Overview of our conceptual approach: Airbnb hosts perceive their city and neighborhood and capture this perception in their listing descriptions. Guests read those descriptions, which contribute to their image of the neighborhood they are going to visit. This, again, influences guests' practices during their visit. As we focus on how Airbnb listing descriptions may influence guests' practice, we don't show other relationships such as guests' perception of neighborhood and city and hosts' practice in that environment.} \label{fig:overview} \end{figure*} \subsection{From Theoretical Concepts to Empirical Research} The city depiction in Figure~\ref{fig:overview} represents the city in its multitude of dimensions. It stands for buildings and infrastructure, but likewise for people, for flows of information and capital, for images and feelings associated with it, for its various representations in pictures and discourses, and for everyday practices enacting it. The Airbnb hosts in our case study can never grasp the full urban system, they only perceive a part of reality, that is their imagination of the city, existing in their minds. This imagination is never fixed, it is subject to numerous external and internal factors, and can easily change. However, when hosts decide to rent out a room or an apartment on Airbnb, the platform's listing structure encourages them to write down information about the neighborhood wherein the apartment is located. In the process of writing, hosts reproduce their space image~\cite{BalogluMcCleary1999, KockJosiassenOthers2016}. Here again, the listing description is not a full duplicate or total representation of everything hosts know and feel about the city and the neighborhood---it is a selection of aspects that they find interesting or important to know. At the same time, the listing description is written for a certain audience. Since hosts intend to rent out their room or apartment, the text has a promotional character. Hosts deliberately produce a particular space image~\cite{ChenChen2016, Young1999}. The sum of neighborhood descriptions posted on Airbnb finally results in a collaboratively produced digital representation of (imagined) urban space. It thus contributes to the discursive framework of the respective area. The proliferation of (information) sharing platforms, such as Instagram, TripAdvisor, and Airbnb, enables people to signify places or buildings to be worth visiting. For example, remote rocks in Norway are being reproduced thousands of times on Instagram and thus become a tourist attraction~\cite{Meier2018}. These platforms empower people to reinterpret places, to ascribe new meanings to them, and hence to transform them into places of significance for visitors. Finally, in producing such online or offline, written or filmed, pictorial or oral representations of space, every participant contributes to the way space is constructed and perceived~\cite{Saarinen2004}. After posting the neighborhood description on Airbnb, these digital textual representations of urban space are read and processed by potential guests, affecting their space image. However, Airbnb listings are not the only source of information that guests use. For the most part, they already have space images in their minds, which are based on previous visits, newspaper articles, tourist guidebooks, tips from friends and relatives, or posts on Instagram or TripAdvisor. All these sources of information influence, to a certain extent, what guests are going to do during their stay~\cite{Larsen2012}. In our research, we decided to focus on Airbnb listing descriptions of the neighborhood, because they have a high potential to steer visitors' attention, in particular to elements that would otherwise remain unrecognized. Against the background of \emph{off the beaten track} and \emph{new urban tourism}, scholars have already illustrated that visitors want to behave like a local~\cite{PappaleporeMaitlandOthers2010, PappaleporeMaitlandOthers2014}. In order to do so, they rely on insider tips that hosts can provide. Staying in a private Airbnb room or apartment enables hosts and guests to exchange expectations and experiences. Guests receive local information, for example about the best restaurant or coolest club, and get to know local routines and practices. Instead of discussing spots for picture taking and gazing~\cite{Urry1990} upon sights, hosts rather provide information on the closest route for a morning jog. Such information allows guests to take part in the local everyday life~\cite{Larsen2008, Larsen2012}. As a result, listing descriptions on Airbnb do not only produce space images and influence the way in which space is perceived, but also impact visitors' behavior in place. This is because Airbnb hosts encourage guests to visit certain places and motivate related practices (see Section~\ref{sec:results}). In our case study, we empirically analyze how hosts describe and thus co-produce the image of their neighborhoods. We have theoretically motivated the relation of those digital descriptions to the physical world, that is the neighborhood and the city, and peoples' performances in that environment. We consider it to be an important direction for future work to also analyze Airbnb from a user perspective. On the one hand, one could evaluate how hosts write and revise their neighborhood descriptions. On the other hand, it would be interesting to see how exactly visitors read those descriptions and how they influence their practices. \section{Research Design} \begin{figure*} \centering \includegraphics[width=1\columnwidth, trim=0.0in 0.0in 0.0in 0.0in]{figures/erhebungsraum_annotated} \caption{Analyzed areas (yellow) and identified Airbnb listings (green, $n=965$); map data from Amt für Statistik Berlin-Brandenburg~\cite{AmtfurStatistikBerlinBrandenburg2016, AmtfurStatistikBerlinBrandenburg2017b, AmtfurStatistikBerlinBrandenburg2017c, SenatsverwaltungfurStadtentwicklungundWohnenBerlin2018}, Airbnb listings retrieved 2017-03-22 (see Section~\ref{sec:data-collection}), images from Wikipedia.} \label{fig:erhebungsraum} \end{figure*} As motivated above, our goal was to analyze and compare how the two neighborhoods \emph{Kreuzkölln} and \emph{City West} are constructed as tourism spaces in Airbnb listing descriptions (RQ1 and RQ2). For our analysis, we deliberately chose two very different neighborhoods. The neighborhood we denote \emph{City West} has a long tradition of being touristically significant. It contains internationally known iconic sights~\cite{visitBerlin2018f, visitBerlin2018g} that are visited by a large number of people every year. From January until July 2017, the borough \emph{Charlottenburg-Wilmersdorf} had about 1.5 million overnight guests~\cite{AmtfurStatistikBerlinBrandenburg2017f}. Against this background, we consider \emph{City West} to be a \emph{traditional urban tourism hotspot}. The neighborhood we denote as \emph{Kreuzkölln}, in contrast, had a rather difficult past. It has long been associated with poverty, crime, and drugs~\cite{AmtfurStatistikBerlinBrandenburg2017d, AbgeordnetenhausBerlin2017}, although its image is changing now~\cite{DeutschePresseAgentur2018}. The neighborhood itself is lacking major sights, but the borough provides some facilities that are being promoted by Berlin's DMO~\cite{visitBerlin2018b}. Only about 219,000 officially registered overnight guests stayed in the borough of \emph{Neukölln} between January and July 2017~\cite{AmtfurStatistikBerlinBrandenburg2017f}. Despite these facts, \emph{Kreuzkölln} increasingly attracts tourists~\cite{SchultePeevers2017, FullerMichel2014}, which is why we consider the neighborhood to be a \emph{new urban tourism hotspot}. Our \emph{units of observation} are the free text fields of the Airbnb listings located in those neighborhoods. This includes the listings' title, description, and house rules (see Figure~\ref{fig:airbnb-listing}). Our \emph{units of analysis} are the hosts' descriptions of the city and the neighborhoods (outside)---the descriptions of the apartments (inside) are not in the focus of our research. To delineate the two neighborhoods, we rely on the so-called \emph{Lebensweltlich Orientierte Räume} (LOR). These urban planning areas have been introduced by the Berlin administration for urban development and housing in the year 2006. Instead of focusing solely on infrastructure, the LORs also consider other factors such as social milieus and population density~\cite{SenatsverwaltungfurStadtentwicklungundWohnen2009}. The neighborhood that we denote \emph{Kreuzkölln} corresponds to the LOR \emph{Reuterkiez} with 27,792 inhabitants (as of December 2016~\cite{AmtfurStatistikBerlinBrandenburg2017}). As the density of Airbnb listings is smaller in the \emph{City West} neighborhood, we decided to include several LORs around the major West Berlin streets \emph{Kantstraße} and \emph{Kurfürstendamm}. The LORs we selected are named \emph{Karl-August-Platz}, \emph{Savignyplatz}, \emph{Hindemithplatz}, \emph{Georg-Grosz-Platz}, and \emph{Breitscheidplatz}. In total, 37,225 inhabitants live in those LORs (as of December 2016~\cite{AmtfurStatistikBerlinBrandenburg2017}). Figure~\ref{fig:erhebungsraum} visualizes the location of the two areas. We provide the retrieved host and listing data, the R scripts used for our quantitative analysis, and our final coding schema as supplementary material~\cite{StorsBaltes2018}. The last step of our research was to compare hosts' neighborhood descriptions with the descriptions of \emph{visitBerlin}, the city's DMO (RQ3). To this end, we extracted all places that \emph{visitBerlin} mentions in the corresponding borough descriptions on their website~\cite{visitBerlin2018h, visitBerlin2018i}. Then, we compared those places with the most frequently named places in the Airbnb listings. Since we identified a disparity between what Airbnb hosts and the DMO consider important places, we further analyzed this relationship. We assessed how frequently the places named in the DMO's borough descriptions are mentioned in the Airbnb listing descriptions we retrieved for the two neighborhoods (see Sections~\ref{sec:data-collection} and \ref{sec:quantitative-analysis}). \subsection{Data Collection} \label{sec:data-collection} To retrieve the Airbnb listings in the two neighborhoods, we utilized Tom Slee's \emph{airbnb-data-collection} tool\footnote{\url{https://github.com/tomslee/airbnb-data-collection}}. On March 22, 2017, we collected all listings within two bounding boxes encompassing the \emph{Kreuzkölln} and \emph{City West} areas. Afterwards, we utilized the LOR shapefiles provided by the city of Berlin~\cite{AmtfurStatistikBerlinBrandenburg2016} to filter out listings that are not located within the selected LORs. Please note that Airbnb slightly randomizes the location of listings. Consequently, some listings at the border of the neighborhoods may actually be located in the surrounding LORs (or vice versa). As the neighboring LORs are very similar to the analyzed ones in terms of their social structure and urban fabric~\cite{SenatsverwaltungfurGesundheitPflegeundGleichstellungBerlin2014, Levy1999}, this should not pose a threat to the validity of our results. In the end, we were able to identify 753 listings in the \emph{Kreuzkölln} area and 212 listings in the \emph{City West} area. This means that there were five times more listings per inhabitant in the new urban tourism hotspot \emph{Kreuzkölln} (0.03 listings per inhabitant) compared to the traditional urban tourism hotspot \emph{City West} (0.006 listings per inhabitant). In other words, statistically, 1 out of 33 inhabitants in \emph{Kreuzkölln} rents out a room or an apartment on Airbnb---opposed to 1 out of 167 in \emph{City West}. To be able to quantitatively describe the retrieved listings and to analyze their descriptions, we needed to retrieve additional information, which was not possible with the above-mentioned tool. Thus, we developed our own tool, which utilizes Airbnb's unofficial API ~\cite{Baltes2017c}. Using this tool, we were able to successfully retrieve data for 750 listings in the \emph{Kreuzkölln} neighborhood and for 210 listings in the \emph{City West} neighborhood. The retrieval failed for five listings. In the following, we briefly describe the hosts and listings in the neighborhoods, before we continue with our research design. The daily price of the analyzed listings differed significantly in the two neighborhoods: The average daily price in \emph{Kreuzkölln} was 51.65 Euro ($Mdn=45$, $SD=37.84$) compared to 73.71 Euro in \emph{City West} ($Mdn=55$, $SD=64.88$). The difference was significant according to the nonparametric two-sided \textit{Wilcoxon rank-sum test}~\cite{Wilcoxon1945} with p-value $<\num{8.0e-9}$. However, the effect was only small according to \textit{Cohen's} $d$~\cite{Cohen1988, GibbonsHedekerOthers1993}, considering the thresholds described by Cohen~\cite{Cohen1992} ($d=0.49$). A factor that could bias the listing price is the accommodation type (whole apartments are usually more expensive than single rooms). However, the distribution of listing types was quite similar in both neighborhoods: In \emph{Kreuzkölln}, there were 390 (52.0\%) entire homes or apartments, 358 (47.7\%) private rooms, and 2 (0.3\%) shared rooms. In \emph{City West}, there were 116 (55.2\%) entire homes or apartments, 90 (42.9\%) private rooms, and 4 (1.9\%) shared rooms. \begin{table} \small \centering \caption{Professionalism of hosts in the two neighborhoods, measured by the total number of listings they provide on Airbnb (the listings are not necessarily located in one of the two neighborhoods, because hosts may also provide listings in other areas of Berlin or in other cities worldwide).} \label{tab:listings-per-host} \begin{tabular}{c | rrrrr r} \hline \multicolumn{1}{c|}{Listing created by host} & \multicolumn{1}{c}{\multirow{2}{*}{1}} & \multicolumn{1}{c}{\multirow{2}{*}{2}} & \multicolumn{1}{c}{\multirow{2}{*}{3}} & \multicolumn{1}{c}{\multirow{2}{*}{>3}} & \multicolumn{1}{c}{\multirow{2}{*}{NA}} & \multicolumn{1}{|c}{\multirow{2}{*}{...listings}} \\ \multicolumn{1}{c|}{with a total of...} & & & & & & \multicolumn{1}{|c}{} \\ \hline \hline Kreuzkölln & 622 (83.2\%) & 91 (12.2\%) & 20 (2.7\%) & 15 (2.0\%) & 2 (0.3\%) & \multicolumn{1}{|r}{750 (100\%)} \\ City West & 144 (68.9\%) & 39 (18.7\%) & 9 (4.3\%) & 17 (8.1\%) & 1 (0.7\%) & \multicolumn{1}{|r}{210 (100\%)} \\ \hline \hline Total & 766 (80.0\%) & 130 (13.6\%) & 29 (3.0\%) & 32 (3.3\%) & 3 (0.4\%) & \multicolumn{1}{|r}{960 (100\%)} \\ \hline \end{tabular} \end{table} The listings were provided by 898 different hosts (708 in \emph{Kreuzkölln} and 190 in \emph{City West}). To assess the number of professional hosts in the two areas, we retrieved the total number of Airbnb listings that those hosts provide. This includes listings in other areas of Berlin or other cities worldwide. We were able to successfully retrieve this information for 706 hosts in \emph{Kreuzkölln} and for 189 hosts in \emph{City West}. The retrieval failed for three hosts. There is no universally accepted threshold for the number of listings that a host must provide to be considered \emph{professional}. Nevertheless, we observed that most hosts (80\%) in the two areas provide only one listing. However, there is a considerable difference between the two neighborhoods: 31.1\% of the listings in \emph{City West} were provided by hosts with more than one listing, opposed to 16.9\% in \emph{Kreuzkölln} (see Table~\ref{tab:listings-per-host}). The degree of professional hosting on Airbnb seems to be higher in the traditional tourist hotspot \emph{City West} than in the new urban tourism hotspot \emph{Kreuzkölln}. To compare hosts' descriptions in the listings to \emph{visitBerlin}'s descriptions on their website (RQ3), we retrieved the English version of the borough pages of \emph{Neukölln}~\cite{visitBerlin2018h} and \emph{Charlottenburg-Wilmersdorf}~\cite{visitBerlin2018i} on July 5, 2018. Afterwards, we manually extracted all specific place names from those pages, ignoring vague descriptions. In the description of the borough \emph{Neukölln}, for example, the DMO refers to the borough's diverse built environment \emph{``from its estates of detached houses in the south to the high-rises in the Gropiusstadt neighbourhood''}~\cite{visitBerlin2018h}. Here, we only considered the specific place \emph{Gropiusstadt neighbourhood} and ignored the vague description of \emph{estates of detached houses in the south}. Both borough pages, \emph{Neukölln} and \emph{Charlottenburg-Wilmersdorf}, have the same structure: \begin{itemize} \item A header with the borough's name, including a slogan describing the area, \item pictures and brief descriptions of the DMO's \emph{``favorite places''} in the borough, \item a section about \emph{``what you need to know''} about the borough, \item and several paragraphs on selected topics. \end{itemize} Table~\ref{tab:places-dmo} lists the places mentioned by the DMO together with the number and percentage of Airbnb listings in which those places were likewise mentioned (see Section~\ref{sec:quantitative-analysis}). In their borough descriptions, \emph{visitBerlin} sometimes used places only to describe the location of other places or facilities. In the description of \emph{Neukölln}, for example, the DMO mentions \emph{``artist studios located between the Landwehr Canal, Sonnenallee and Hermannstraße}''~\cite{visitBerlin2018h}. Here, the artist studios are in focus, not the canal and the two streets, which are merely used to describe the location of those studios. In the table, we marked such places with an asterisk and used a lighter background color. \subsection{Qualitative Data Analysis} \label{sec:data-analysis} \begin{figure*} \centering \includegraphics[width=\columnwidth, trim=0.0in 0.0in 0.0in 0.0in]{figures/airbnb-listing-7738887-annotated} \caption{Exemplary Airbnb listing in the Kreuzkölln neighborhood (\url{https://airbnb.com/rooms/7738887}). For all listings in the samples, we qualitatively analyzed the title (1), the complete listing description (2), and the house rules (3). } \label{fig:airbnb-listing} \end{figure*} Figure~\ref{fig:airbnb-listing} shows an exemplary Airbnb listing from the \emph{Kreuzkölln} neighborhood and visualizes where the information we analyzed is located on the Airbnb website. The median length of the listing descriptions (including house rules) was 138.5 words for \emph{Kreuzkölln} and 117.5 words for \emph{City West}; the median title length was five words for both neighborhoods. To qualitatively analyze and compare the data, we drew a random sample of 100 listings (50 from each neighborhood) and imported them into the CAQDAS software \emph{MAXQDA}.\footnote{\url{https://www.maxqda.com/}} The subsequent qualitative analysis consisted of three phases: First, the two authors coded 25 listings from each neighborhood independently (\emph{initial coding}~\cite{Saldana2015}). In a second phase, the authors discussed their codings until they agreed on a common coding schema. In the last phase, one author coded the remaining 50 listings using the common coding schema (\emph{elaborative coding}~\cite{Saldana2015}) and after that, both authors discussed the final coding. The final coding schema focused on four main topics: \begin{enumerate} \item Which \emph{places} in the neighborhoods and in the city are mentioned? \item Which \emph{facilities} and \emph{people} in the neighborhoods are described? \item Which \emph{adjectives} are used to describe the neighborhoods? \item Which \emph{practices} are encouraged in the descriptions? \end{enumerate} For aspects (1), (2), and (3), we conducted a \emph{word-by-word coding}~\cite{Charmaz2014}, assigning the places / facilities / adjectives to corresponding categories that emerged from the data. For aspect (4), an \emph{in-vivo coding}~\cite{Charmaz2014, Saldana2015} approach was more appropriate. We assigned whole sentences encouraging certain practices to categories that again emerged from the data. In one \emph{Kreuzkölln} listing, for example, a host wrote the following: \emph{``Club amateur, let's enter the About Blank, Berghain or Griessmühl by bike! You'll look like proper Berliner! We are located from each for less than 15 minutes.''} We assigned this statement to the practice category \textsc{go clubbing/enjoy nightlife}. \subsection{Quantitative Data Analysis} \label{sec:quantitative-analysis} After we finished our qualitative analysis of the listing descriptions, we used the resulting coding schema to compare those descriptions to the borough descriptions of \emph{visitBerlin} (RQ3). We extracted the places from the borough descriptions on the DMO's website, as described in Section~\ref{sec:data-collection}, and compared those places with the places frequently mentioned by Airbnb hosts. Since we identified a disparity between what Airbnb hosts and the DMO consider important places (see Section~\ref{sec:comparison-dmo}), we decided to quantitatively search for the places mentioned by the DMO in their descriptions of \emph{Neukölln} and \emph{Charlottenburg-Wilmersdorf} in all 960 Airbnb listing descriptions we retrieved for the two neighborhoods \emph{Kreuzkölln} (located in \emph{Neukölln}) and \emph{City West} (located in \emph{Charlottenburg-Wilmersdorf}). That way, we were able to estimate the overlap between places mentioned by the hosts and the DMO. For each extracted place, we built a regular expression matching different spellings of the place. The `Kurfürstendamm', for example, is also known as `Ku'damm' or `Kudamm'. Moreover, the German umlaut `ü' can be represented as `ue', thus `Kurfuerstendamm' is an additional possible spelling. The regular expression we used in that case was: \begin{quote} \begin{verbatim} (?i:.*ku[^\n\t]+damm[^\n\t]*) \end{verbatim} \end{quote} The regular expression is case-insensitive and matches the complete line in which the pattern was found. Since the Kudamm was mentioned in the DMO's description of \emph{Charlottenburg-Wilmersdorf}, where the \emph{City West} neighborhood is located, we only searched the Airbnb listing descriptions we retrieved for that neighborhood. We utilized the programming language \emph{R} to search the listing descriptions for matches of the regular expression and found matches in 119 of 210 descriptions (56.7\%). We used this workflow for all places mentioned by the DMO and provide the R script containing the regular expressions as supplementary material~\cite{StorsBaltes2018}. The result of this analysis can be found in Table~\ref{tab:places-dmo}. Section~\ref{sec:comparison-dmo} summarizes our findings. \section{Results} \label{sec:results} In this section, we summarize key findings from our qualitative and quantitative data analyses. As described above, the four high-level concepts \emph{places}, \emph{facilities}, \emph{adjectives}, and \emph{practices} emerged from the qualitative data. Table~\ref{tab:places} shows the number of listings in which particular \emph{places} in the neighborhood or other parts of the city were mentioned. Table~\ref{tab:facilities} shows how many listings contained information on \emph{facilities} in the vicinity of the offered apartment or room. An important aspect for the construction of the neighborhoods are the \emph{adjectives} that hosts use in their descriptions. In our analysis, we only considered adjectives used to describe streets, places, or the whole neighborhood, excluding descriptions of the apartment or room. Figure~\ref{fig:adjectives} shows the adjectives that were mentioned in at least three different listings in one of the neighborhoods. We ordered them according to the number of listings they were used in and highlighted the ones that were used in the descriptions of both neighborhoods. The encouraged \emph{practices} are described throughout this section. In the following, we focus on the aspects that are most suitable to illustrate differences between the two neighborhoods. We provide the listing ID when referring to specific listings and provide the number of coded listings for important codes or categories. To distinguish those two attributes, the number of coded listings is in \textbf{bold} font and the listing ID is in regular font. \begin{figure*} \centering \includegraphics[width=0.75\columnwidth, trim=0.0in 0.0in 0.0in 0.0in]{figures/adjectives_3} \caption{Adjectives used to describe the neighborhood (only adjectives mentioned in $\ge 3$ listings are shown).} \label{fig:adjectives} \end{figure*} \subsection{Places and Facilities} \label{sec:places-facilities} To spatially reference the location of an Airbnb room or apartment in the city, it was common for hosts to name specific places in the neighborhood or in the city. This is particularly true for the short listing headlines, which we analyzed first. The most frequently mentioned places in those headlines were \emph{Berlin}, \emph{Neukölln}, and \emph{Kurfürstendamm} (often abbreviated as \emph{Kudamm}): \begin{quote} \emph{``Stay in the middle of West Berlin''} (10103432)\\ \emph{``Sunny room in the Heart of Neukölln''} (11321524)\\ \emph{``Bright DREAM LOFT at KuDamm 6 rooms''} (6931794) \end{quote} For visitors, the location of their accommodation is important. Being located centrally or peripherally in the city affects the time needed to get around. Moreover, an accommodation in close proximity to the facilities one intends to visit is often considered to be more comfortable. However, the Airbnb hosts in our case study did not promote their apartment through naming distinct sights in their listing headlines. Instead, they focused on places. As headlines lack additional context, potential guests need to carry certain space images in mind in order to understand what these places are about and what meanings they convey. Visitors have to decode the information inherent in such place names, particularly when solely reading the short listing headlines. Full listing descriptions, in contrast, contain much more information about certain places. In the case of \emph{Kurfürstendamm}, for example, hosts mainly referred to the large variety of shopping facilities along the street: \emph{``Kurfürstendamm, the 3.5 kilometer shopping paradise''} (65340). Naming \emph{Neukölln} in a listing description may produce a very different space image. The borough \emph{Neukölln} has long been associated with poverty, crime, and drugs. To this day, Neukölln is still the borough with the highest share of inhabitants threatened by poverty---in 2016, more than a quarter of its residents received social welfare~\cite{AmtfurStatistikBerlinBrandenburg2017d}. The borough has a very high share of residents with migration background (46\% as of June 2017~\cite{AmtfurStatistikBerlinBrandenburg2017e}) and drugs are still a prevalent issue~\cite{AbgeordnetenhausBerlin2017}. Nevertheless, the spatial reference \emph{Neukölln} \textbf{(41)} was named almost as frequently as \emph{Berlin} \textbf{(49)} over all analyzed Airbnb-listings. 80\% of the listings located in the neighborhood \emph{Kreuzkölln} referred to the borough's name. For hosts, it seems to be of particular relevance that their rooms or apartments are located in \emph{Neukölln}. Also the neighborhood's informal name \emph{Kreuzkölln}, which we used to denote the research area, was mentioned several times \textbf{(15)}. It combines the names of the two neighboring districts \emph{Kreuzberg} \textbf{(15)} and \emph{Neukölln}, but at the same time, it represents much more. The name \emph{Kreuzkölln} conveys a certain neighborhood image, including elements of the neighborhood's past, its recent development, typical facilities, and people who are attracted by its special mix: \begin{quote} \emph{``Kreuzkölln nightlife has almost outstripped Kreuzberg. New galleries, bars and restaurants are opening almost every week''} (178347)\\ \emph{``(...) the pulsing heart of Kreuzkölln district (one of the most hip and fastest developing areas since 2008)''} (912787)\\ \emph{``To be clear, this area is called the "Brooklyn of Berlin". Now you get the picture! Neukölln is one of the most appreciated parts of the German capital, the epicenter of hip''} (1559030) \end{quote} Airbnb hosts write their listing descriptions with a certain intention, namely to promote their offered room or apartment. As illustrated above, the name \emph{Neukölln} no longer solely refers to the social difficulties that the borough's government is facing. After a decade of urban renewal initiatives and still on-going projects~\cite{SenatsverwaltungfurStadtentwicklungundWohnenBerlin2007, SenatsverwaltungfurStadtentwicklungundWohnenBerlin2011}, the neighborhood's image is shifting. Airbnb hosts pick up and highlight this development in order to promote their rooms and apartments for visitors. In doing so, they contribute to the public discourse~\cite{DeutschePresseAgentur2017, Connolly2016, Dyckhoff2011, DeutschePresseAgentur2018}, framing the neighborhood as a \emph{trendy} and \emph{upcoming} area, a so-called \emph{Szenekiez}. The adjectives illustrated in Figure~\ref{fig:adjectives} directly refer to the neighborhood's transformation (\emph{upcoming}) and its new image (\emph{hip}, \emph{cool}, \emph{vibrant}, and \emph{trendy}). The district \emph{Charlottenburg} \textbf{(21)}, which belongs to the larger borough \emph{Charlotten\-burg\--\-Wilmers\-dorf}, is described very differently. The district's name is not mentioned as frequently as \emph{Neukölln}, which could indicate that the hosts assume that the former does not convey as much meaning as the latter. However, \emph{Charlottenburg} is often mentioned together with \emph{West Berlin} \textbf{(13)}. It seems to be important that \emph{Charlottenburg} and particularly the area we have denoted \emph{City West} \textbf{(6)} represents the western center of the formerly divided city: \begin{quote} \emph{``Very charming and convenient neighborhood in West-Berlin''} (16763002)\\ \emph{``This is a city center of West Berlin''} (491528) \end{quote} \emph{Neukölln} likewise belongs to the former area of \emph{West Berlin}~\cite{SenatskanzleiBerlin2018}, yet, this fact has not been mentioned once. Generally, \emph{City West} is described as a \emph{nice} and \emph{elegant} area, opposed to the \emph{hip}, \emph{cool}, \emph{trendy}, \emph{vibrant}, and \emph{upcoming} neighborhood \emph{Kreuzkölln} (see Figure~\ref{fig:adjectives}). The following quote illustrates how hosts construct \emph{City West}: \begin{quote} \emph{``Friendly, quiet and trendy area with beautiful old buildings, upscale shops and fine restaurants. People in Charlottenburg are down to earth, educated and wealthy. The typical Berlin Tourism is virtually non-existent. So, perfect for visitors who prefer museums rather than party miles.'' (5921316)} \end{quote} This quote indicates that the two analyzed neighborhoods differ greatly in terms of infrastructure and the people they attract. While \emph{Neukölln} is framed by many Airbnb hosts as \emph{``the place to be''} for a younger crowd, \emph{Charlottenburg} is demarcated from this Berlin image. As mentioned above, it is mainly depicted as \emph{famous}, \emph{beautiful}, \emph{nice}, and \emph{elegant} (see Figure~\ref{fig:adjectives}). In particular, these attributes are often used to characterize the main street \emph{Kurfürstendamm} and the historic square \emph{Savignyplatz}. \begin{table} \small \centering \caption{Places in the city or in the neighborhood, mentioned in the 100 analyzed Airbnb listings.} \label{tab:places} \begin{tabular}{ll rr} \hline \multicolumn{1}{c}{\textbf{Category}} & \multicolumn{1}{c}{\textbf{Subcategory}} & \multicolumn{2}{c}{\textbf{Number of listings}} \\ & & \multicolumn{1}{c}{Kreuzkölln} & \multicolumn{1}{c}{City West} \\ \hline \hline \textbf{City} & & & \\ & Berlin & 24 & 25 \\ & West Berlin & 1 &\cellcolor{VeryLightGray} 13 \\ & East Berlin & 1 & 2 \\ \cline{3-4} & & 24 & 32 \\ \hline \hline \textbf{Districts} & & & \\ & Neukölln &\cellcolor{VeryLightGray} 40 & 1 \\ & Charlottenburg & 1 &\cellcolor{VeryLightGray} 21 \\ & Kreuzberg &\cellcolor{VeryLightGray} 15 & 2 \\ \multicolumn{1}{c}{\scriptsize(unofficial)} & Kreuzkölln &\cellcolor{VeryLightGray} 15 & 0 \\ & Mitte & 5 & 3 \\ \multicolumn{1}{c}{\scriptsize(unofficial)} & City West & 0 & 6 \\ & Other & 4 & 1 \\ \cline{3-4} & & 46 & 26 \\ \hline \hline \textbf{Streets} & & & \\ & Kurfürstendamm & 0 &\cellcolor{VeryLightGray} 35 \\ & Weserstraße &\cellcolor{VeryLightGray} 17 & 0 \\ & Wilmersdorfer Straße & 0 & 9 \\ & Kantstraße & 0 & 7 \\ & Other & 9 & 6 \\ \cline{3-4} & & 21 & 36 \\ \hline \hline \textbf{Squares} & & & \\ & Savignyplatz & 0 &\cellcolor{VeryLightGray} 21 \\ & Alexanderplatz & 7 & 1 \\ & Hermannplatz & 5 & 0 \\ & Other & 5 & 4\\ \cline{3-4} & & 13 & 24 \\ \hline \hline \textbf{Sights} & & & \\ & Berlin Zoo & 1 &\cellcolor{VeryLightGray} 13 \\ & Opera (German/State) & 0 &\cellcolor{VeryLightGray} 10 \\ & KaDeWe & 0 &\cellcolor{VeryLightGray} 8 \\ & Charlottenburg Palace & 0 & 5 \\ & Fair/Messe & 0 & 4 \\ & Brandenburg Gate & 1 & 2 \\ & Other (neighborhood) & 1 & 7 \\ & Other (city) & 2 & 1 \\ \cline{3-4} & & 3 & 27 \\ \hline \hline \textbf{Parks} & & & \\ & Canal/Maybachufer &\cellcolor{VeryLightGray} 28 & 0 \\ & Görlitzer Park & 6 & 0 \\ & Tiergarten & 0 & 4 \\ & Tempelhofer Feld & 4 & 0 \\ & Lake Lietzensee & 0 & 4 \\ & Other & 1 & 2 \\ \cline{3-4} & & 32 & 10 \\ \hline \hline \end{tabular} \end{table} \begin{table} \small \centering \caption{Facilities and people in the neighborhood, mentioned in the 100 analyzed Airbnb listings.} \label{tab:facilities} \begin{tabular}{ll rr} \hline \multicolumn{1}{c}{\textbf{Category}} & \multicolumn{1}{c}{\textbf{Subcategory}} & \multicolumn{2}{c}{\textbf{Number of listings}} \\ & & \multicolumn{1}{c}{Kreuzkölln} & \multicolumn{1}{c}{City West} \\ \hline \hline \textbf{Transportation} & & & \\ & Public, sharing, etc. & 42 & 37 \\ \hline \hline \textbf{Gastronomy} & & & \\ & Restaurants &\cellcolor{VeryLightGray} 32 &\cellcolor{VeryLightGray} 25 \\ & Cafés &\cellcolor{VeryLightGray} 27 & 13 \\ & Bars and Pubs &\cellcolor{VeryLightGray} 37 & 9 \\ \cline{3-4} & & 40 & 25 \\ \hline \hline \textbf{Shopping} & & & \\ & Non-grocery &\cellcolor{VeryLightGray} 18 & \cellcolor{VeryLightGray} 22 \\ & Grocery & 10 & 11 \\ & Weekly markets &\cellcolor{VeryLightGray} 13 & 5 \\ \cline{3-4} & & 25 & 23 \\ \hline \hline \textbf{Culture} & & & \\ & Art/Galeries &\cellcolor{VeryLightGray} 11 & 3 \\ & Night clubs &\cellcolor{VeryLightGray} 12 & 2 \\ & Opera & 0 &\cellcolor{VeryLightGray} 8 \\ & Cinemas & 1 & 3 \\ & Theaters & 2 & 2 \\ & Other & 3 & 3 \\ \cline{3-4} & & 21& 16 \\ \hline \hline \textbf{People} & & & \\ & Background, age, etc. & 8 & 4 \\ \hline \hline \textbf{Parks} & & & \\ & See places (Table~\ref{tab:places}) &\cellcolor{VeryLightGray} 32 & 9 \\ \hline \hline \textbf{Other} & & & \\ & Spa, ATM, etc. & 2 & 8 \\ \hline \hline \end{tabular} \end{table} \subsection{Streets and Squares} \label{sec:streets-squares} Large streets and squares structure the urban fabric~\cite{Levy1999} of a neighborhood. In our case study, for each neighborhood one important street emerged from the listing descriptions. \emph{Kurfürstendamm} is the street that is mentioned most frequently \textbf{(35)} and it is primarily framed as Berlin's \emph{famous} shopping street: \emph{``Berlin's main shopping street Kurfürstendamm offers you Berlin's widest choice of shops, from the standard clothes chains to the exclusive designer store''} (989871). \emph{Weserstraße} is \emph{Kurfürstendamm}'s counterpart in \emph{Kreuzkölln}. Yet, the street's name is not mentioned in as many listing descriptions as \emph{Kurfürstendamm}, which could indicate that the street is not as popular yet. Compared to \emph{Kurfürstendamm}, \emph{Weserstraße} is described very differently: Contrary to the practice of \emph{shopping} encouraged by Airbnb hosts in \emph{City West}, \emph{Weserstraße} mainly provides gastronomic infrastructure, in particular the trilogy of \emph{restaurants}, \emph{cafés}, and \emph{bars/pubs} is mentioned (see Table~\ref{tab:facilities}). In total, 40 out of 50 analyzed listings in \emph{Kreuzkölln} referred to such facilities, opposed to 25 listings in \emph{City West}. Hosts did not only mention the prevalent infrastructure as assets of the neighborhood, they likewise inform related practices such as \emph{``enjoy amazing breakfast, lunch and delightful dinners''} (912787), \emph{``have a beer''} (10005680) in the numerous bars and restaurants filled with \emph{``young people''} (7738887), and \emph{``enjoy a typical Berlin summer night''} (10005680). The most frequently named square to go out for food and drinks in \emph{City West} is \emph{Savignyplatz} \textbf{(21)}. However, in that neighborhood hosts focused more on \emph{restaurants} than on \emph{cafés} and \emph{bars} (see Table~\ref{tab:facilities}). \emph{Savignyplatz} was described as an \emph{``upscale part of town''} (16763002) that is \emph{``legendary and historic''} (16763002), offering \emph{``nice restaurants''} (6931794) in an environment with \emph{``flair and ambience''} (6931794). \emph{Weserstraße}, in contrast, was framed as \emph{``one of the most trendy and lively streets of Neukölln''} (1559030) that has \emph{``some of the coolest bars in Berlin''} (7616493). Hence, Airbnb hosts not only contribute to a discursive image construction of a neighborhood, they also advise their guests about what to do and how to behave---they describe adequate place performances. The location of the offered room or apartment was frequently mentioned in both areas. Two aspects seem to be particularly important (see Figure~\ref{fig:adjectives}): First of all, being located in a \emph{central} part of Berlin was mentioned by many hosts in \emph{City West} \textbf{(17)}. Interestingly, this attribute was rarely used by \emph{Kreuzkölln} hosts \textbf{(3)}, despite the fact that both neighborhoods have approximately the same walking distance to the \emph{Brandenburg Gate} (about 4 km). The second aspect is being located in a (relatively) \emph{quiet} street of the neighborhood, which was also mentioned more frequently in \emph{City West} compared to \emph{Kreuzkölln} \textbf{(11 vs. 7)}. \subsection{Sights, Parks, and Markets} \label{sec:sights-parks-markets} Sights and other attractions, such as museums or art galleries, are often considered to be facilities that are primarily tourism-related and attract visitors. In Ashworth and Tunbridge's \emph{tourist-historic city model}~\cite{AshworthTunbridge1990, AshworthTunbridge2000}, they belong to the decisive elements that define urban tourism space. This section deals with sights that were mentioned in Airbnb listings. We evaluate the relevance that traditional sights have in the descriptions and analyze what hosts signify to be worth visiting in case their neighborhood does not provide any major sights. Several hosts in \emph{City West} refer to internationally known sights that are located in close proximity to their apartments. They frequently name the \emph{Berlin Zoo} \textbf{(13)}, the \emph{opera houses} \textbf{(10)}, the \emph{KaDeWe} \textbf{(8)}, which is one of Europe's largest department stores, or the \emph{Charlottenburg Palace} \textbf{(5)}. All of these sights, apart from the \emph{State Opera}, are located in the \emph{City West} neighborhood. The \emph{Kaiser Wilhelm memorial church}, which is a landmark building dominating the \emph{Breitscheidplatz} square, located at the beginning of \emph{Kurfürstendamm}, was only mentioned in two listing descriptions. Other famous Berlin sights such as Reichstag, Television Tower, or Gendarmenmarkt~\cite{visitBerlin2018} were not mentioned in any of the listings. Another result of our analysis is that only sights in close proximity to the offered room or apartment seem to be relevant for the hosts to describe. While street names, in particular \emph{Kurfürstendamm}, are mentioned in 36 out of 50 analyzed listings in \emph{City West}---\emph{Berlin Zoo}, the most frequently mentioned sight, appears only in 13 listing descriptions (see Table~\ref{tab:facilities}). Even more striking, is the fact that references to well-known tourist attractions are almost non-existent in \emph{Kreuzkölln}'s Airbnb listings. As discussed earlier, the neighborhood is de facto lacking major sights. Nevertheless, \emph{visitBerlin}, the city's DMO, promotes, for example, the former area of the \emph{Kindl brewery}, which is now transformed into a center for contemporary art~\cite{visitBerlin2018b}. In addition, \emph{Britz Castle}~\cite{visitBerlin2018c} and the \emph{Hufeisensiedlung}~\cite{visitBerlin2018d}, an UNESCO world heritage site, are promoted. However, these sites did not appear in any of the \emph{Kreuzkölln} Airbnb listing descriptions we analyzed. Instead of such traditional tourist sights, hosts framed other facilities as being relevant for visitors. In particular, the small waterway \emph{Landwehrkanal} at the northern border of \emph{Kreuzkölln} was marked to be important. This is interesting, because \emph{Landwehrkanal} is not a park offering large green spaces or street furniture. It is primarily a canal set in concrete, surrounded by a cobblestone street at the \emph{Neukölln} side and a bicycle path in \emph{Kreuzberg}. The quality and accessibility of these streets are so bad that the city district office of Friedrichs\-hain\--\-Kreuz\-berg is planning to renovate the area~\cite{BezirksamtFriedrichshainKreuzberg2018}. For local Airbnb hosts, however, this facility is an attraction, it is framed as a \emph{beautiful} and \emph{famous} place: \begin{quote} \emph{``The place is next to the beautiful canal in Neukölln''} (7738887)\\ \emph{``Right next door is the famous `canal' that separates Kreuzberg from Neukölln''} (2246227) \end{quote} Furthermore, the place names \emph{Landwehrkanal} and \emph{Maybachufer} (the banks of the canal) appeared in more than half of the analyzed listings in \emph{Kreuzkölln}. This high amount of namings indicates that Airbnb hosts regard this facility as significant for visitors. However, \emph{Landwehrkanal} is hardly a place that can be experienced through the typical tourist practice of \emph{gazing}. According to Urry's seminal work \emph{``The Tourist Gaze''}~\cite{Urry1990}, the practice of gazing upon landscapes, places, objects, or people is a defining element of tourism. It is closely related to the practice of sightseeing that is common in classic city tourism. Instead of gazing upon \emph{Landwehrkanal}, hosts encouraged other practices. They invited guests to \emph{``take a bike ride along the tree-lined Landwehr Canal''} (4243519), to \emph{``stroll''} along (6089778), to \emph{``hang out''} (879602), or to \emph{``jog or take walks''} (16462941). One host even related being at the canal to a common local practice: \emph{``(...) around the corner (...) is the canal where you can have a beer and enjoy a typical Berlin summer night''} (10005680). Highlighting that relaxing at \emph{Landwehrkanal} is a typical local practice enables visitors to behave in a similar way. They learn how to perform in order to immerse themselves in the local everyday life and to deliberately blur the boundaries between visitor and resident. Referring to places such as \emph{Maybachufer}, \emph{Tempelhofer Feld} \textbf{(4)}, or \emph{Görlitzer Park} \textbf{(6)} in the Airbnb listings guides visitors to find places that are mainly frequented by local residents for their leisure activities. Generally, they are not worth visiting due to their physical appearance---\emph{Tempelhofer Feld} might be an exception due to its size. Instead, it is the way hosts describe these areas that makes them attractive for visitors who want to experience Berlin like a local: \begin{quote} \emph{``One of my favorite places in Berlin [is] the Tempelhof airfield, [...] a lovely park''} (1590153)\\ \emph{``Tempelhofer Feld, the former aircraft field that has been transformed into a marvelous park''} (5963351) \end{quote} These findings, however, cannot be transferred to all parks mentioned in the analyzed listings. In \emph{City West} listing descriptions, hosts referred to the large and famous \emph{Tiergarten} \textbf{(4)} and \emph{Lake Lietzensee} \textbf{(4)}. However, only one host encouraged the practice of \emph{``relaxing''} (2141544). In all other cases, parks are mentioned together with other sights, particularly in the case of \emph{Tiergarten}. Despite its size of 210 hectares~\cite{visitBerlin2018e}, and its location in the center of Berlin, adjoining \emph{Kurfürstendamm}, not much attention was given to it in the Airbnb listings. In line with \emph{parks} being described as everyday facilities worth visiting, local \emph{markets} and regular \emph{grocery} shopping gains importance. \emph{Gastronomic} offers and \emph{shopping} facilities are of relevance in both neighborhoods, although their framings vary. While attention in \emph{City West} lies on expensive and exclusive \emph{designer stores} (989871, 846489), cheap and \emph{second-hand bargains} (8376421) are promoted in \emph{Kreuzkölln}. In particular, the large department store \emph{KaDeWe} \textbf{(8)} is mentioned as a shopping paradise and as an attraction to visit in \emph{City West}. The corresponding facilities in \emph{Kreuzkölln} are the immigrant food market and the bi-weekly flea market \textbf{(13)} at \emph{Maybachufer}. This food market, which originally satisfied the needs of the large immigrant population in \emph{Kreuzberg} and \emph{Neukölln}~\cite{AmtfurStatistikBerlinBrandenburg2017e}, is now being marked as a multicultural place for grocery shopping: \begin{quote} \emph{``very rich and colorful turkish market at Maybachufer, where you can buy fresh fish, meet, turkish nuts and dried fruits etc.''} (1590153)\\ \emph{``You will find a huge open air grocery market twice a week (extra good bio products) and an amazing flea market every second Sunday''} (7738887) \end{quote} In these listing descriptions, the history of the neighborhood \emph{Kreuzkölln} is prevalent. Airbnb hosts are steering tourists' attention towards the market, which is rooted in the neighborhood. They highlight its long tradition and the fact that it is still frequented by both long established as well as short-term inhabitants. This mix of people visiting it, and the offers that are still primarily dedicated to the local residents, mark it as an authentic place---a place that is not artificially designed to meet only visitors' needs. \begin{table} \small \centering \caption{Places mentioned by \emph{visitBerlin}, the city's DMO, in their borough descriptions of \emph{Neukölln} and \emph{Charlottenburg-Wilmersdorf}, the boroughs in which the two neighborhoods \emph{Kreuzkölln} and \emph{City West} are located; in brackets, we provide number and percentage of Airbnb listings in the corresponding neighborhood (\emph{Kreuzkölln}: $n=750$, \emph{City West}: $n=210$) mentioning the place; asterisk indicates places that were only used to describe the location of other places.} \label{tab:places-dmo} \begin{tabular}{l rr} \hline & \multicolumn{2}{c}{\textbf{Places from DMO borough descriptions and matches in Airbnb listings}} \\ \multicolumn{1}{c}{\textbf{Paragraph}} & \multicolumn{1}{c}{Neukölln} & \multicolumn{1}{c}{Charlottenburg-Wilmersdorf} \\ \hline \hline \multirow{3}{*}{\textbf{Header}} &\cellcolor{VeryLightGray} Neukölln (district) (468 | 62.4\%) &\cellcolor{VeryLightGray} Charlottenburg (district) (90 | 42.9\%) \\ & &\cellcolor{VeryLightGray} Wilmersdorf (district) (43 | 20.5\%) \\ & &\cellcolor{VeryLightGray} City West (neighborhood) (16 | 7.6\%)\\ \hline \hline \multirow{5}{*}{\textbf{Favorite places}} & Richardplatz (square) &\cellcolor{VeryLightGray} Bikini Berlin (concept mall) (19 | 9.0\%) \\ & Schloss Britz/Gutspark (palace/park) &\cellcolor{VeryLightGray} A-Trane (jazz club) (7 | 3.3\%)\\ & Capt'n Crop (hat making studio) &\cellcolor{VeryLightGray} Teufelsberg (hill) (1 | 0.5\%)\\ & Kindl (art space) & Bröhan Museum (museum)\\ & & Rüdesheimer Platz (square)\\ \hline \hline \multirow{4}{*}{\textbf{Need to know}} &\cellcolor{VeryVeryLightGray} Kreuzberg (227, 30.3\%)* &\cellcolor{VeryLightGray} Kurfürstendamm (street) (119 | 56.7\%) \\ &\cellcolor{VeryLightGray} Schillerkiez (neighborhood) (5 | 0.7\%) &\cellcolor{VeryLightGray} Grunewald (district/park) (3 | 1.4\%)\\ & Gropiusstadt (neighborhood) & \\ & Lavanderia Vecchia (restaurant) & \\ & Eins 44 (restaurant) & \\ \hline \hline \multirow{4}{*}{\textbf{Other}} & \cellcolor{VeryLightGray} Weserstraße (street) (189 | 25\%) &\cellcolor{VeryLightGray} Kurfürstendamm (street) (119 | 56.7\%) \\ & \cellcolor{VeryVeryLightGray} Landwehr Canal (canal) (92 | 12\%)* &\cellcolor{VeryLightGray} West Berlin (part of city) (28 | 13.3\%) \\ & \cellcolor{VeryVeryLightGray} Sonnenallee (street) (62 | 8.3\%)* &\cellcolor{VeryLightGray} Charlottenburg Pal. (palace/park) (20 | 9.5\%) \\ & \cellcolor{VeryVeryLightGray} Hermannstraße (street) (7 | 0.9\%)* &\cellcolor{VeryLightGray} Bikini Haus (mall) (19 | 9.0\%) \\ &\cellcolor{VeryLightGray} Tier (bar) (3 | 0.4\%) &\cellcolor{VeryLightGray} City West (neighborhood) (16 | 7.6\%) \\ &\cellcolor{VeryLightGray} Ä (pub) (1 | 0.1\%) &\cellcolor{VeryLightGray} Olympic Stadium (stadium/tower) (5 | 2.4\%)\\ &\cellcolor{VeryLightGray} Körner Park (park) (1 | 0.1\%) &\cellcolor{VeryLightGray} Uhlandstraße (street) (7 | 3.3\%) \\ & Heimathafen (arts/theater) &\cellcolor{VeryLightGray} East Berlin (part of city) (3 | 1.4\%) \\ & Neuköllner Oper (opera) &\cellcolor{VeryLightGray} Grunewald (district/park) (3 | 1.4\%) \\ & Vin Aqua Vin (bar) &\cellcolor{VeryLightGray} Waldorf Astoria (hotel) (2 | 1.0\%) \\ & Schloss Britz (palace) &\cellcolor{VeryLightGray} Funkturm (radio tower) (2 | 1.0\%) \\ & Britzer Garten (park) & \cellcolor{VeryLightGray} Wannsee (lake) (2 | 1.0\%) \\ & & \cellcolor{VeryLightGray} Teufelsberg (hill) (1 | 0.5\%) \\ & & Schloßstraße (street) \\ & & Rüdesheimer Straße/Platz (street/square) \\ & & Fasanenplatz (square) \\ & & Ludwigkirchplatz (square) \\ & & Haus der Berliner Festspiele (theater) \\ & & Bar jeder Vernunft (theater) \\ & & Havel (river) \\ & & Waldbühne (amphitheater) \\ & & Grunewaldturm (tower) \\ \hline \hline \end{tabular} \end{table} \subsection{Comparison with DMO Website} \label{sec:comparison-dmo} After extracting the places mentioned in \emph{visitBerlin}'s description of the two boroughs in which the neighborhoods \emph{Kreuzkölln} and \emph{City West} are located (see Section~\ref{sec:data-collection}), we first compared those places with the places frequently mentioned by Airbnb hosts (gray background in Table~\ref{tab:places}). Then, we conducted a follow-up quantitative analysis to search for the places mentioned by the DMO in all 960 Airbnb listing descriptions we retrieved (see Section~\ref{sec:quantitative-analysis} for a description of our search approach). Table~\ref{tab:places-dmo} shows the results of this analysis. Regarding the district names, it was more important for hosts in \emph{Kreuzkölln} to refer to the district name \emph{Neukölln} as it was for hosts in \emph{City West} to refer to \emph{Charlottenburg} or \emph{Wilmersdorf} (those two districts were merged in 2001 and \emph{City West} is part of former \emph{Charlottenburg}). We also observed this trend in our qualitative analysis (see Section~\ref{sec:places-facilities}). Regarding the DMO's favorite places, it is noteworthy that none of the four favorite places in \emph{Neukölln} were mentioned by the Airbnb hosts in \emph{Kreuzkölln}, while hosts in \emph{City West} named three out of five places in \emph{Charlottenburg-Wilmersdorf}. The same trend can also be observed for the other sections of the descriptions, where the overlap between the DMO and the hosts in \emph{City West} was relatively large. Interestingly, while \emph{Neukölln}'s neighboring district \emph{Kreuzberg} was named in 227 \emph{Kreuzkölln} listings (30.3\%), \emph{visitBerlin} only used the name \emph{Kreuzberg} to describe the location of the \emph{Kreuzkölln} neighborhood, without mentioning its name. The DMO just referred to the neighborhood as \emph{``a vibrant multicultural area around the border to Kreuzberg''}~\cite{visitBerlin2018h}. While Airbnb hosts relate their \emph{upcoming} neighborhood \emph{Neukölln} to the already better known \emph{Kreuzberg}, the DMO tries to market the two boroughs independently from each other. Hosts and \emph{visitBerlin} agree that \emph{Kudamm} and \emph{Weserstraße} are the main streets in the two areas. The DMO also motivates similar practices like the Airbnb hosts: \emph{shopping} for \emph{Kudamm} and \emph{having a drink at a bar} for \emph{Weserstraße}. The latter practice was also motivated by the hosts in \emph{City West} for the area around \emph{Savignyplatz}. However, the DMO did not even mention this square or the practice of going out for a drink in their description of \emph{Charlottenburg-Wilmersdorf}. Our qualitative analysis revealed that traditional tourist sights such as \emph{Berlin Zoo} or \emph{KaDeWe} were primarily named by the hosts in \emph{City West}---\emph{visitBerlin} did not mention those sights in their borough description of \emph{Charlottenburg-Wilmersdorf}. The only obvious overlap between Airbnb hosts and the DMO is \emph{Charlottenburg Palace}. The qualitative analysis also suggested that hosts in the new urban tourism hotspot \emph{Kreuzkölln} reframe everyday places such as the \emph{Landwehrkanal}/\emph{Maybachufer}, including its weekly \emph{market}, as being significant for tourists. The DMO did not mention the \emph{market} or the \emph{Maybachufer}, and only referred to \emph{Landwehrkanal} to describe the location of a different facility. Generally, the overlap between places mentioned by \emph{visitBerlin} and places mentioned by the Airbnb hosts is much larger in \emph{City West} compared to \emph{Kreuzkölln}, even though we retrieved a considerably smaller number of listings in \emph{City West} (210 vs. 750). While hosts in \emph{Kreuzkölln} focused on places in the neighborhood and nearby, which the DMO did not consider to be noteworthy, hosts in \emph{City West} mentioned places all over the district of \emph{Charlottenburg-Wilmersdorf}, which were also named by the DMO. \section{Discussion} \label{sec:discussion} Questions regarding the conceptual nature of space and place have been addressed in both CSCW and human geography research. For CSCW scholars, the main motivation to use spatial metaphors has long been to transfer spatial structures from the physical world into the digital realm~\cite{HarrisonDourish1996}. Examples include the organization of the virtual workspace~\cite{BrewerDourish2008} or the creation of new collaborative virtual environments~\cite{HarrisonDourish1996}. More recent considerations, however, established that space cannot solely be regarded as a \emph{``natural fact---a collection of properties that define the essential reality of settings of action''}~\cite{Dourish2006}; space, like place, is a social product~\cite{Dourish2006}. The proliferation of peer-to-peer (information) sharing platforms have influenced the way people encounter (urban) space. As Brewer and Dourish point out, mobile communication platforms not just produce another level of \emph{``virtual space''} on top of the physical space---they \emph{``allow people to encounter and appropriate existing spaces in different ways''}~\cite{BrewerDourish2008}. Technology thus becomes a part of how people encounter (urban) space and it \emph{``is shaped through technologically mediated mobility''}~\cite{BrewerDourish2008}. Our research draws on these theoretical considerations and expands their implications into the field of leisure and tourism, a research area that already received attention in CSCW contributions~\cite{BrownChalmers2003, Dourish2006}. In this paper, we investigated how online platforms such as Airbnb engage in the co-production of (new) urban tourism space. We considered the \emph{``spatial turn''}~\cite{Soja1989} in social sciences as the starting point for our reflections on the nature of space as being socially constructed. We then used the \emph{tourist-historic city model}~\cite{AshworthTunbridge1990} to illustrate that traditional framings of urban tourism space as clusters of historical sights, leisure facilities, and gastronomic infrastructure do not suffice to explain the emergence of new urban tourism areas in residential neighborhoods. The proliferation of mobile technologies and changed tourist behavior led to rising visitor numbers in \emph{off the beaten track} localities~\cite{MaitlandNewman2009}. Urban tourism scholars discuss this phenomenon as \emph{new urban tourism}~\cite{FullerMichel2014}. To explain how residential neighborhoods lacking major sights can gain significance for visitors, we utilized the concepts of \emph{representations}~\cite{Iwashita2003, PritchardMorgan2000, Saarinen2004} and \emph{performances}~\cite{BrenholdtHaldrupOthers2004, Edensor1998, Edensor2000, Edensor2001, Larsen2008} as two theoretical lenses. We argued that Airbnb hosts publish strategically produced representations of their neighborhoods online and thus alter the spatial discourse~\cite{Davis2005, Saarinen2004}. They endow residential neighborhoods with new meanings, encourage place-specific practices, and consequently co-construct places of significance for visitors. By the means of such collaboratively produced spatial representations, residential neighborhoods transform into (new) urban tourism destinations. In our case study, we considered one example of such digital representations. We qualitatively analyzed how the two Berlin neighborhoods \emph{Kreuzkölln} and \emph{City West} are digitally constructed by Airbnb hosts in their listing descriptions. Moreover, we quantitatively investigated to what extend mentioned places differ between the listing descriptions and the digital representation of the corresponding boroughs provided by Berlin's DMO. We found that the types of places and sights described greatly differ between the two neighborhoods \emph{Kreuzkölln} and \emph{City West}. While \emph{Kreuzkölln} hosts mainly reframed everyday places to be worth visiting, \emph{City West} hosts primarily focused on well-known sights that were likewise promoted by the DMO. In the neighborhood of \emph{Kreuzkölln}, the places and facilities described to be worth visiting originally served the needs of local residents. A \emph{canal} bordering \emph{Neukölln} (\emph{Landwehrkanal}) and a weekly \emph{food market} were marked as the areas' highlights. These facilities, however, do rarely attract visitors due to their physical appearance. Instead, Airbnb hosts convey specific meanings to these places. In their neighborhood descriptions, hosts signify \emph{Landwehrkanal} as a beautiful place to spend a typical Berlin summer night and describe the food market as an exotic place. Hosts encourage their guests to buy typical food in order to experience the neighborhoods' multicultural atmosphere. As a result of these neighborhood descriptions, certain space images are collaboratively constructed and (re-)produced by Airbnb hosts. In the case of \emph{Kreuzkölln}, a residential area and its prevalent local infrastructure is reinterpreted as a touristic place. The second neighborhood \emph{City West}, in contrast, provides facilities that are generally understood as sights or attractions. In their listing descriptions, hosts focused primarily on sights in close proximity to their room or apartment. Iconic architecture that is located further away, such as \emph{Brandenburg Gate} or \emph{Reichstag}, was rarely mentioned. Airbnb hosts in \emph{City West} steered visitors' attention mainly to \emph{Kurfürstendamm}, a large street that they signified as a shopping paradise, and to \emph{Savignyplatz}, which they described as a historic square full of restaurants. Hosts primarily encouraged the practices of \emph{shopping} and \emph{eating out}, which they directly related to the prevalent infrastructure. They do not need to reinterpret the area, as in the case of \emph{Kreuzkölln}, to mark it as attractive for visitors. Instead they derive its attractiveness from the neighborhood's past. They refer to the famous \emph{Kurfürstendamm} and to \emph{KaDeWe}, one of the largest and oldest department stores in Europe, founded in 1907. Hence, \emph{City West} hosts reproduce long-established images and practices of their neighborhood in their listing descriptions. In both cases, the analyzed neighborhoods are signified by hosts as places worth visiting. While \emph{Kreuzkölln} hosts reinterpret everyday facilities, \emph{City West} hosts reproduce existing space images. Hence, hosts in both neighborhoods contribute to the perception of their neighborhoods through reinterpreting or reproducing space images---they construct them as tourist places. The quantitative analysis revealed that the overlap between the DMO's borough descriptions and hosts' neighborhood descriptions is larger in the traditional urban tourism hotspot \emph{City West} compared to the new urban tourism hotspot \emph{Kreuzkölln}. In both districts, the DMO mainly focused on classic, material sights. Many of the places located in \emph{Charlottenburg-Wilmersdorf} were also mentioned by Airbnb hosts in \emph{City West}. In contrast, none of the DMO's favorite places in the borough \emph{Neukölln} were mentioned by \emph{Kreuzkölln} hosts. Those hosts instead focused on local infrastructure and everyday places such as the \emph{Landwehrkanal}/\emph{Maybachufer} and the \emph{markets}, which the DMO did not consider to be noteworthy. Interestingly, \emph{Landwehrkanal} appears in \emph{visitBerlin}'s new tourism concept \emph{``Berlin-Tourismus 2018+''}~\cite{DWIFConsulting2017} as a place that is gaining increasing visitor attention. The authors estimate 27\% of the \emph{Landwehrkanal} visitors to be \emph{``new urban tourists''}. The DMO's rising awareness regarding \emph{Landwehrkanal} as a tourist attraction indicates that Airbnb hosts' place framings already play a crucial role in steering visitor attention and mobility. In summary, our findings show how space images are collaboratively constructed in Airbnb listings and how they differ between the analyzed neighborhoods. We found that Airbnb hosts in traditional tourist hotspots rather reproduce existing place images, which have previously been co-produced by the city's DMO. In contrast, hosts in new urban tourism hotspots tend to reinterpret everyday places and endow them with new meanings, but such places are often not regarded as being significant for visitors by the DMO. The new urban tourism hotspot \emph{Kreuzkölln}, for example, seems to provide a variety of attractions according to Airbnb hosts, but is not promoted as a touristic place by Berlin's DMO. Since the DMO seems to focus on material sights in general, they are unlikely to promote residential areas that are lacking such infrastructure. On peer-to-peer platforms like Airbnb, in contrast, everyday places gain importance and are marketed. We thus hypothesize that digital information sharing platforms are more important in the production of new urban tourism areas than the city's DMO. Our finding that, through digital representations of space in Airbnb listing descriptions, mundane places can gain significance for visitors and thus transform into tourist places, is an important contribution, because previous research mainly focused on spatial representations and destination images produced by the city's DMO or other governmental representatives~\cite{ChenChen2016, PritchardMorgan2000}. We, in contrast, have illustrated how space images can likewise be produced and reframed collaboratively on online platforms such as Airbnb. Against this background, we argue that the power to endow space with new meanings is nowadays more evenly distributed among actors. The DMO or the tourism industry in general are no longer the only ones pre-interpreting tourism space. By the means of digital technologies and online platforms, local people can likewise contribute to the framing of places. A last aspect we want to discuss here is how digital technologies encourage particular appropriations of space. In our case study, we analyzed performances motivated in Airbnb listings. For example, we found that Airbnb hosts in both neighborhoods encourage the practice of \emph{going out} in their listing descriptions. Moreover, while hosts in \emph{City West} rather focused on the practice of \emph{shopping}, likely motivated by the presence of \emph{KaDeWe} and \emph{Kurfürstendamm} in close proximity, Airbnb hosts in \emph{Kreuzkölln} encouraged their guests to \emph{relax} at the shore of \emph{Maybachufer} or to \emph{enjoy the nightlife} in one of the various bars and restaurants nearby. The practice of \emph{sightseeing}, which is traditionally regarded as a typical tourist activity, plays a subordinate role. These findings illustrate that Airbnb hosts not only reproduce or reinterpret spatial representations in their listings, but also influence the way space is enacted through the practices they encourage. As a direction for future work, we suggest to add an ethnographic perspective to our research design in order to investigate how exactly places are enacted by Airbnb guests. \section{Conclusion and Future Work} In this paper, we illustrated how urban tourism space is (re-)produced digitally and collaboratively on online platforms. In particular, we investigated how Airbnb hosts construct their neighborhoods as touristic places in their listing descriptions. We followed a constructionist notion of space, building on existing research in human geography~\cite{Soja2009} and CSCW~\cite{HarrisonDourish1996, Dourish2006}. We understand Airbnb listing descriptions as \emph{representations} of space~\cite{Iwashita2003, PritchardMorgan2000, Saarinen2004}, produced by Airbnb hosts and read by potential guests, which have the power to influence the discourse about an area and the way places are appropriated. For our empirical study, we collected Airbnb listing data from the two Berlin neighborhoods \emph{Kreuzkölln} and \emph{City West} and qualitatively analyzed a random sample of 100 listing descriptions. We found that, in the description of their neighborhood, hosts primarily focused on facilities in close proximity to their apartment. Well-known sights that are further away were of little importance. In the neighborhood \emph{Kreuzkölln}, which is basically lacking any major sights, hosts reframed local everyday places as being of significance for visitors. The shores of \emph{Maybachufer}, a canal bordering \emph{Neukölln}, and a weekly food \emph{market} were framed as the areas' highlights. These facilities are no sights in a traditional sense. Instead, Airbnb hosts reinterpret such mundane places and convey new meaning to them. Our qualitative analysis of the listings in the \emph{City West} neighborhood revealed that hosts mainly focused on traditional sights, such as \emph{Berlin Zoo} and \emph{KaDeWe}, and motivated related practices. Our quantitative analysis has shown that the space construction between Airbnb hosts and the DMO is more similar in traditional tourism hotspots (like \emph{City West}) compared to new urban tourism hotspots (like \emph{Kreuzkölln}). We conclude that online platforms such as Airbnb play a crucial role in (re-)directing visitors' attention to less `touristified' neighborhoods. Moreover, we illustrated how encouraged place-specific \emph{performances} help visitors to appropriate such neighborhoods. Our research approach opens up various directions for future work. Particular projects include analyzing the construction of tourism space in Airbnb listings located in other neighborhoods or cities, but it also seems promising to compare how tourism space is constructed on other platforms such as Instagram or TripAdvisor. Another direction would be to scale the approach we followed in our quantitative analysis, comparing the DMO's descriptions of all twelve boroughs to all Airbnb listing descriptions in Berlin. That way, one could test the hypothesis if the DMO's descriptions are more likely to match the Airbnb listing descriptions in traditional urban tourism hotspots like \emph{City West}. In our case study, the Airbnb listing density was much higher in the new urban tourism hotspot \emph{Kreuzkölln}. One could use this information, together with other data retrieved from Airbnb listings, to classify neighborhoods, similar to Venerandi et al.'s approach~\cite{VenerandiQuattroneOthers2015}. An open question that could be investigated using surveys and interviews is if hosts are aware of their role as producers of space, in particular in new urban tourism hotspots. On a more general level, understanding tourism space as being socially constructed through diverse forms of \emph{representations} and \emph{performances} enables researchers to further investigate the discourse about a place. This aspect is becoming increasingly important in light of the proliferation of sharing platforms for different kinds of information such as images and films. Traditionally, the destination's management and marketing organization was the dominant actor steering tourists' attention and action spaces in the city. Sharing platforms such as Airbnb, Instagram, and TripAdvisor are now overtaking this role. Thus, people producing digital content on these platforms have nowadays a large influence on how certain places are perceived. Considering this development, it appears to be very promising to foster collaboration between social and information sciences in order to understand how digital media impacts our perception of reality. \bibliographystyle{ACM-Reference-Format}
{ "attr-fineweb-edu": 2.232422, "attr-cc_en_topic": 0, "domain": "arxiv" }
BkiUdPw4ubng3m-N42N4
\section{Introduction} Soccer or association football has one of the highest number of players and spectators in the world. Because of the diversity, we need to establish a common indicator to quantitatively evaluate soccer teams, regardless of the background of the constituent players. However, analysis of soccer data is usually difficult because the ball and players are moving continuously during a game and specific event data (e.g., scores and concedes) are likely to be limited. In particular, defensive tactics are considered difficult to evaluate because of the limited amount of available statistics, such as goals scored in the case of attacks (for overview, see a review of \cite{fujii2021data}). Although publishing of tracking technologies and videos \citep{scott2022soccertrack, Cioppa_2022_CVPR} have recently progressed, a large amount of accurate all-time tracking data for all players remain unpublished. Therefore, many evaluations of attacking players with the ball are performed from event data and the coordinates of the ball. For example, based on scoring prediction using ball location data, researchers evaluated plays such as using the expected values of goals scored and conceded (\citet{rudd2011framework, Decroos19,Liu2020}, and others are reviewed in \citet{vanroy2020valuing}). Although there have been some studies on off-ball player evaluation \citep{spearman2018beyond,Fernandez18,teranishi2022evaluation} which require positional data for all players, these are usually private (i.e., not published). Recently, the location of all players in broadcast video frames of every event have been published by StatsBomb Inc. (a UK football analytics and data visualization company) in soccer games of men's Euro 2020 and women's Euro 2022 competitions. For example, using this data including a pass receiver and defenders, researchers developed a machine learning method for prediction and valuation of penetrative passes \citep{rahimian2022lets}. However, there have been a few studies on defensive play evaluation. For example, using only ball location and event data, researchers have evaluated interceptions \citep{Piersma20} and the effectiveness of defensive plays by the expected value of a goal-scoring opportunity conceded \citep{Robberechts19}. For team evaluation using all players' tracking data, \citet{toda2022evaluation} have previously proposed a method to evaluate team defense by predicting ball gain (or recovery) and being attacked. However, this approach has the following limitations: (1) they did not consider the importance weight of the ball gain and being attacked, (2) they assumed the perfect observation of all 22 players and the effect of the lack of the observation is unknown, and (3) they only investigated in a domestic male professional league and did not fully investigated in terms of the diversity (e.g., nations and sexes). The purpose of this study is to address these issues. First, we propose a generalized valuation method of defensive teams by score-scaling the predicted probabilities of the events called \textit{Generalized Valuing Defense by Estimating Probabilities} (GVDEP). Then, using the location of all players in the broadcast video frames of every event in football games of men's Euro 2020 and women's Euro 2022, we investigated the effect on the prediction models of reducing the number of players and validated our approach to analyze the games of each team. The main contributions of this work are as follows: (i) We generalize the previous team defense valuation method \citep{toda2022evaluation} by weighting the probabilities of ball gain and being attacked based on the prediction of scoring and conceding \citep{Decroos19}. (ii) We verified the classifiers for prediction of ball gain, being attacked, scoring, and conceding when we reduced the number of players. (iii) We verified our method using a diverse soccer dataset in terms of nations and both sexes (open data of men's Euro 2020 and women's Euro 2022). Our approach can encourage even non-professionals watching broadcast videos to discuss the valuation of defensive teams at a particular event. Furthermore, our approach can provide quantitative and useful indicator for valuing and scouting our and opponent teams in defense. \section*{Materials and methods} \subsection*{Dataset} Since having the location data of players of teams in diverse nations, we chose the open-source data with all games of UEFA men's Euro 2020 (51 games) and women's Euro 2022 (31 games) provided from StatsBomb Inc. (UK). This datasets include event data (i.e., labels of actions, such as passing and shooting, and the simultaneous xy coordinates of the ball) and location data (i.e., xy coordinates) of all players ``in the frame of broadcast video'' around every event. Many scenes of a soccer broadcast video do not show all 22 players; therefore, as shown in Figure \ref{fig:example}, note that some of datasets do not include all 22 players' information. In the case of this dataset, the min, the first quartile, the median, the third quartile and the max are 0.0, 11.0, 15.0, 18.0 and 22.0 respectively about the number of players in a scene (for the histogram, see Supplementary Figure 5). Also, we used ``event'' as the above meaning based on the previous studies of \cite{Decroos19,pappalardo2019playerank}. Data acquisition was based on the contract between the competitions (UEFA Euro2020/2022) and the company (StatsBomb, Inc), not between the players and us. The Statsbomb dataset has been widely used in academic studies (e.g., \cite{gregory2022influence, Decroos19}). It would be guaranteed that the use of the data would not infringe on any rights of players or teams. The dataset is available at \url{https://github.com/statsbomb/open-data}. In all 51 games of UEFA EURO 2020, 142 of the 1262 shots were on goal, 7,027 effective attacks were played, and 2,463 ball gains were realized. Similarly, in all 31 games of UEFA EURO 2022, 95 of the 880 shots were on goal, 4,717 effective attacks were played, and 1,839 ball gains were realized. An effective attack is defined as an event that ends in shooting or penetrating into the penalty area. Also, ball gains are defined as a change in the attacking by some factors such as tackle, interception and offside. In this study, an effective attack is defined as \textit{attacked} from viewpoints of defenders. It should be noted that we labeled for each event (not an attack segment) whether positive/negative attacked/ball gains occurred at an event. \subsection*{Proposed Method} The flow diagram of our method is shown in Figure \ref{fig:diagram}. We perform data pre-processing and feature creation, training classifiers, prediction with the classifiers, and computing GVDEP as well as Valuing Defense by Estmating Probabilities (VDEP) proposed by \cite{toda2022evaluation}. Here we describe the core idea of our proposed method and the details are described in the following subsection. Suppose that the state of the game is given by $S = [s_1,\ldots,s_N]$ in chronological order. We consider $s_i$, like \cite{toda2022evaluation}, as the $i$th state. This includes features about the event or action $a_i$, the on-ball features $b_i$ and the off-ball features $o_i$ not close to the ball at the time of the action such as attacking/defending players' coordinates at a state. Note that we show the result about the validation of the number of the players in the following section. To evaluate all state transition for defensive and offensive actions in this study from the defender's perspective, the following time index $i$ is used as the $i$th \textit{event}. As with \cite{toda2022evaluation}, we define the probability of future ball gain $P_{gains}(s_i)$ and attacked $P_{attacked}(s_i)$ in the game state $s_i$ of a certain interval at an $i$th event. These probabilities were given by the classifier trained. We focus on whether changing the game state affects the team defense like the work of \cite{Decroos19}, which directly used these probabilities for state evaluation. For this reason, first we use the difference between the probabilities at a state $s_i$ and ones at previous state $s_{i-1}$ as follows: \begin{align} \label{eq:deltas} \Delta P_{gains}(s_i, x) &= P_{gains}(s_i, x) - P_{gains}(s_{i-1}, x), \\ \Delta P_{attacked}(s_i, x) &= P_{attacked}(s_i, x) - P_{attacked}(s_{i-1}, x), \end{align} where $x$ is which team is defending. The most critical problem in the previous was constant parameter to balance the probabilities of ball gain and being attacked, which was defined by the frequencies of both. To appropriately weight both variables in a score scale, the value of defense in the proposed method $V_{gvdep}$ is weighted with VAEP of \cite{Decroos19} at a time when ball gains or attacked as follows: \begin{align} weight\_gains &= \sum_{j \in Ev_{gains}}\frac{sign(Teams_{j})V_{vaep}(s_j)}{|Ev_{gains}|}, \\ weight\_attacked &= \sum_{j \in Ev_{attacked}}\frac{sign(Teams_{j})V_{vaep}(s_j)}{|Ev_{attacked}|}, \end{align} \begin{equation} \label{eq:vGVDEP} V_{gvdep}(s_i) = weight\_gains \times \Delta P_{gains}(s_i) - weight\_attacked \times \Delta P_{attacked}(s_i), \end{equation} where each of $Ev_{gains}$ and $Ev_{attacked}$ (also $|Ev_{gains}|$ and $|Ev_{attacked}|$) is the number of ball gains/attacked in all games. $sign(Teams_{j})$ means the sign of which team performed ball gain or attacked at a $j$th event. Where the first term in equation (\ref{eq:vGVDEP}) can be regarded as the difference in the probabilities of scoring or conceding after gaining the ball, and the second term as that after being attacked. Furthermore, to use the framework of \cite{Decroos19} into our formula, we calculated VAEP as follows: \begin{align} \label{eq:VAEP} \Delta P_{scores}(s_i) &= P_{scores}(s_i) - P_{scores}(s_{i-1}) \\ \Delta P_{concedes}(s_i) &= P_{concedes}(s_i) - P_{concedes}(s_{i-1}) \\ V_{vaep}(s_i) &= \Delta P_{scores}(s_i) - \Delta P_{concedes}(s_i) \end{align} To evaluate the team defense, we define $R_{gvdep}(p)$ as the evaluation value per game for team $p$ as follows: \begin{equation} \label{eq:rGVDEP} R_{gvdep}(p) = \frac{1}{M}\Sigma_{s_i\in \bm{S}_{M}^p} V_{gvdep}(s_i), \end{equation} where $M$ is the number of events for team $p$ defending in all matches, and $\bm{S}_M^p$ is the set of states $S$ of team $p$ defending in all matches. \subsection*{Feature Creation} In this subsection, we describe pre-processing and feature creation. As mentioned above, we used the features at a state $s = [a,b,o]$ including the $i$th and $i-1$th events for the same reasons as in the framework of \cite{toda2022evaluation}. Using these features, we trained four classifiers and we predicted $P_{scores}(s_i)$ and $P_{concedes}(s_i)$, $P_{gains}(s_i)$ and $P_{attacked}(s_i)$. Each of the classifiers estimates whether a state $s_i$ is labeled positive ($= 1$) or negative ($= 0$). Positive label was assigned if a attacking team in a state $s_i$ scored or conceded in the subsequent $k'$ events, and the latter was labeled positive if the defending team in the state $s_i$ gained the ball or attacked in the subsequent $k$ events. An illustrative example is shown Figure \ref{fig:example}. In both classifications, $k$ is a parameter freely determined by the user. The smaller $k$ is, the shorter term the prediction is and the smaller positive labels, so we can predict the probabilities reliably and obtain unambiguous interpretation. On the other hand, the larger $k$ is, the longer term the prediction is and the larger positive labels, so we can predict these considering many factors but obtain ambiguous interpretation. Since it is intrinsically difficult to solve this trade-off, we set $k=5$ and $k'=10$ like the previous studies of \cite{toda2022evaluation} and \cite{Decroos19} respectively. Next, we describe the feature vector creation. We first created three types of features; events, on-ball features and off-ball features. The events' types defined by VAEP \citep{Decroos19} had 24 events: id, pass, cross, throw in, freekick crossed, freekick short, corner crossed, corner short, take on, foul, tackle, interception, shot, shot penalty, shot freekick, keeper save, keeper claim, keeper punch, keeper pick up, clearance, bad touch, non action, dribble, goalkick (for details, see the paper \citep{Decroos19}). Yet, as shown in Figure \ref{fig:diagram}, this type was used in the classifiers for VAEP but not used for VDEP due to the difference of each concept. Whereas we predicts future actions in VAEP, the prediction in VDEP was whether a team was able to gain the ball or was penetrated into the penalty area. Hence, feature leakage (to know ground truth for prediction in advance) might occur if we used data including the types of events into the classifiers for VDEP. On the other hand, we utilized on-ball features and off-ball features in the classifiers for VAEP and VDEP. We first created the 21 dimensional features of $i$th and $i-1$th events about on-ball features. We used the body part such as foot, head, other and head/other when the player acts at $i$th event (4 dim.), whether a yellow/red card is given at the event (2 dim.), the scoreboard and the goal difference at the event (3 dim.), the xy coordinates of the ball at the start and the end (4 dim.), the displacements of the ball from the start to the end (x, y, the Euclidean norm: 3 dim.), the distance and angle between the ball and the goal at the start and the end (4 dim.), whether a team possessing the ball is a visitor or not (1 dim.) In addition, we considered the off-ball feature $o_i$ at the time that the event occurred in the state $s_i$. It should be noted that data utilized in this study was provided from broadcast videos shown in Figure \ref{fig:example}. Thus, we first verified impacts on the prediction score by changing offensive/defensive players in the order of closest to the ball. This verification is described in the subsection \textit{Evaluation and Statistical Analysis}. Moreover, we used the x and y coordinates of positions of all players (22 players xy coordinates) Furthermore, we calculated the distance and the angle of each player from the ball and sorted the features in the order of closest to the ball. Therefore, the feature for VAEP has 133 dimensions in total ($24 + 21 + 22\times 4$) and for VDEP has 109 ($21 + 22 \times 4$). As with the previous studies by \cite{Decroos19} and \cite{toda2022evaluation}, we used $k'=10$ when calculating $P_{scores}$ and $P_{concedes}$ and $k=5$ when calculating $P_{gains}$ and $P_{attacked}$. Also, 12,262 out of the 112,590 events generated by all teams had no data about all players' xy coordinates. Thus, we removed its data and utilized 100,328 events for classifiers. Out of the number of events, defined by $k$ above, there were 1,101 positive cases of scores, 186 concedes, 3,723 ball gain and 11,895 attacked. In UEFA EURO 2022, we removed 28,218 out of 61,433 events and utilized 33,215 for the classifiers, including 454 positive cases of scores, 132 concedes, 2,551 ball gain and 5,401 attacked. As with the previous studies, we can interpret these as goals scored and conceded are rare events compared to ball gains and attacked. Therefore, to verify the goals scored and conceded in this study with a smaller dataset (compared with the larger dataset in the previous work \cite{Decroos19}), we used a indicator and described it in the subsection \textit{Evaluation and Statistical Analysis}. \subsection*{Prediction Model Implementation} According to the previous frameworks of \cite{Decroos19} and \cite{toda2022evaluation}, we adopted XGBoost (eXtreme Gradient Boosting) proposed by \cite{Chen16} as the classifiers to predict scores, concedes, ball gains and attacked. Gradient tree boosting has been a popular technique since \cite{friedman2001greedy} proposed. This technique is a prediction algorithm that uses a previous information to optimise the splitting of explanatory variables and construct a new decision tree at each stage. Also, this performs well on a variety of areas, and in a faster and more scalable way. Moreover, even if the prediction model itself do not consider the time series structure, the previous studies of \cite{Decroos19} proposed the way for prediction models to reflect the history of the input ($i$th and $i-1$th events) and that of the output (the subsequent $k$ events) . In calculating VDEP and VAEP values, we verified the classifiers using a 10-fold cross-validation procedure. Here we define the terms of training, validation, and test (datasets). We train the machine learning model using the training dataset, validate the model performance using the validation dataset (sometimes for determining some hyper-parameters), and finally test the model performance using the test dataset. Note that we remove data not including the x and y coordinates of positions of players from these dataset. Such procedure benefits us to verify a model which can test the performance using a new test data (not used during training). In our case, the validation data was not used and hyperparameters are predetermined as default in Python library ``xgboost'' (version 1.4.1). Our all computations were performed using Python (version 3.7.13). In particular, the code we used \ifarxiv is provided at \url{https://github.com/Rikuhei-ynwa/Generalized-VDEP}. \else will be shared if our manuscript is ready for publication. \fi \subsection*{Evaluation and Statistical Analysis} \label{subsec:eval} First, we verified how many attacker/defender we need to predict the above probabilities. To this end, we defined the number of nearest attackers/defenders to the ball as \textit{n\_nearest}. This value was $0 \leq n\_nearest \leq 11$ and by each of this value, we predicted the probabilities such as $P_{gains}$ and $P_{attacked}$. Here we used a 10-fold cross-validation procedure. Again, the datasets have 51 games in UEFA EURO 2020 and 31 in UEFA EURO 2022. Hence, in the step of verification of classifiers, 9 out of 10 times we repeated the learning of classifiers using the data of 46 games and a prediction using the data of 5 games, and finally used the data of 45 games into the learning of classifiers and 6 games into the prediction (i.e., data of all 51 games were finally predicted and evaluated) to analyze all games in UEFA EURO 2020. Similarly, in UEFA EURO 2022, first 9 times the data of 26 games was used into the learning of classifiers and 5 games into the prediction repeatedly, and last time we change 26 to 25 at the classifier stage and 5 to 6 at the prediction stage. To validate the classifiers for the predictions, we used the F1 score as with the previous study of \cite{toda2022evaluation}. Data in this study was much more negative than positive cases like ones used in the previous studies. Then, the (intuitive) accuracy score may not be better when there are extremely more negative than positive cases, as in this and previous studies. For example, the accuracy of attacked in VDEP and scored in VAEP will be $1-11,895/100,328\approx0.881$ and $1-186/100,328\approx0.998$ when all negative cases are predicted. For this reason, in this study, we used the F1 score to evaluate whether the true positives can be classified without considering the true negatives. The F1 score is the harmonic mean of Precision and Recall as follows. \begin{equation*} \text{F1score} = (2 \times \text{Precision} \times \text{Recall}) / (\text{Precision} + \text{Recall}) \end{equation*} where the Recall and the Precision is defined as the true-positive rate and the ratio of the sum of true positives and true negatives to false positives respectively. In this index, only true positives are evaluated, not true negatives. Therefore, we saw how changing $n\_nearest$ affected values of F1 score for $P_{scores}(s_i)$, $P_{concedes}(s_i)$, $P_{gains}(s_i)$ and $P_{attacked}(s_i)$. For the evaluation of defense using GVDEP calculated by the above probabilities, we present examples to quantitatively and qualitatively evaluate games in a competition. To calculate GVDEP values, the classifiers first learned with all 51 games (UEFA EURO 2020) or 31 games (UEFA EURO 2022) and tested themselves. Note that, we performed the predictions and analyzed 36 group stage matches and 8 matches of the first round in UEFA EURO 2022 to analyze the best 16 teams with the same number of games. Similarly, in UEFA EURO 2022, 24 group stage games and 4 quarter-final games were analyzed for the same reasons. The GVDEP is then calculated using the probabilities obtained by the tests and equations \ref{eq:deltas} to \ref{eq:rGVDEP}. For correlation analysis between variables, we computed Pearson's correlation coefficient $r$. $p<0.05$ was considered significant for all statistical analysis. all statistical analyses were performed using SciPy (version 1.5.4) in the Python library \section*{Results} \label{sec:results} \subsection*{Verification of classifiers} To verify the GVDEP method, we first investigated the prediction performances by changes in the number of people around the ball ($n\_nearest$). As mentioned earlier, GVDEP has four classifiers of gains, attacked, scores and concedes. The latter two probabilities are the same as the previous work by \cite{Decroos19}. These classifiers predict probabilities of gains ($P_{gains}$), attacked ($P_{attacked}$), scores ($P_{scores}$) and concedes ($P_{concedes}$). In Figure \ref{fig:f1scores_euro2020}, we showed that changes in $n\_nearest$ influenced F1-scores of these predictions. For the prediction probability of gains, no improvement in F1-socres was observed after $n\_nearest$ is 3 or 4. In contrast, for the predictions of scores, concedes, and attacked, F1-scores did not increase with all players' location information. The results of UEFA EURO 2022 were similar and we show the results in Supplementary Figure 6. \subsection*{Valuation of team defenses} Next, we show examples of team defense valuations in UEFA EURO 2020 in Figure \ref{fig:valuations2020} (for UEFA EURO 2022, see Supplementary Figure 7). For the results of the previous VDEP definition, which is formulated by $P_{gains}$ and $P_{attacked}$, see Supplementary Figure 8. In the figures, the mean values of $\Delta P_{gains}$ and $\Delta P_{attacked}$ are defined as $gain\_value$ and $attacked\_value$ respectively, and GVDEP values are as \textit{g\_vdep\_value}. First, we characterize and evaluate team defenses using \textit{gain\_value} and the average of \textit{attacked\_value} in Figure \ref{fig:valuations2020}a. Overall, the trade-off between the averaged gain and attacked values were confirmed ($r_{14} = -0.757, p = 0.001 < 0.05$). That is, the teams with more gain values tended to more attacked (less value), and vice versa. This tendency was similar to the results of the previous work (the older definition and Japanese professional soccer league) by \cite{toda2022evaluation}. For specific teams valuation, for example, Italy that won UEFA EURO 2020 was able to keep their opponents (Turkey, Switzerland, Wales and Austria) from penetrating into the penalty area, suggesting that these would be connected with a less number of concedes (see also Figure \ref{fig:valuations2020}b). Also, both values of England that is one of the finalists was over each average values. In addition, Spain and Denmark that advanced to Semi-Final were both higher \textit{attacked\_value} but lower \textit{gain\_value}, suggesting that they may keep good defense where their opposite teams was unable to attack effectively. Second, we investigate the relationship between GVDEP values and actual concedes in Figure \ref{fig:valuations2020}b. There was no significant correlation between them ($r_{14} = -0.265, p = 0.321$). Please note that the concedes were integers and had small variations, then we proposed this approach to value the defense process (not the outcome). Thus, the difference between them is important rather than the correlation itself. For example, Italy and England were not conceded and were high GVDEP values, so we can guess they continued to protect themselves against concedes. Third, it should be noted that there was a strong correlation between GVDEP and attacked values ($r_{14} = 0.993, p = 3.161\times 10^{-14}$) in Figure 3c. This is probably because the attacked values and the absolute value of the coefficients ($|weight\_attacked| = 0.021$) were larger than those of gain values ($|weight\_gains| = 0.011$) in UEFA EURO 2020. Thus, we should be careful of the results interpretation in GVDEP values, which is similar to those of the attacked values. Specifically, the finalists tried to preserve the state where it was difficult for their opposite teams to attack effectively, so their GVDEP values are high. Also, Spain and Denmark were at their average levels of gain and attacked values. Lastly, for this reason, we investigated the relationship between the gain values and the concedes in Figure 3d. There was also no significant correlation between them ($r_{14} = 0.389, p = 0.136$). Yet, roughly speaking, the figure shows if a team tries to bring the state where they can gain the ball in the future, they will take more risks. For instance, the number of Italy's concedes and their \textit{gain\_value} were less as with the tendency. In short, they did not aim to gain the ball when defending. However, England's \textit{gain\_value} was over the mean value and their concedes were less than any other team, suggesting they tried such as tackles and interceptions and succeeded. In addition, \textit{gain\_value} of Spain and Denmark were low but they allowed more concedes than Italy. \section*{Discussion} In this study, we proposed a generalized valuation method of defensive teams by score-scaling the predicted probabilities of the events. First, we verified the existing probabilities based on the prediction performance. Second, we quantitatively analyzed the games in UEFA EURO 2020 using the defensive evaluations of the proposed. Finally, we discuss the limitations of the proposed methods and future perspectives. To calculate the probabilities of ball gain and being attacked, the previous study \cite{toda2022evaluation} used data including all player's features. However, the type of this data is not always available because this is often private or expensive. Hence, we used open-source data including all player's location data in the video frame, and we verified the existing classifiers' performances. This result suggests that although features of not only the ball but the players are important to improve the classifier in ball gain, we do not necessary need all players locations. Our results suggest that our approach can evaluate defensive performances only with the open-source data. Considering the team evaluations, according to the correlation analysis, we found the trade-off between the tendencies of gaining the ball and of not being effectively attacked. As a finalist team, England was able to maintain a good balance between both the ball gain and not being attacked at a high level, and they did not allow to concede until the analyzed Round of 16. A champion team, Italy's $attacked\_value$ is the highest in teams that went to the knockout phase. We also found that they did not concede except for the corner kick in the analyzed games. However, Belgium and Czech Republic were evaluated with lower values in our method in spite of the low number of concedes. This may be because their keepers made efforts to prevent concedes. Indeed, out of the four matches played up to the round of 16, Belgium and Czech Republic have kept three or two clean sheets (no concedes), respectively. In this study, we consider the evaluation of team defense, not the contributions of the keepers, thus we may acquire such results. Finally, we introduce the limitations of this study and future perspectives. The first is about the use of data. Since data used in this study does not necessarily include all players' features, results about verification of classifiers do not perfectly describe the performance. Another issue is that our formula is too affected by $attacked\_value$. As shown in Figure \ref{fig:valuations2020}c, we found a too high positive correlation between $attacked\_value$ and GVDEP values. It is true that not allowing the opponent to attack can be seen as reducing the probability of conceding a goal, but it is difficult to assess the defence on this basis alone. The last is the definition of off-ball features and the modeling. In this study, these include the x and y coordinates of positions of all players (22 players xy coordinates) and the distance and the angle of each player from the ball, sorted in the order of closest to the ball, at the analysis stage. For future work, we can consider more specific features of the off-ball defense (e.g., a defense line) or other nonlinear modeling such as using neural networks. \ifarxiv \section*{Acknowledgments} This work was supported by JSPS KAKENHI (Grant Numbers 20H04075 and 21H05300), JST START University Ecosystem Promotion Type (Grant Number JPMJST2183), and JST Presto (Grant Number JPMJPR20CA). \fi \bibliographystyle{apa} \section*{} \vspace{20mm} \Large{\bf{Supplementary materials for: \\ \\ \noindent Location analysis of players in men's Euro 2020 and women's Euro 2022 using generalized valuation of defense by estimating probabilities} } \vspace{10mm} \ifarxiv \noindent\large{Rikuhei Umemoto and Keisuke Fujii} \else \noindent\large{Anonymous} \fi \fi \newpage \begin{figure}[h] \begin{center} \vspace{30pt} \includegraphics[scale=0.7]{fig/histogram.png} \caption{ {\bf The number of players in broadcast videos of Euro 2020 and Euro 2022.} This figure shows how many players there were in broadcast videos of (a) Euro 2020 and (b) Euro 2022. There were no players in some scenes. In this study, to calculate the probabilities of the predictions, we removed the data (12,262 in Euro 2020 and 28,218 in Euro 2022) that did not have any player in a scene. } \label{fig:environment} \end{center} \end{figure} \newpage \begin{figure}[h] \centering \includegraphics[scale=0.8]{fig/f1scores_euro2022.png} \caption{ {\bf F1-scores of the predictions for n\_nearest in Euro 2022.} Box plots represent F1-scores of the predictions of (a) scores, (b) concedes, (c) gains and (d) attacked for n\_nearest in Euro 2022. As with Figure \ref{fig:f1scores_euro2020}, a green triangle means an average of F1-score at each n\_nearest and x is an outlier at a n\_nearest. In particular, (c), F1-scores stop to increase when n\_nearest is 3 or 4 like Euro 2020. } \label{fig:f1scores_euro2022} \end{figure} \newpage \begin{figure} \centering \includegraphics[scale=0.65]{fig/valuations_2022.png} \caption{ {\bf Defensive evaluations of teams in multiple games of Euro 2022.} Plots represent the relationship between (a) $gain\_value$ and $attacked\_value$ ($r_{14} = 0.156, p = 0.712$), (b) concedes and GVDEP values ($r_{14} = -0.557, p = 0.151$), (c) $attacked\_value$ and GVDEP values ($r_{14} = 0.999, p = 3.39\times 10^{-9}$), (d) concedes and $gain\_value$ ($r_{14} = -0.116, p = 0.784$) regarding last 8 teams of Euro 2022. The details about each axis were already explained in Figure \ref{fig:valuations2020}. As with England that was one of the finalists of Euro 2020, Germany Women's that was one of the finalists of Euro 2022 took high values for $gain\_value$ and $attacked\_value$. Also, similar to Italy that was another finalist of Euro 2020, another finalist of Euro 2022, England Women's, was able to keep their opponents from penetrating into the penalty area. These would be connected with a less number of concedes (see also Figure \ref{fig:valuations2022}b). } \label{fig:valuations2022} \end{figure} \newpage \begin{figure}[h] \begin{center} \vspace{30pt} \includegraphics[scale=0.65]{fig/valuations_2020_toda.png} \caption{ {\bf Defensive evaluations of teams in multiple games of Euro 2020 in the definition of \cite{toda2022evaluation}.} Plots representing the relationship between (a) $P_{gains}$ and $P_{attacked}$ ($r_{14} = -0.135, p = 0.617$), (b) concedes and VDEP values ($r_{14} = -0.110, p = 0.686$), (c) $P_{attacked}$ and VDEP values ($r_{14} = -0.669, p = 0.00463$), (d) concedes and $P_{gains}$ ($r_{14} = 0.0333, p = 0.903$) regarding last 16 teams of Euro 2020. he grey lines are the averaged values of each graph's axes. For $P_{gains}$, the more points plotted to the right, the more likely a team could be to gain the ball. Yet, for $P_{attacked}$, the more points plotted to the left or the below, the less likely an opposite team could be to attack effectively. Otherwise, the axes are the same meanings as Figure \ref{fig:valuations2020}. VDEP value that was proposed by \cite{toda2022evaluation} is calculated as follows: $P_{gains} - C \times P_{attacked}$. Since $C$ value is based on the number of occurrences of ball gains and being attacked, this varies with the data ($C = 0.313$ in this study). } \label{fig:res_testdata} \end{center} \end{figure} \section*{} \vspace{20mm} \Large{\bf{Supplementary materials for: \\ \\ \noindent Location analysis of players in men's Euro 2020 and women's Euro 2022 using generalized valuation of defense by estimating probabilities} } \vspace{10mm} \ifarxiv \noindent\large{Rikuhei Umemoto and Keisuke Fujii} \else \noindent\large{Anonymous} \fi \fi \newpage \begin{figure}[h] \begin{center} \vspace{30pt} \includegraphics[scale=0.7]{fig/histogram.png} \caption{ {\bf The number of players in broadcast videos of Euro 2020 and Euro 2022.} This figure shows how many players there were in broadcast videos of (a) Euro 2020 and (b) Euro 2022. There were no players in some scenes. In this study, to calculate the probabilities of the predictions, we removed the data (12,262 in Euro 2020 and 28,218 in Euro 2022) that did not have any player in a scene. } \label{fig:environment} \end{center} \end{figure} \newpage \begin{figure}[h] \centering \includegraphics[scale=0.8]{fig/f1scores_euro2022.png} \caption{ {\bf F1-scores of the predictions for n\_nearest in Euro 2022.} Box plots represent F1-scores of the predictions of (a) scores, (b) concedes, (c) gains and (d) attacked for n\_nearest in Euro 2022. As with Figure \ref{fig:f1scores_euro2020}, a green triangle means an average of F1-score at each n\_nearest and x is an outlier at a n\_nearest. In particular, (c), F1-scores stop to increase when n\_nearest is 3 or 4 like Euro 2020. } \label{fig:f1scores_euro2022} \end{figure} \newpage \begin{figure} \centering \includegraphics[scale=0.65]{fig/valuations_2022.png} \caption{ {\bf Defensive evaluations of teams in multiple games of Euro 2022.} Plots represent the relationship between (a) $gain\_value$ and $attacked\_value$ ($r_{14} = 0.156, p = 0.712$), (b) concedes and GVDEP values ($r_{14} = -0.557, p = 0.151$), (c) $attacked\_value$ and GVDEP values ($r_{14} = 0.999, p = 3.39\times 10^{-9}$), (d) concedes and $gain\_value$ ($r_{14} = -0.116, p = 0.784$) regarding last 8 teams of Euro 2022. The details about each axis were already explained in Figure \ref{fig:valuations2020}. As with England that was one of the finalists of Euro 2020, Germany Women's that was one of the finalists of Euro 2022 took high values for $gain\_value$ and $attacked\_value$. Also, similar to Italy that was another finalist of Euro 2020, another finalist of Euro 2022, England Women's, was able to keep their opponents from penetrating into the penalty area. These would be connected with a less number of concedes (see also Figure \ref{fig:valuations2022}b). } \label{fig:valuations2022} \end{figure} \newpage \begin{figure}[h] \begin{center} \vspace{30pt} \includegraphics[scale=0.65]{fig/valuations_2020_toda.png} \caption{ {\bf Defensive evaluations of teams in multiple games of Euro 2020 in the definition of \cite{toda2022evaluation}.} Plots representing the relationship between (a) $P_{gains}$ and $P_{attacked}$ ($r_{14} = -0.135, p = 0.617$), (b) concedes and VDEP values ($r_{14} = -0.110, p = 0.686$), (c) $P_{attacked}$ and VDEP values ($r_{14} = -0.669, p = 0.00463$), (d) concedes and $P_{gains}$ ($r_{14} = 0.0333, p = 0.903$) regarding last 16 teams of Euro 2020. he grey lines are the averaged values of each graph's axes. For $P_{gains}$, the more points plotted to the right, the more likely a team could be to gain the ball. Yet, for $P_{attacked}$, the more points plotted to the left or the below, the less likely an opposite team could be to attack effectively. Otherwise, the axes are the same meanings as Figure \ref{fig:valuations2020}. VDEP value that was proposed by \cite{toda2022evaluation} is calculated as follows: $P_{gains} - C \times P_{attacked}$. Since $C$ value is based on the number of occurrences of ball gains and being attacked, this varies with the data ($C = 0.313$ in this study). } \label{fig:res_testdata} \end{center} \end{figure} \section{Introduction} Soccer or association football has one of the highest number of players and spectators in the world. Because of the diversity, we need to establish a common indicator to quantitatively evaluate soccer teams, regardless of the background of the constituent players. However, analysis of soccer data is usually difficult because the ball and players are moving continuously during a game and specific event data (e.g., scores and concedes) are likely to be limited. In particular, defensive tactics are considered difficult to evaluate because of the limited amount of available statistics, such as goals scored in the case of attacks (for overview, see a review of \cite{fujii2021data}). Although publishing of tracking technologies and videos \citep{scott2022soccertrack, Cioppa_2022_CVPR} have recently progressed, a large amount of accurate all-time tracking data for all players remain unpublished. Therefore, many evaluations of attacking players with the ball are performed from event data and the coordinates of the ball. For example, based on scoring prediction using ball location data, researchers evaluated plays such as using the expected values of goals scored and conceded (\citet{rudd2011framework, Decroos19,Liu2020}, and others are reviewed in \citet{vanroy2020valuing}). Although there have been some studies on off-ball player evaluation \citep{spearman2018beyond,Fernandez18,teranishi2022evaluation} which require positional data for all players, these are usually private (i.e., not published). Recently, the location of all players in broadcast video frames of every event have been published by StatsBomb Inc. (a UK football analytics and data visualization company) in soccer games of men's Euro 2020 and women's Euro 2022 competitions. For example, using this data including a pass receiver and defenders, researchers developed a machine learning method for prediction and valuation of penetrative passes \citep{rahimian2022lets}. However, there have been a few studies on defensive play evaluation. For example, using only ball location and event data, researchers have evaluated interceptions \citep{Piersma20} and the effectiveness of defensive plays by the expected value of a goal-scoring opportunity conceded \citep{Robberechts19}. For team evaluation using all players' tracking data, \citet{toda2022evaluation} have previously proposed a method to evaluate team defense by predicting ball gain (or recovery) and being attacked. However, this approach has the following limitations: (1) they did not consider the importance weight of the ball gain and being attacked, (2) they assumed the perfect observation of all 22 players and the effect of the lack of the observation is unknown, and (3) they only investigated in a domestic male professional league and did not fully investigated in terms of the diversity (e.g., nations and sexes). The purpose of this study is to address these issues. First, we propose a generalized valuation method of defensive teams by score-scaling the predicted probabilities of the events called \textit{Generalized Valuing Defense by Estimating Probabilities} (GVDEP). Then, using the location of all players in the broadcast video frames of every event in football games of men's Euro 2020 and women's Euro 2022, we investigated the effect on the prediction models of reducing the number of players and validated our approach to analyze the games of each team. The main contributions of this work are as follows: (i) We generalize the previous team defense valuation method \citep{toda2022evaluation} by weighting the probabilities of ball gain and being attacked based on the prediction of scoring and conceding \citep{Decroos19}. (ii) We verified the classifiers for prediction of ball gain, being attacked, scoring, and conceding when we reduced the number of players. (iii) We verified our method using a diverse soccer dataset in terms of nations and both sexes (open data of men's Euro 2020 and women's Euro 2022). Our approach can encourage even non-professionals watching broadcast videos to discuss the valuation of defensive teams at a particular event. Furthermore, our approach can provide quantitative and useful indicator for valuing and scouting our and opponent teams in defense. \section*{Materials and methods} \subsection*{Dataset} Since having the location data of players of teams in diverse nations, we chose the open-source data with all games of UEFA men's Euro 2020 (51 games) and women's Euro 2022 (31 games) provided from StatsBomb Inc. (UK). This datasets include event data (i.e., labels of actions, such as passing and shooting, and the simultaneous xy coordinates of the ball) and location data (i.e., xy coordinates) of all players ``in the frame of broadcast video'' around every event. Many scenes of a soccer broadcast video do not show all 22 players; therefore, as shown in Figure \ref{fig:example}, note that some of datasets do not include all 22 players' information. In the case of this dataset, the min, the first quartile, the median, the third quartile and the max are 0.0, 11.0, 15.0, 18.0 and 22.0 respectively about the number of players in a scene (for the histogram, see Supplementary Figure 5). Also, we used ``event'' as the above meaning based on the previous studies of \cite{Decroos19,pappalardo2019playerank}. Data acquisition was based on the contract between the competitions (UEFA Euro2020/2022) and the company (StatsBomb, Inc), not between the players and us. The Statsbomb dataset has been widely used in academic studies (e.g., \cite{gregory2022influence, Decroos19}). It would be guaranteed that the use of the data would not infringe on any rights of players or teams. The dataset is available at \url{https://github.com/statsbomb/open-data}. In all 51 games of UEFA EURO 2020, 142 of the 1262 shots were on goal, 7,027 effective attacks were played, and 2,463 ball gains were realized. Similarly, in all 31 games of UEFA EURO 2022, 95 of the 880 shots were on goal, 4,717 effective attacks were played, and 1,839 ball gains were realized. An effective attack is defined as an event that ends in shooting or penetrating into the penalty area. Also, ball gains are defined as a change in the attacking by some factors such as tackle, interception and offside. In this study, an effective attack is defined as \textit{attacked} from viewpoints of defenders. It should be noted that we labeled for each event (not an attack segment) whether positive/negative attacked/ball gains occurred at an event. \subsection*{Proposed Method} The flow diagram of our method is shown in Figure \ref{fig:diagram}. We perform data pre-processing and feature creation, training classifiers, prediction with the classifiers, and computing GVDEP as well as Valuing Defense by Estmating Probabilities (VDEP) proposed by \cite{toda2022evaluation}. Here we describe the core idea of our proposed method and the details are described in the following subsection. Suppose that the state of the game is given by $S = [s_1,\ldots,s_N]$ in chronological order. We consider $s_i$, like \cite{toda2022evaluation}, as the $i$th state. This includes features about the event or action $a_i$, the on-ball features $b_i$ and the off-ball features $o_i$ not close to the ball at the time of the action such as attacking/defending players' coordinates at a state. Note that we show the result about the validation of the number of the players in the following section. To evaluate all state transition for defensive and offensive actions in this study from the defender's perspective, the following time index $i$ is used as the $i$th \textit{event}. As with \cite{toda2022evaluation}, we define the probability of future ball gain $P_{gains}(s_i)$ and attacked $P_{attacked}(s_i)$ in the game state $s_i$ of a certain interval at an $i$th event. These probabilities were given by the classifier trained. We focus on whether changing the game state affects the team defense like the work of \cite{Decroos19}, which directly used these probabilities for state evaluation. For this reason, first we use the difference between the probabilities at a state $s_i$ and ones at previous state $s_{i-1}$ as follows: \begin{align} \label{eq:deltas} \Delta P_{gains}(s_i, x) &= P_{gains}(s_i, x) - P_{gains}(s_{i-1}, x), \\ \Delta P_{attacked}(s_i, x) &= P_{attacked}(s_i, x) - P_{attacked}(s_{i-1}, x), \end{align} where $x$ is which team is defending. The most critical problem in the previous was constant parameter to balance the probabilities of ball gain and being attacked, which was defined by the frequencies of both. To appropriately weight both variables in a score scale, the value of defense in the proposed method $V_{gvdep}$ is weighted with VAEP of \cite{Decroos19} at a time when ball gains or attacked as follows: \begin{align} weight\_gains &= \sum_{j \in Ev_{gains}}\frac{sign(Teams_{j})V_{vaep}(s_j)}{|Ev_{gains}|}, \\ weight\_attacked &= \sum_{j \in Ev_{attacked}}\frac{sign(Teams_{j})V_{vaep}(s_j)}{|Ev_{attacked}|}, \end{align} \begin{equation} \label{eq:vGVDEP} V_{gvdep}(s_i) = weight\_gains \times \Delta P_{gains}(s_i) - weight\_attacked \times \Delta P_{attacked}(s_i), \end{equation} where each of $Ev_{gains}$ and $Ev_{attacked}$ (also $|Ev_{gains}|$ and $|Ev_{attacked}|$) is the number of ball gains/attacked in all games. $sign(Teams_{j})$ means the sign of which team performed ball gain or attacked at a $j$th event. Where the first term in equation (\ref{eq:vGVDEP}) can be regarded as the difference in the probabilities of scoring or conceding after gaining the ball, and the second term as that after being attacked. Furthermore, to use the framework of \cite{Decroos19} into our formula, we calculated VAEP as follows: \begin{align} \label{eq:VAEP} \Delta P_{scores}(s_i) &= P_{scores}(s_i) - P_{scores}(s_{i-1}) \\ \Delta P_{concedes}(s_i) &= P_{concedes}(s_i) - P_{concedes}(s_{i-1}) \\ V_{vaep}(s_i) &= \Delta P_{scores}(s_i) - \Delta P_{concedes}(s_i) \end{align} To evaluate the team defense, we define $R_{gvdep}(p)$ as the evaluation value per game for team $p$ as follows: \begin{equation} \label{eq:rGVDEP} R_{gvdep}(p) = \frac{1}{M}\Sigma_{s_i\in \bm{S}_{M}^p} V_{gvdep}(s_i), \end{equation} where $M$ is the number of events for team $p$ defending in all matches, and $\bm{S}_M^p$ is the set of states $S$ of team $p$ defending in all matches. \subsection*{Feature Creation} In this subsection, we describe pre-processing and feature creation. As mentioned above, we used the features at a state $s = [a,b,o]$ including the $i$th and $i-1$th events for the same reasons as in the framework of \cite{toda2022evaluation}. Using these features, we trained four classifiers and we predicted $P_{scores}(s_i)$ and $P_{concedes}(s_i)$, $P_{gains}(s_i)$ and $P_{attacked}(s_i)$. Each of the classifiers estimates whether a state $s_i$ is labeled positive ($= 1$) or negative ($= 0$). Positive label was assigned if a attacking team in a state $s_i$ scored or conceded in the subsequent $k'$ events, and the latter was labeled positive if the defending team in the state $s_i$ gained the ball or attacked in the subsequent $k$ events. An illustrative example is shown Figure \ref{fig:example}. In both classifications, $k$ is a parameter freely determined by the user. The smaller $k$ is, the shorter term the prediction is and the smaller positive labels, so we can predict the probabilities reliably and obtain unambiguous interpretation. On the other hand, the larger $k$ is, the longer term the prediction is and the larger positive labels, so we can predict these considering many factors but obtain ambiguous interpretation. Since it is intrinsically difficult to solve this trade-off, we set $k=5$ and $k'=10$ like the previous studies of \cite{toda2022evaluation} and \cite{Decroos19} respectively. Next, we describe the feature vector creation. We first created three types of features; events, on-ball features and off-ball features. The events' types defined by VAEP \citep{Decroos19} had 24 events: id, pass, cross, throw in, freekick crossed, freekick short, corner crossed, corner short, take on, foul, tackle, interception, shot, shot penalty, shot freekick, keeper save, keeper claim, keeper punch, keeper pick up, clearance, bad touch, non action, dribble, goalkick (for details, see the paper \citep{Decroos19}). Yet, as shown in Figure \ref{fig:diagram}, this type was used in the classifiers for VAEP but not used for VDEP due to the difference of each concept. Whereas we predicts future actions in VAEP, the prediction in VDEP was whether a team was able to gain the ball or was penetrated into the penalty area. Hence, feature leakage (to know ground truth for prediction in advance) might occur if we used data including the types of events into the classifiers for VDEP. On the other hand, we utilized on-ball features and off-ball features in the classifiers for VAEP and VDEP. We first created the 21 dimensional features of $i$th and $i-1$th events about on-ball features. We used the body part such as foot, head, other and head/other when the player acts at $i$th event (4 dim.), whether a yellow/red card is given at the event (2 dim.), the scoreboard and the goal difference at the event (3 dim.), the xy coordinates of the ball at the start and the end (4 dim.), the displacements of the ball from the start to the end (x, y, the Euclidean norm: 3 dim.), the distance and angle between the ball and the goal at the start and the end (4 dim.), whether a team possessing the ball is a visitor or not (1 dim.) In addition, we considered the off-ball feature $o_i$ at the time that the event occurred in the state $s_i$. It should be noted that data utilized in this study was provided from broadcast videos shown in Figure \ref{fig:example}. Thus, we first verified impacts on the prediction score by changing offensive/defensive players in the order of closest to the ball. This verification is described in the subsection \textit{Evaluation and Statistical Analysis}. Moreover, we used the x and y coordinates of positions of all players (22 players xy coordinates) Furthermore, we calculated the distance and the angle of each player from the ball and sorted the features in the order of closest to the ball. Therefore, the feature for VAEP has 133 dimensions in total ($24 + 21 + 22\times 4$) and for VDEP has 109 ($21 + 22 \times 4$). As with the previous studies by \cite{Decroos19} and \cite{toda2022evaluation}, we used $k'=10$ when calculating $P_{scores}$ and $P_{concedes}$ and $k=5$ when calculating $P_{gains}$ and $P_{attacked}$. Also, 12,262 out of the 112,590 events generated by all teams had no data about all players' xy coordinates. Thus, we removed its data and utilized 100,328 events for classifiers. Out of the number of events, defined by $k$ above, there were 1,101 positive cases of scores, 186 concedes, 3,723 ball gain and 11,895 attacked. In UEFA EURO 2022, we removed 28,218 out of 61,433 events and utilized 33,215 for the classifiers, including 454 positive cases of scores, 132 concedes, 2,551 ball gain and 5,401 attacked. As with the previous studies, we can interpret these as goals scored and conceded are rare events compared to ball gains and attacked. Therefore, to verify the goals scored and conceded in this study with a smaller dataset (compared with the larger dataset in the previous work \cite{Decroos19}), we used a indicator and described it in the subsection \textit{Evaluation and Statistical Analysis}. \subsection*{Prediction Model Implementation} According to the previous frameworks of \cite{Decroos19} and \cite{toda2022evaluation}, we adopted XGBoost (eXtreme Gradient Boosting) proposed by \cite{Chen16} as the classifiers to predict scores, concedes, ball gains and attacked. Gradient tree boosting has been a popular technique since \cite{friedman2001greedy} proposed. This technique is a prediction algorithm that uses a previous information to optimise the splitting of explanatory variables and construct a new decision tree at each stage. Also, this performs well on a variety of areas, and in a faster and more scalable way. Moreover, even if the prediction model itself do not consider the time series structure, the previous studies of \cite{Decroos19} proposed the way for prediction models to reflect the history of the input ($i$th and $i-1$th events) and that of the output (the subsequent $k$ events) . In calculating VDEP and VAEP values, we verified the classifiers using a 10-fold cross-validation procedure. Here we define the terms of training, validation, and test (datasets). We train the machine learning model using the training dataset, validate the model performance using the validation dataset (sometimes for determining some hyper-parameters), and finally test the model performance using the test dataset. Note that we remove data not including the x and y coordinates of positions of players from these dataset. Such procedure benefits us to verify a model which can test the performance using a new test data (not used during training). In our case, the validation data was not used and hyperparameters are predetermined as default in Python library ``xgboost'' (version 1.4.1). Our all computations were performed using Python (version 3.7.13). In particular, the code we used \ifarxiv is provided at \url{https://github.com/Rikuhei-ynwa/Generalized-VDEP}. \else will be shared if our manuscript is ready for publication. \fi \subsection*{Evaluation and Statistical Analysis} \label{subsec:eval} First, we verified how many attacker/defender we need to predict the above probabilities. To this end, we defined the number of nearest attackers/defenders to the ball as \textit{n\_nearest}. This value was $0 \leq n\_nearest \leq 11$ and by each of this value, we predicted the probabilities such as $P_{gains}$ and $P_{attacked}$. Here we used a 10-fold cross-validation procedure. Again, the datasets have 51 games in UEFA EURO 2020 and 31 in UEFA EURO 2022. Hence, in the step of verification of classifiers, 9 out of 10 times we repeated the learning of classifiers using the data of 46 games and a prediction using the data of 5 games, and finally used the data of 45 games into the learning of classifiers and 6 games into the prediction (i.e., data of all 51 games were finally predicted and evaluated) to analyze all games in UEFA EURO 2020. Similarly, in UEFA EURO 2022, first 9 times the data of 26 games was used into the learning of classifiers and 5 games into the prediction repeatedly, and last time we change 26 to 25 at the classifier stage and 5 to 6 at the prediction stage. To validate the classifiers for the predictions, we used the F1 score as with the previous study of \cite{toda2022evaluation}. Data in this study was much more negative than positive cases like ones used in the previous studies. Then, the (intuitive) accuracy score may not be better when there are extremely more negative than positive cases, as in this and previous studies. For example, the accuracy of attacked in VDEP and scored in VAEP will be $1-11,895/100,328\approx0.881$ and $1-186/100,328\approx0.998$ when all negative cases are predicted. For this reason, in this study, we used the F1 score to evaluate whether the true positives can be classified without considering the true negatives. The F1 score is the harmonic mean of Precision and Recall as follows. \begin{equation*} \text{F1score} = (2 \times \text{Precision} \times \text{Recall}) / (\text{Precision} + \text{Recall}) \end{equation*} where the Recall and the Precision is defined as the true-positive rate and the ratio of the sum of true positives and true negatives to false positives respectively. In this index, only true positives are evaluated, not true negatives. Therefore, we saw how changing $n\_nearest$ affected values of F1 score for $P_{scores}(s_i)$, $P_{concedes}(s_i)$, $P_{gains}(s_i)$ and $P_{attacked}(s_i)$. For the evaluation of defense using GVDEP calculated by the above probabilities, we present examples to quantitatively and qualitatively evaluate games in a competition. To calculate GVDEP values, the classifiers first learned with all 51 games (UEFA EURO 2020) or 31 games (UEFA EURO 2022) and tested themselves. Note that, we performed the predictions and analyzed 36 group stage matches and 8 matches of the first round in UEFA EURO 2022 to analyze the best 16 teams with the same number of games. Similarly, in UEFA EURO 2022, 24 group stage games and 4 quarter-final games were analyzed for the same reasons. The GVDEP is then calculated using the probabilities obtained by the tests and equations \ref{eq:deltas} to \ref{eq:rGVDEP}. For correlation analysis between variables, we computed Pearson's correlation coefficient $r$. $p<0.05$ was considered significant for all statistical analysis. all statistical analyses were performed using SciPy (version 1.5.4) in the Python library \section*{Results} \label{sec:results} \subsection*{Verification of classifiers} To verify the GVDEP method, we first investigated the prediction performances by changes in the number of people around the ball ($n\_nearest$). As mentioned earlier, GVDEP has four classifiers of gains, attacked, scores and concedes. The latter two probabilities are the same as the previous work by \cite{Decroos19}. These classifiers predict probabilities of gains ($P_{gains}$), attacked ($P_{attacked}$), scores ($P_{scores}$) and concedes ($P_{concedes}$). In Figure \ref{fig:f1scores_euro2020}, we showed that changes in $n\_nearest$ influenced F1-scores of these predictions. For the prediction probability of gains, no improvement in F1-socres was observed after $n\_nearest$ is 3 or 4. In contrast, for the predictions of scores, concedes, and attacked, F1-scores did not increase with all players' location information. The results of UEFA EURO 2022 were similar and we show the results in Supplementary Figure 6. \subsection*{Valuation of team defenses} Next, we show examples of team defense valuations in UEFA EURO 2020 in Figure \ref{fig:valuations2020} (for UEFA EURO 2022, see Supplementary Figure 7). For the results of the previous VDEP definition, which is formulated by $P_{gains}$ and $P_{attacked}$, see Supplementary Figure 8. In the figures, the mean values of $\Delta P_{gains}$ and $\Delta P_{attacked}$ are defined as $gain\_value$ and $attacked\_value$ respectively, and GVDEP values are as \textit{g\_vdep\_value}. First, we characterize and evaluate team defenses using \textit{gain\_value} and the average of \textit{attacked\_value} in Figure \ref{fig:valuations2020}a. Overall, the trade-off between the averaged gain and attacked values were confirmed ($r_{14} = -0.757, p = 0.001 < 0.05$). That is, the teams with more gain values tended to more attacked (less value), and vice versa. This tendency was similar to the results of the previous work (the older definition and Japanese professional soccer league) by \cite{toda2022evaluation}. For specific teams valuation, for example, Italy that won UEFA EURO 2020 was able to keep their opponents (Turkey, Switzerland, Wales and Austria) from penetrating into the penalty area, suggesting that these would be connected with a less number of concedes (see also Figure \ref{fig:valuations2020}b). Also, both values of England that is one of the finalists was over each average values. In addition, Spain and Denmark that advanced to Semi-Final were both higher \textit{attacked\_value} but lower \textit{gain\_value}, suggesting that they may keep good defense where their opposite teams was unable to attack effectively. Second, we investigate the relationship between GVDEP values and actual concedes in Figure \ref{fig:valuations2020}b. There was no significant correlation between them ($r_{14} = -0.265, p = 0.321$). Please note that the concedes were integers and had small variations, then we proposed this approach to value the defense process (not the outcome). Thus, the difference between them is important rather than the correlation itself. For example, Italy and England were not conceded and were high GVDEP values, so we can guess they continued to protect themselves against concedes. Third, it should be noted that there was a strong correlation between GVDEP and attacked values ($r_{14} = 0.993, p = 3.161\times 10^{-14}$) in Figure 3c. This is probably because the attacked values and the absolute value of the coefficients ($|weight\_attacked| = 0.021$) were larger than those of gain values ($|weight\_gains| = 0.011$) in UEFA EURO 2020. Thus, we should be careful of the results interpretation in GVDEP values, which is similar to those of the attacked values. Specifically, the finalists tried to preserve the state where it was difficult for their opposite teams to attack effectively, so their GVDEP values are high. Also, Spain and Denmark were at their average levels of gain and attacked values. Lastly, for this reason, we investigated the relationship between the gain values and the concedes in Figure 3d. There was also no significant correlation between them ($r_{14} = 0.389, p = 0.136$). Yet, roughly speaking, the figure shows if a team tries to bring the state where they can gain the ball in the future, they will take more risks. For instance, the number of Italy's concedes and their \textit{gain\_value} were less as with the tendency. In short, they did not aim to gain the ball when defending. However, England's \textit{gain\_value} was over the mean value and their concedes were less than any other team, suggesting they tried such as tackles and interceptions and succeeded. In addition, \textit{gain\_value} of Spain and Denmark were low but they allowed more concedes than Italy. \section*{Discussion} In this study, we proposed a generalized valuation method of defensive teams by score-scaling the predicted probabilities of the events. First, we verified the existing probabilities based on the prediction performance. Second, we quantitatively analyzed the games in UEFA EURO 2020 using the defensive evaluations of the proposed. Finally, we discuss the limitations of the proposed methods and future perspectives. To calculate the probabilities of ball gain and being attacked, the previous study \cite{toda2022evaluation} used data including all player's features. However, the type of this data is not always available because this is often private or expensive. Hence, we used open-source data including all player's location data in the video frame, and we verified the existing classifiers' performances. This result suggests that although features of not only the ball but the players are important to improve the classifier in ball gain, we do not necessary need all players locations. Our results suggest that our approach can evaluate defensive performances only with the open-source data. Considering the team evaluations, according to the correlation analysis, we found the trade-off between the tendencies of gaining the ball and of not being effectively attacked. As a finalist team, England was able to maintain a good balance between both the ball gain and not being attacked at a high level, and they did not allow to concede until the analyzed Round of 16. A champion team, Italy's $attacked\_value$ is the highest in teams that went to the knockout phase. We also found that they did not concede except for the corner kick in the analyzed games. However, Belgium and Czech Republic were evaluated with lower values in our method in spite of the low number of concedes. This may be because their keepers made efforts to prevent concedes. Indeed, out of the four matches played up to the round of 16, Belgium and Czech Republic have kept three or two clean sheets (no concedes), respectively. In this study, we consider the evaluation of team defense, not the contributions of the keepers, thus we may acquire such results. Finally, we introduce the limitations of this study and future perspectives. The first is about the use of data. Since data used in this study does not necessarily include all players' features, results about verification of classifiers do not perfectly describe the performance. Another issue is that our formula is too affected by $attacked\_value$. As shown in Figure \ref{fig:valuations2020}c, we found a too high positive correlation between $attacked\_value$ and GVDEP values. It is true that not allowing the opponent to attack can be seen as reducing the probability of conceding a goal, but it is difficult to assess the defence on this basis alone. The last is the definition of off-ball features and the modeling. In this study, these include the x and y coordinates of positions of all players (22 players xy coordinates) and the distance and the angle of each player from the ball, sorted in the order of closest to the ball, at the analysis stage. For future work, we can consider more specific features of the off-ball defense (e.g., a defense line) or other nonlinear modeling such as using neural networks. \ifarxiv \section*{Acknowledgments} This work was supported by JSPS KAKENHI (Grant Numbers 20H04075 and 21H05300), JST START University Ecosystem Promotion Type (Grant Number JPMJST2183), and JST Presto (Grant Number JPMJPR20CA). \input{reference.bbl} \else \bibliographystyle{apa}
{ "attr-fineweb-edu": 2.091797, "attr-cc_en_topic": 0, "domain": "arxiv" }
BkiUbrY25V5hRtaNkFve
\section{Introduction} Description logics (DLs) \cite{dlhandbook} are a well-known and widely used family of knowledge representation formalisms that, thanks to their clear syntax and formal semantics, have been used to represent and deal with the knowledge of various representation domains. Among the many members of this family, a subfamily of languages with a limited expressivity, known as the \text{DL-Lite}\xspace family \cite{dl-lite} has a prominent role. In fact, simplifying a bit, \text{DL-Lite}\xspace was originally designed with the goal of including background knowledge within the task of answering queries, and avoiding the need for an explicit enumeration of all the facts that are implicitly implied by the domain knowledge. Consider for example a touristic scenario, which includes information about museums, monuments, restaurants, and pubs. Knowing that museums and monuments are touristic attractions, and that restaurants and pubs are eateries, one can immediately deduce that the modern art museum and the peace monument are touristic attractions, and that the Irish pub is an eatery, without having to make this knowledge explicit. A user may thus ask for e.g., a \emph{tourist attraction that contains an eatery}. Using classical query answering techniques \cite{OrSi-RW12}, all attractions that satisfy this requirement can be efficiently retrieved. Being based on classical logic, DLs in general and \text{DL-Lite}\xspace in particular are unable to handle imprecise or vague knowledge effectively. In our touristic scenario, for instance, we may want to extend the knowledge with some additional properties of the objects of interest. For example, a tourist in a hurry may want to visit the \emph{popular} attractions first; or a backpacker on a budget may be more interested in finding \emph{cheap} eateries. Note that \emph{cheap} and \emph{popular} are two vague notions that do not allow for any precise definition. In a simplistic scenario, cheapness may be defined in terms of the mean cost for a meal, but even then, it is impossible to specify a precise price-point where an eatery stops being cheap; moreover this is also a subjective notion. The case of popularity is even worse, as there is no obvious proxy for it. To solve this issue, fuzzy extensions of DLs have been widely studied; see for example \cite{Borgwardt:PhD,BoPe-SUM17,BCE+-15,LuSt08,Cera:PhD} and references therein. In essence, fuzzy logic \cite{Haje98} extends classical logic by allowing truth degrees in the interval $[0,1]$ for the propositions that contain fuzzy (or vague) predicates. One can thus say, e.g., that the modern art museum is popular to a degree of $0.8$ meaning, intuitively, that it is popular, but more popular attractions may exist. Interestingly, although fuzzy DLs and their reasoning services have been widely studied, the task of answering queries based on fuzzy ontologies has been mostly ignored. Most of the earlier work from this point of view was carried out by Straccia and Pan. Specifically, Straccia \cite{Stra-JELIA06} studied the problem of computing the answers with highest degree on a query w.r.t.\ some background knowledge. This was followed by Pan et al. \cite{PSST-DL07}, who considered more complex queries to be answered. While from some perspective these works seem to cover the whole area of query answering, they were based on the so-called Zadeh semantics, which does not have adequate properties from a mathematical logic point of view \cite{Haje98}. Another limitation of all these approaches is that they allowed only the facts in the ontology to be graded, but restricted the terminological knowledge to be crisp (i.e., hold fully). Other work considering query answering in fuzzy DLs includes~\cite{Stra-IS12}, where the $k$ answers with the highest degree are retrieved. This latter work is closer to our approach but has several limitations. Perhaps the most obvious is that its semantics follows a closed-world assumption, even in the case of background knowledge. In addition, background knowledge is interpreted as a \emph{rule}, where the degrees of the body of an axiom define the degree of the head, but knowledge about the head cannot be used to infer knowledge about the body. We, in change, use the open world assumption, as typical in knowledge representation, and use the logical interpretation of axioms. Later on, Turhan and Mailis studied the problem of query answering w.r.t.\ background knowledge from the point of view of fuzzy logic \cite{Haje98}, where the semantics are based on the properties of continuous triangular norms \cite{KlMP00}. They developed a technique for computing the satisfaction degrees of conjunctive queries when the semantics were based on the G\"odel t-norm \cite{MaTu-JIST14}. This technique, which is based on the construction of a classical query, was later implemented and shown to be effective in \cite{MaTZ-DL15}. However, it still suffered from two main drawbacks: (i) it was only capable to handle the idempotent (G\"odel) t-norm, and (ii) terminological knowledge had to still be precise, allowing no graded axioms. The latter condition is essential for the correctness of their approach: their reduction is unable to keep track of the degrees used by the terminological axioms, as this would require an unbounded memory use. In this paper, we study the problem of query answering w.r.t.\ \text{DL-Lite}\xspace ontologies, filling out the gaps left by the existing work. To be more explicit, our work is the first to consider adequate semantics from the mathematical fuzzy logic point of view, alongside graded axioms stating vague knowledge beyond just vague data. We start by considering the kind of conjunctive queries studied by Turhan and Mailis, but allowing the association of numeric degrees also in the TBox. Interestingly, although this is a generalization of the previously studied setting, we are able to develop a much simpler method, which does not rely on rewriting, but rather on a reduction to a classical query answering scenario. The method is based on the idea of \emph{cut ontologies}, where all knowledge holding to a low degree is ignored. Hence, we obtain a more robust and easier to maintain approach than previous work. Still considering the G\"odel t-norm, we considered the case of threshold queries, also left open in previous work, in which every conjunct in a query is assigned a different degree. In this case, a direct reduction to classical query answering does not work, but we were able to adapt the classical rewriting methods to handle the degrees effectively. The final part of the paper considers other t-norms as the underlying semantics for the fuzzy constructors. In this case, we show through several examples that conjunctive queries cannot be easily handled, but we identify some special cases where queries can be effectively answered. On the other hand, we show that we can still apply the rewriting technique to answer threshold queries, even for non-idempotent t-norms. This is a surprising result because in the idempotent scenario threshold queries are a generalization of conjunctive queries. Some of the results in this paper were previously published in \cite{PaPe-RR20}. In addition to full proofs, deeper explanations, and examples, here we extend that previous work by handling threshold queries, including the full rewriting technique from Section \ref{sec:tq}. We also provide better results for non-idempotent t-norms, and highlight some of the problems of combining conjunctions and non-idempotent t-norms in the appendix. \section{Preliminaries} We briefly introduce the syntax and semantics of fuzzy \text{DL-Lite}\ensuremath{_R}\xspace and other related notions that will be important for this paper. Let \ensuremath{N_C}\xspace, \ensuremath{N_R}\xspace, and \ensuremath{N_I}\xspace be three mutually disjoint sets whose elements are called \emph{concept names}, \emph{role names}, and \emph{individual names}, respectively. The sets of \text{DL-Lite}\ensuremath{_R}\xspace\emph{concepts} and \emph{roles} are built through the grammar rules: \begin{align*} B::={}& A\mid \exists Q & C::={}&B\mid\neg B \\ Q::={}& P\mid P^- & R::={}&Q\mid\neg Q \end{align*} where $A\in\ensuremath{N_C}\xspace$ and $P\in\ensuremath{N_R}\xspace$. Concepts of the form $B$ and roles of the form $Q$ are called \emph{basic}, and all others are called \emph{general}. \begin{definition}[ontology] A \emph{fuzzy \text{DL-Lite}\ensuremath{_R}\xspace TBox} is a finite set of \emph{fuzzy axioms} of the form $\left<B\sqsubseteq C,d\right>$ and $\left<Q\sqsubseteq R,d\right>$, where $d$ is a number in $[0,1]$. An axiom is \emph{positive} if it does not have negation on its right-hand side and \emph{negative} otherwise. A \emph{fuzzy \text{DL-Lite}\ensuremath{_R}\xspace ABox} is a finite set of \emph{fuzzy assertions} of the form $\left<B(a),d\right>$ and $\left<P(a,b),d\right>$, where $a,b\in\ensuremath{N_I}\xspace$. A \emph{fuzzy \text{DL-Lite}\ensuremath{_R}\xspace ontology} is a pair of the form $\ensuremath{\mathcal{O}}\xspace=(\ensuremath{\mathcal{T}}\xspace,\ensuremath{\mathcal{A}}\xspace)$ where \ensuremath{\mathcal{T}}\xspace is a TBox and \ensuremath{\mathcal{A}}\xspace is an ABox. \end{definition} Note that negations can never occur on the left-hand side of an axiom. In the remainer of this paper, we will mostly exclude the qualifiers ``fuzzy,'' and ``\text{DL-Lite}\xspace'' and simply refer to axioms, ontologies, etc. The semantics of fuzzy \text{DL-Lite}\ensuremath{_R}\xspace is based on fuzzy interpretations, which provide a \emph{membership degree} or for objects belonging to the different concept and role names. Formally, following the basics of classical description logics, concept names are interpreted as fuzzy unary relations, and role names are interpreted as fuzzy binary relations. To fully define this semantics in the presence of other constructors according to fuzzy logic, we need the notion of a triangular norm (or \emph{t-norm} for short). \begin{definition}[t-norm] A \emph{t-norm} $\otimes$ is a binary operator over the real interval $[0,1]$ that is commutative, associative, monotonic, and has $1$ as the neutral element; i.e., $1\otimes x=x$ for all $x\in[0,1]$ \cite{KlMP00}. \end{definition} Triangular norms are used to generalize the logical conjunction to handle truth degrees that take values from the interval $[0,1]$. Every continuous t-norm defines a unique \emph{residuum} $\Rightarrow$ where $f\otimes d\le e$ iff $f\le d\Rightarrow e$. The residuum interprets implications. With the help of this operation, it is also possible to interpret other logical operators such as negation ($\ominus d:=d\Rightarrow 0$). The three basic continuous t-norms are the \emph{G\"odel}, \emph{\L ukasiewicz}, and \emph{product} t-norms, which are defined, with their residua and negations in Table~\ref{tab:tnorm}. \begin{table}[tb] \caption{The three fundamental continuous t-norms and related operations} \label{tab:tnorm} \centering \begin{tabular}{@{}l@{\qquad}l@{\qquad}l@{\qquad}l@{}} \toprule Name & $d \otimes e$ & $d \Rightarrow e$ & $\ominus d$ \\ \midrule G\"odel & $\min\{d,e\}$ & $\begin{cases}1&d\le e\\ e&\text{otherwise}\end{cases}$ & $\begin{cases}1&d=0\\ 0&\text{otherwise}\end{cases}$\\ \L ukasiewicz & $\max\{d+e-1,0\}$ & $\min\{1-d+e,1\}$ & $1-d$\\ product & $d\cdot e$ & $\begin{cases}1&d\le e\\ e/d&\text{otherwise}\end{cases}$ & $\begin{cases}1&d=0\\ 0&\text{otherwise}\end{cases}$\\ \bottomrule \end{tabular} \end{table} These t-norms are the ``fundamental'' ones in the sense that every other continuous t\mbox{-}norm is isomorphic to the ordinal sum of copies of them \cite{Haje98,MoSh-AM57}. Hence, as usual, we focus our study on these three t-norms. Note that the residuum always satisfies that $d\Rightarrow e=1$ iff $d\le e$, and that in the G\"odel and product t-norms the negation is annihilating in the sense that it maps to 0 any positive value, while the negation of 0 is 1. In particular, this means that the negation is not \emph{involutive}; that is, $\ominus\ominus d\not=d$ in general. In contrast, the negation operator for the \L ukasiewicz t-norm is involutive. In addition, the \L ukasiewicz t-norm is the only t-norm (up to isomorphism) with the property that for every $x\in (0,1)$ there exists a $y\in (0,1)$ such that $x\otimes y=0$. Specifically, this $y$ is $1-x$. In other words, the \L ukasiewicz t-norm is \emph{nilpotent}. From now on, unless specified explicitly otherwise, we assume that we have an arbitrary, but fixed, t-norm $\otimes$ which underlies the operators used. When the t\mbox{-}norm becomes relevant in the following sections, we will often use G, $\Pi$, and \L{} as prefixes to express that the underlying t-norm is G\"odel, product, or \L ukasiewicz, respectively, as usual in the literature. We can now formally define the semantics of the logic. An \emph{interpretation} is a pair $\ensuremath{\mathcal{I}}\xspace=(\Delta^\ensuremath{\mathcal{I}}\xspace,\cdot^\ensuremath{\mathcal{I}}\xspace)$, where $\Delta^\ensuremath{\mathcal{I}}\xspace$ is a non-empty set called the \emph{domain}, and $\cdot^\ensuremath{\mathcal{I}}\xspace$ is the \emph{interpretation function} which maps: (i) every individual name $a\in\ensuremath{N_I}\xspace$ to an element $a^\ensuremath{\mathcal{I}}\xspace\in\Delta^\ensuremath{\mathcal{I}}\xspace$; (ii) every concept name $A\in\ensuremath{N_C}\xspace$ to a function $A^\ensuremath{\mathcal{I}}\xspace:\Delta^\ensuremath{\mathcal{I}}\xspace\to[0,1]$; and (iii) every role name $P\in\ensuremath{N_R}\xspace$ to a function $P^\ensuremath{\mathcal{I}}\xspace:\Delta^\ensuremath{\mathcal{I}}\xspace\times\Delta^\ensuremath{\mathcal{I}}\xspace\to[0,1]$. That is, concept names are interpreted as fuzzy unary relations and role names are interpreted as fuzzy binary relations over $\Delta^\ensuremath{\mathcal{I}}\xspace$. The interpretation function is extended to other constructors with the help of the t-norm operators as follows. For every $\delta,\eta\in\Delta^\ensuremath{\mathcal{I}}\xspace$, \begin{align*} (\exists Q)^\ensuremath{\mathcal{I}}\xspace(\delta) := {} & \sup_{\delta'\in\Delta^\ensuremath{\mathcal{I}}\xspace}Q^\ensuremath{\mathcal{I}}\xspace(\delta,\delta') & (\neg B)^\ensuremath{\mathcal{I}}\xspace(\delta) := & \ominus B^\ensuremath{\mathcal{I}}\xspace(\delta) & (\top)^\ensuremath{\mathcal{I}}\xspace(\delta) := & 1 \\ (P^-)^\ensuremath{\mathcal{I}}\xspace(\delta,\eta) := {} & P^\ensuremath{\mathcal{I}}\xspace(\eta,\delta) & (\neg Q)^\ensuremath{\mathcal{I}}\xspace(\delta,\eta) := & \ominus Q^\ensuremath{\mathcal{I}}\xspace(\delta,\eta) & \end{align*} The interpretation \ensuremath{\mathcal{I}}\xspace \emph{satisfies} the axiom \begin{itemize} \item $\left<B\sqsubseteq C,d\right>$ iff $B^\ensuremath{\mathcal{I}}\xspace(\delta)\Rightarrow C^\ensuremath{\mathcal{I}}\xspace(\delta)\ge d$ holds for every $\delta\in\Delta^\ensuremath{\mathcal{I}}\xspace$; and \item $\left<Q\sqsubseteq R,d\right>$ iff $Q^\ensuremath{\mathcal{I}}\xspace(\delta,\eta)\Rightarrow R^\ensuremath{\mathcal{I}}\xspace(\delta,\eta)\ge d$ holds for every $\delta,\eta\in\Delta^\ensuremath{\mathcal{I}}\xspace$ \end{itemize} It is a \emph{model} of the TBox \ensuremath{\mathcal{T}}\xspace if it satisfies all axioms in \ensuremath{\mathcal{T}}\xspace. \ensuremath{\mathcal{I}}\xspace \emph{satisfies} the assertion \begin{itemize} \item $\left<B(a),d\right>$ iff $B^\ensuremath{\mathcal{I}}\xspace(a^\ensuremath{\mathcal{I}}\xspace)\ge d$; \item $\left<P(a,b),d\right>$ iff $P^\ensuremath{\mathcal{I}}\xspace(a^\ensuremath{\mathcal{I}}\xspace,b^\ensuremath{\mathcal{I}}\xspace)\ge d$. \end{itemize} It is a \emph{model} of the ABox \ensuremath{\mathcal{A}}\xspace if it satisfies all axioms in \ensuremath{\mathcal{A}}\xspace, and it is a \emph{model} of the ontology $\ensuremath{\mathcal{O}}\xspace=(\ensuremath{\mathcal{T}}\xspace,\ensuremath{\mathcal{A}}\xspace)$ if it is a model of \ensuremath{\mathcal{T}}\xspace and of \ensuremath{\mathcal{A}}\xspace. We note that the classical notion of \text{DL-Lite}\ensuremath{_R}\xspace \cite{dl-lite} is a special case of fuzzy \text{DL-Lite}\ensuremath{_R}\xspace, where all the axioms and assertions hold with degree 1. In that case, it suffices to consider interpretations which map all elements to $\{0,1\}$ representing the classical truth values. When speaking of classical ontologies, we remove the degree and assume it implicitly to be 1. \begin{example} \label{exa:run} Consider an ontology $\ensuremath{\mathcal{O}_\textsf{exa}}\xspace=(\ensuremath{\mathcal{T}_\textsf{exa}}\xspace,\ensuremath{\mathcal{A}_\textsf{exa}}\xspace)$ representing some knowledge about a touristic location. The TBox \begin{align*} \ensuremath{\mathcal{T}_\textsf{exa}}\xspace =\{ & \ax{\textsf{Monument} \sqsubseteq \textsf{TouristAttraction}}, \quad \ax{\textsf{Museum} \sqsubseteq \textsf{TouristAttraction}}, \\ & \ax{\textsf{Pub} \sqsubseteq \textsf{Eatery}}, \quad \ax{\textsf{Restaurant} \sqsubseteq \textsf{Eatery}}, \quad \ax{\textsf{locIn}\sqsubseteq \textsf{Near}}\\ & \ax[0.6]{\textsf{Museum} \sqsubseteq \textsf{Popular}}, \quad \ax[0.5]{\exists \textsf{locIn}\sqsubseteq \neg \textsf{Cheap}} & \} \end{align*} defines some notions about eateries and tourist attractions, including some vague notions in the last two axioms. For example, it expresses that museums are popular (with a degree at least 0.6), and that services located at some attraction are not cheap (with degree at least 0.5). The ABox \begin{align*} \ensuremath{\mathcal{A}_\textsf{exa}}\xspace = \{ & \ax{\textsf{Monument}(\textsf{peace})}, \quad \ax{\textsf{Monument}(\textsf{love})}, \\ & \ax{\textsf{Museum}(\textsf{modernArt})}, \quad \ax{\textsf{Museum}(\textsf{contArt})}, \quad \ax{\textsf{Museum}(\textsf{comic})}, \\ & \ax{\textsf{Restaurant}(\textsf{sioux})}, \quad \ax{\textsf{Restaurant}(\textsf{gamberone})}, \\ & \ax{\textsf{Pub}(\textsf{irish})}, \quad \ax{\textsf{locIn}(\textsf{sioux},\textsf{modernArt})}, \\ & \ax[0.8]{\textsf{Popular}(\textsf{comic})}, \quad \ax[0.6]{\textsf{Cheap}(\textsf{irish})}, \quad \ax[0.7]{\textsf{near}(\textsf{irish},\textsf{comic})} & \} \end{align*} provides information about the specific attractions and services provided at the location. From this information, we can deduce, for example, that the \textsf{modernArt} museum is a \textsf{TouristAttraction}, and is \textsf{Popular} to a degree at least $0.6$. Under the G\"odel t-norm, a possible model of \ensuremath{\mathcal{O}_\textsf{exa}}\xspace is depicted graphically in Figure~\ref{fig:model}, where any assertion not depicted is considered to hold to degree 0. \begin{figure} \includegraphics[width=\textwidth]{model} \caption{A model for the ontology \ensuremath{\mathcal{O}_\textsf{exa}}\xspace from Example~\ref{exa:run}. Individual names are abbreviated to avoid cluttering, and start with a lower-case letter as customary in DLs. The shape and border of the nodes represent the crisp concepts, while vague concepts are associated to a degree. } \label{fig:model} \end{figure} For example, the model from Figure~\ref{fig:model} interprets the \textsf{irish} pub as being \textsf{Cheap} to degree 0.7, which satisfies the constraint in the ABox requiring this degree to be at least 0.6. In addition, the \textsf{peace} monument is \textsf{Popular} to degree 0.3, even though there is no explicit requirement for this in \ensuremath{\mathcal{O}_\textsf{exa}}\xspace. Note that under this semantics, any model \ensuremath{\mathcal{I}}\xspace of \ensuremath{\mathcal{O}_\textsf{exa}}\xspace should necessarily satisfy that $\textsf{Cheap}^\ensuremath{\mathcal{I}}\xspace(\textsf{sioux}^\ensuremath{\mathcal{I}}\xspace)=0$; this is in fact the case in the model from Figure~\ref{fig:model}. Hence, adding any assertion of the form \ax[d]{\textsf{Cheap}(\textsf{sioux})} with $d>0$ to this ontology would make it inconsistent. \end{example} For this paper, we are interested in answering two kinds of queries. The first kind are conjunctive queries, which consider whether a combination of facts can be derived from the knowledge in an ontology. In the fuzzy setting, the degree of such derivation must also be taken into account. Let \ensuremath{N_V}\xspace be a set of \emph{variables}, which is disjoint from \ensuremath{N_I}\xspace, \ensuremath{N_C}\xspace, and \ensuremath{N_R}\xspace. A \emph{term} is an element of $\ensuremath{N_V}\xspace\cup\ensuremath{N_I}\xspace$; that is, an individual name or a variable. An \emph{atom} is an expression of the form $C(t)$ (concept atom) or $P(t_1,t_2)$ (role atom). Henceforth, \ensuremath{\mathbf{x}}\xspace and \ensuremath{\mathbf{y}}\xspace denote tuples of variables. \begin{definition}[conjunctive query] A \emph{conjunctive query} (CQ) is a first-order formula of the form $\exists \ensuremath{\mathbf{y}}\xspace.\phi(\ensuremath{\mathbf{x}}\xspace,\ensuremath{\mathbf{y}}\xspace)$ where $\phi$ is a conjunction of atoms which only use the variables from \ensuremath{\mathbf{x}}\xspace and \ensuremath{\mathbf{y}}\xspace. The variables \ensuremath{\mathbf{y}}\xspace are called \emph{existential variables}, and those in \ensuremath{\mathbf{x}}\xspace are \emph{answer variables}. A \emph{union of conjunctive queries} (UCQ) is a finite set of CQs that use the same answer variables. Henceforth, $\ensuremath{\mathsf{At}}\xspace(\phi)$ denotes the set of all atoms appearing in $\phi$. \end{definition} As in the classical setting, an answer to a conjunctive query, or a union of conjunctive queries, is only considered when it is provided by every model of the ontology. This is usually known as a \emph{certain answer}. Given the CQ $q(\ensuremath{\mathbf{x}}\xspace)=\exists \ensuremath{\mathbf{y}}\xspace.\phi(\ensuremath{\mathbf{x}}\xspace,\ensuremath{\mathbf{y}}\xspace)$, the interpretation \ensuremath{\mathcal{I}}\xspace, and a tuple of individuals \ensuremath{\mathbf{a}}\xspace of the same length as \ensuremath{\mathbf{x}}\xspace, a \emph{match} is a mapping $\pi$ which assigns to each $a\in\ensuremath{N_I}\xspace$ the value $a^\ensuremath{\mathcal{I}}\xspace$; to each variable in \ensuremath{\mathbf{x}}\xspace the corresponding element of $\ensuremath{\mathbf{a}}\xspace^\ensuremath{\mathcal{I}}\xspace$; and to each variable in \ensuremath{\mathbf{y}}\xspace an element $\delta\in\Delta^\ensuremath{\mathcal{I}}\xspace$. We extend the match $\pi$ to apply to assertions as follows: $\pi(B(t))=B(\pi(t))$ and $\pi(P(t_1,t_2))=P(\pi(t_1),\pi(t_2))$. The \emph{degree} of the CQ $q(\ensuremath{\mathbf{x}}\xspace)$ w.r.t.\ the match $\pi$ is \[ q^\ensuremath{\mathcal{I}}\xspace(\ensuremath{\mathbf{a}}\xspace^\ensuremath{\mathcal{I}}\xspace,\pi(\ensuremath{\mathbf{y}}\xspace)):=\bigotimes_{\alpha\in\ensuremath{\mathsf{At}}\xspace(\phi)}(\pi(\alpha))^\ensuremath{\mathcal{I}}\xspace. \] That is, a match maps all the variables in the query to elements of the interpretation domain, where the tuple \ensuremath{\mathbf{a}}\xspace is used to identify the mapping of the answer variables. The satisfaction or matching degree of the query is the (fuzzy) conjunction---that is, the t-norm---of the satisfaction or matching degrees of the atoms under this mapping. From now on, $\Pi(\ensuremath{\mathcal{I}}\xspace)$ denotes the set of all matches of $q(\ensuremath{\mathbf{x}}\xspace)$ w.r.t.\ the interpretation \ensuremath{\mathcal{I}}\xspace. An important difference between classical query answering and our setting is that the fuzzy semantics provides a degree to every possible atom. Hence, in reality $\Pi(\ensuremath{\mathcal{I}}\xspace)$ is always defined by the set of all tuples of individuals with length $|\ensuremath{\mathbf{x}}\xspace|$. However, the degree of these matches varies and may often be zero. For example, for the model \ensuremath{\mathcal{I}}\xspace in Figure~\ref{fig:model} and the query $q(x)=\textsf{Popular}(x)$, the set of all matches $\Pi(\ensuremath{\mathcal{I}}\xspace)$ assigns to the variable $x$ any of the constants $\{\textsf{mA},\textsf{cA},\textsf{c},\textsf{p},\textsf{l},\textsf{s},\textsf{i},\textsf{g}\}$ to degrees $0.7,0.6,0.9,0.3,0,0,0$, and $0$, respectively. When answering a query, one is often interested in the matches that hold to at least some degree $d$, as defined next. \begin{definition}[degree queries] A tuple of individuals \ensuremath{\mathbf{a}}\xspace is an \emph{answer} of the conjunctive query $q(\ensuremath{\mathbf{x}}\xspace)$ to degree $d$ w.r.t.\ the interpretation \ensuremath{\mathcal{I}}\xspace (denoted by $\ensuremath{\mathcal{I}}\xspace\models q(\ensuremath{\mathbf{a}}\xspace)\ge d$) iff $q^\ensuremath{\mathcal{I}}\xspace(\ensuremath{\mathbf{a}}\xspace^\ensuremath{\mathcal{I}}\xspace):=\sup_{\pi\in\Pi(\ensuremath{\mathcal{I}}\xspace)}q^\ensuremath{\mathcal{I}}\xspace(\ensuremath{\mathbf{a}}\xspace^\ensuremath{\mathcal{I}}\xspace,\pi(\ensuremath{\mathbf{y}}\xspace))\ge d$. It is a \emph{certain answer} (or \emph{answer} for short) of $q(\ensuremath{\mathbf{x}}\xspace)$ over the ontology \ensuremath{\mathcal{O}}\xspace to degree $d$ (denoted by $\ensuremath{\mathcal{O}}\xspace\models q(\ensuremath{\mathbf{a}}\xspace)\ge d$) iff $\ensuremath{\mathcal{I}}\xspace\models q(\ensuremath{\mathbf{a}}\xspace)\ge d$ holds for every model \ensuremath{\mathcal{I}}\xspace of \ensuremath{\mathcal{O}}\xspace. The crisp set of certain answers of the query $q(\ensuremath{\mathbf{x}}\xspace)$ w.r.t.\ \ensuremath{\mathcal{O}}\xspace and their degree is denoted by $\ensuremath{\mathsf{ans}}\xspace(q(\ensuremath{\mathbf{x}}\xspace),\ensuremath{\mathcal{O}}\xspace)$; that is, \[ \ensuremath{\mathsf{ans}}\xspace(q(\ensuremath{\mathbf{x}}\xspace),\ensuremath{\mathcal{O}}\xspace):=\{(\ensuremath{\mathbf{a}}\xspace,d)\mid \ensuremath{\mathcal{O}}\xspace\models q(\ensuremath{\mathbf{a}}\xspace)\ge d \text{ and for all }d'>d, \ensuremath{\mathcal{O}}\xspace\not\models q(\ensuremath{\mathbf{a}}\xspace)\ge d'\}. \] \end{definition} It is important to keep in mind that the atoms in a CQ are not graded, but simply try to match with elements in the domain as both concept and roles are interpreted as fuzzy relations (unary and binary, respectively). The use of the truth degrees in the ontology becomes relevant in the degree of the answers found. Moreover, recall that every tuple of individuals of length $|\ensuremath{\mathbf{x}}\xspace|$ belongs to $\ensuremath{\mathsf{ans}}\xspace(q(\ensuremath{\mathbf{x}}\xspace),\ensuremath{\mathcal{O}}\xspace)$, but with different associated degrees. Returning to our example, while all individuals belong to the set $\ensuremath{\mathsf{ans}}\xspace(q(x))$, for the query $q(x)=\textsf{Popular}(x)$ to some degree, the certain answers for $q(x)$ w.r.t.\ \ensuremath{\mathcal{O}_\textsf{exa}}\xspace to degree at least 0.6 are only \textsf{modernArt}, \textsf{contArt}, and \textsf{comic}. The latter one is the only answer to degree at least 0.8. The second kind of query we are interested in generalises that of degree queries, when considering the G\"odel semantics, by allowing a degree threshold for each of the atoms in the conjunction, rather than for the overall conjunction. We formally define this class next. \begin{definition}[threshold queries] A \emph{threshold atom} is an expression of the form $\alpha\ge d$, where $\alpha$ is an atom and $d\in[0,1]$. A \emph{threshold query} (TQ) is a first-order formula of the form $\exists \ensuremath{\mathbf{y}}\xspace.\phi(\ensuremath{\mathbf{x}}\xspace,\ensuremath{\mathbf{y}}\xspace)$ where $\phi$ is a conjunction of threshold atoms using only the variables from \ensuremath{\mathbf{x}}\xspace and \ensuremath{\mathbf{y}}\xspace. \end{definition} The notion of a match and an answer to a threshold query are analogous to those of degree queries, with the proviso that the degree bounds apply at the level of atoms, and not at the level of queries. \begin{definition}[TQ answer] Given an interpretation \ensuremath{\mathcal{I}}\xspace and a tuple of individuals \ensuremath{\mathbf{a}}\xspace, the match $\pi$ \emph{satisfies} the threshold atom $\alpha\ge d$ (denoted by $\pi\models\alpha\ge d$) iff $\alpha^\ensuremath{\mathcal{I}}\xspace\ge d$. It \emph{satisfies} the threshold query $q(\ensuremath{\mathbf{x}}\xspace)=\exists \ensuremath{\mathbf{y}}\xspace.\phi(\ensuremath{\mathbf{x}}\xspace,\ensuremath{\mathbf{y}}\xspace)$ ($\pi\models q(\ensuremath{\mathbf{a}}\xspace)$) iff $\pi\models \alpha\ge d$ holds for every threshold atom in $q$. A tuple of individuals \ensuremath{\mathbf{a}}\xspace is an \emph{answer} to the TQ $q(\ensuremath{\mathbf{x}}\xspace)$ w.r.t.\ the interpretation \ensuremath{\mathcal{I}}\xspace ($\ensuremath{\mathcal{I}}\xspace\models q(\ensuremath{\mathbf{a}}\xspace)$) iff there is a match $\pi$ w.r.t.\ \ensuremath{\mathbf{a}}\xspace and \ensuremath{\mathcal{I}}\xspace such that $\pi\models q(\ensuremath{\mathbf{a}}\xspace)$. It is a \emph{certain answer} of $q(\ensuremath{\mathbf{x}}\xspace)$ over the ontology \ensuremath{\mathcal{O}}\xspace iff for every model \ensuremath{\mathcal{I}}\xspace of \ensuremath{\mathcal{O}}\xspace it holds that $\ensuremath{\mathcal{I}}\xspace\models q(\ensuremath{\mathbf{a}}\xspace)$. \end{definition} Note that, differently from conjunctive queries, but in an analogous manner to degree queries, the answers to a threshold query are \emph{not} graded. Indeed, a tuple \ensuremath{\mathbf{a}}\xspace may or may not be an answer, and we are interested in finding those tuples which satisfy the degrees at each of the threshold atoms. In a sense, threshold queries provide a more fine-grained structure to deal with the properties of interest within a query in relation to degree queries. Indeed, in a degree query, one can only provide an overall degree which should be obtained after the degrees of all the atoms are conjoined via the t-norm. In particular, for non-idempotent t-norms and large queries, this conjunction will tend to be smaller and smaller, and the degrees of the independent atoms have the same influence overall. Even when considering the idempotent G\"odel t-norm, a degree query $q(\ensuremath{\mathbf{x}}\xspace)\ge d$ only expresses that all the atoms in $q$ should hold to degree at least $d$ (recall that the G\"odel t-norm refers to the minimum operator), but it is not possible to express that some atoms should hold with a higher degree. A threshold query, on the other hand, is capable of requiring different degrees for each of the atoms. \begin{example} \label{exa:queries} Suppose, in our running example, that we are interested in finding a cheap eatery that is nearby a popular tourist attraction, and that we are using the G\"odel semantics. This basic query could be expressed as% \footnote{For brevity, we conjoin the atoms in a CQ through commas (`,') instead of $\land$.} \[ q(x) = \exists y. \textsf{Cheap}(x), \textsf{Popular}(y), \textsf{near}(x,y). \] Since this query considers vague concepts and roles, we want to find answers that satisfy it to at least some degree. For the degree query $q(x)\ge 0.6$, the only possible answer is the \textsf{irish} pub. Suppose now that for us it is more important that the eatery is cheap than the popularity of the tourist attraction. For example, even though we are content with the tourist attraction being popular to only degree 0.6, the eatery should be cheap to a degree at least 0.8. This can be expressed through the threshold query \[ q'(x) = \exists y. \textsf{Cheap}(x)\ge 0.8, \textsf{Popular}(y)\ge 0.6, \textsf{near}(x,y)\ge 0.6. \] In this case, the TQ has no answers w.r.t.\ the ontology \ensuremath{\mathcal{O}}\xspace. However, any answer to $q'$ would also be an answer to $q(x)\ge 0.6$, as overall they define the same minimum over all the degrees of interest. Note that this last claim only holds for the case of the G\"odel semantics. Indeed, as we will see later in this paper, for other semantics degree queries are not properly special cases of TQs. \end{example} A class of conjunctive queries of special significance is that where the tuple of answer variables \ensuremath{\mathbf{x}}\xspace is empty. This means that the answer tuple of individuals provided as an answer must also be empty. In the classical setting, these are called \emph{Boolean queries}, because they can only return a Boolean value: true if there is a match for the existential variables in every model, and false otherwise. In the fuzzy setting, the set of answers to such a query will only contain one element $((),d)$. Thus, in that case, we are only interested in finding the degree $d$, and call those queries \emph{fuzzy queries}. This degree is the tightest value for which we can find a satisfying matching. Formally, the ontology \ensuremath{\mathcal{O}}\xspace \emph{entails} the fuzzy query $q()$ to degree $d$ iff $\ensuremath{\mathcal{O}}\xspace\models q()\ge d$ and $\ensuremath{\mathcal{O}}\xspace\not\models q()\ge d'$ for all $d'>d$. Fuzzy queries allow us to find the degree of a specific answer \ensuremath{\mathbf{a}}\xspace without having to compute $\ensuremath{\mathsf{ans}}\xspace(q(\ensuremath{\mathbf{x}}\xspace),\ensuremath{\mathcal{O}}\xspace)$: simply compute the degree of the fuzzy query $q(\ensuremath{\mathbf{a}}\xspace)$. In the case of threshold queries, we can also consider the special case where the answer tuple \ensuremath{\mathbf{x}}\xspace is empty. In that case, as in the classical case, the only possible answer is the empty tuple (if there is a match which satisfies the query) or no answer if no such match exists. For that reason, in the case of threshold queries without answer variables we preserve the classical terminology and call them Boolean (threshold) queries. \medskip As it is typically done for query answering in description logics, we consider two measures of complexity: \emph{data complexity}, where only the size of the ABox (and the candidate answer, if any) is considered as part of the input, and \emph{combined complexity} in which the size of the whole ontology (including the TBox) is taken into account.% \footnote{Note that our notion of \emph{combined complexity} does \emph{not} include the query as part of the input, but only the ontology. This view contrasts the usual database definition (and papers following it) where the combined complexity includes the query, but is in line with the terminology used in ontology-based query answering; e.g.~\cite{ACKZ09}. The motivation is to understand the influence of the knowledge in the complexity, abstracting from the query, which is already known to be a source of intractability already for databases. In the context of ontology-based query answering, combined complexity is typically only used in combination with simple fixed queries, which means that the query does not really have an important influence.} For data complexity, it is relevant to consider sub-linear complexity classes. In particular, we consider \textsc{AC}\ensuremath{^0}\xspace and \textsc{LogSpace}\xspace. For the full formal definitions, we refer the interested reader to \citeN{papa-complexity} and \citeN{BoSi90}. Here we only mention briefly that evaluation of FO-queries over a database is in \textsc{AC}\ensuremath{^0}\xspace on the size of the database \cite{Alice} and \textsc{AC}\ensuremath{^0}\xspace is strictly contained in \textsc{LogSpace}\xspace \cite{FuSS-MST84}. In classical \text{DL-Lite}\ensuremath{_R}\xspace, query answering w.r.t.\ an ontology is reduced to the standard problem of query answering over a database through a process known as query rewriting, and thus is in \textsc{AC}\ensuremath{^0}\xspace w.r.t.\ data complexity. The main idea is to include in the query all the information that is required by the TBox, in such a way that only assertions from the ABox need to be considered. In our running example, note that there is no assertion in the ABox \ensuremath{\mathcal{A}_\textsf{exa}}\xspace which explicitly mentions a tourist attraction. We only know that the two monuments and the three museums are tourist attractions thanks to the TBox. In this case, the query rewriting approach would take the query $q(x)=\textsf{TouristAttraction}(x)$ and transform it into the UCQ \[ \{ \textsf{TouristAttraction}(x), \quad \textsf{Museum}(x), \quad \textsf{Monument}(x) \} \] looking ``backwards'' over the axioms in the TBox. The answers of this UCQ over the ABox alone are exactly those of the original query over the whole ontology. As seen in this simple example, there are many possible choices to create the matches that comply with the TBox. Hence, this method results in a UCQ even if the original query is a simple CQ. At this point, the ABox is treated as a database, which suffices to find all the certain answers. Similarly, a special UCQ can be used to verify that the ontology is \emph{consistent}; that is, whether it is possible to build a model for this ontology. For the full details on how these query rewritings work in classical \text{DL-Lite}\ensuremath{_R}\xspace, see \cite{dl-lite}. In terms of combined complexity, consistency can be decided in polynomial time; in fact, it is \textsc{NLogSpace}\xspace-complete~\cite{ACKZ09}. \section{The Canonical Interpretation} A very useful tool for developing techniques for answering queries in \text{DL-Lite}\ensuremath{_R}\xspace is the canonical interpretation. We first show that the same idea can be extended (with the necessary modifications) to fuzzy ontologies, independently of the t-norm underlying its semantics. Let $\ensuremath{\mathcal{O}}\xspace=(\ensuremath{\mathcal{T}}\xspace,\ensuremath{\mathcal{A}}\xspace)$ be a \text{DL-Lite}\ensuremath{_R}\xspace ontology and assume w.l.o.g.\ that there are no axioms of the form $\left<\exists Q_1\sqsubseteq\exists Q_2,d\right>\in\ensuremath{\mathcal{T}}\xspace$; any such axiom can be substituted by the two axioms $\left<\exists Q_1\sqsubseteq A,1\right>,\left<A\sqsubseteq \exists Q_2,d\right>$ where $A$ is a new concept name not appearing in \ensuremath{\mathcal{T}}\xspace. The \emph{canonical interpretation} of \ensuremath{\mathcal{O}}\xspace is the interpretation $\ensuremath{{\mathcal{I}_\mathsf{can}}}\xspace(\ensuremath{\mathcal{O}}\xspace)=(\Delta^\ensuremath{{\mathcal{I}_\mathsf{can}}}\xspace,\cdot^\ensuremath{{\mathcal{I}_\mathsf{can}}}\xspace)$ over the domain $\Delta^\ensuremath{{\mathcal{I}_\mathsf{can}}}\xspace:=\ensuremath{N_I}\xspace\cup\ensuremath{N_N}\xspace$---where \ensuremath{N_N}\xspace is a countable set of \emph{constants}---obtained through the following (infinite) process. Starting from the \emph{empty} interpretation which sets $A^\ensuremath{{\mathcal{I}_\mathsf{can}}}\xspace(\delta)=0$ and $P^\ensuremath{{\mathcal{I}_\mathsf{can}}}\xspace(\delta,\eta)=0$ for every $A\in\ensuremath{N_C}\xspace, P\in\ensuremath{N_R}\xspace$ and $\delta,\eta\in\Delta^\ensuremath{{\mathcal{I}_\mathsf{can}}}\xspace$, exhaustively apply the following rules: \begin{enumerate}[label=\textbf{R\arabic*.}] \item\label{rule:r1} if $\left<A(a),d\right>\in\ensuremath{\mathcal{A}}\xspace$ and $A^\ensuremath{{\mathcal{I}_\mathsf{can}}}\xspace(a)< d$, then update the value $A^\ensuremath{{\mathcal{I}_\mathsf{can}}}\xspace(a):=d$; \item\label{rule:r2} if $\left<P(a,b),d\right>\in\ensuremath{\mathcal{A}}\xspace$ and $P^\ensuremath{{\mathcal{I}_\mathsf{can}}}\xspace(a,b)< d$, then update the value $P^\ensuremath{{\mathcal{I}_\mathsf{can}}}\xspace(a,b):=d$; \item\label{rule:r3} if $\left<A_1\sqsubseteq A_2,d\right>\in\ensuremath{\mathcal{T}}\xspace$ and $A_2^\ensuremath{{\mathcal{I}_\mathsf{can}}}\xspace(\delta)< A_1^\ensuremath{{\mathcal{I}_\mathsf{can}}}\xspace(\delta)\otimes d$, then update $A_2^\ensuremath{{\mathcal{I}_\mathsf{can}}}\xspace(\delta):=A_1^\ensuremath{{\mathcal{I}_\mathsf{can}}}\xspace(\delta)\otimes d$; \item\label{rule:r4} if $\left<A\sqsubseteq \exists P,d\right>\in\ensuremath{\mathcal{T}}\xspace$ and for every $\eta\in\Delta^\ensuremath{{\mathcal{I}_\mathsf{can}}}\xspace$, $P^\ensuremath{{\mathcal{I}_\mathsf{can}}}\xspace(\delta,\eta)<A^\ensuremath{{\mathcal{I}_\mathsf{can}}}\xspace(\delta)\otimes d$ holds, then select a fresh element $\eta_0$ such that $P^\ensuremath{{\mathcal{I}_\mathsf{can}}}\xspace(\delta,\eta_0)=0$ and update the value $P^\ensuremath{{\mathcal{I}_\mathsf{can}}}\xspace(\delta,\eta_0):=A^\ensuremath{{\mathcal{I}_\mathsf{can}}}\xspace(\delta)\otimes d$; \item\label{rule:r5} if $\left<A\sqsubseteq \exists P^-,d\right>\in\ensuremath{\mathcal{T}}\xspace$ and for every $\eta\in\Delta^\ensuremath{{\mathcal{I}_\mathsf{can}}}\xspace$ $P^\ensuremath{{\mathcal{I}_\mathsf{can}}}\xspace(\eta,\delta)<A^\ensuremath{{\mathcal{I}_\mathsf{can}}}\xspace(\delta)\otimes d$ holds, then select a fresh element $\eta_0$ such that $P^\ensuremath{{\mathcal{I}_\mathsf{can}}}\xspace(\eta_0,\delta)=0$ and update the value $P^\ensuremath{{\mathcal{I}_\mathsf{can}}}\xspace(\eta_0,\delta):=A^\ensuremath{{\mathcal{I}_\mathsf{can}}}\xspace(\delta)\otimes d$; \item\label{rule:r6} if $\left<\exists P\sqsubseteq A,d\right>\in \ensuremath{\mathcal{T}}\xspace$ and $\exists\eta\in\Delta^\ensuremath{{\mathcal{I}_\mathsf{can}}}\xspace$ such that $A^\ensuremath{{\mathcal{I}_\mathsf{can}}}\xspace(\delta)<P^\ensuremath{{\mathcal{I}_\mathsf{can}}}\xspace(\delta,\eta)\otimes d$, then update $A^\ensuremath{{\mathcal{I}_\mathsf{can}}}\xspace(\delta):=P^\ensuremath{{\mathcal{I}_\mathsf{can}}}\xspace(\delta,\eta)\otimes d$; \item\label{rule:r7} if $\left<\exists P^-\sqsubseteq A,d\right>\in \ensuremath{\mathcal{T}}\xspace$ and $\exists\eta\in\Delta^\ensuremath{{\mathcal{I}_\mathsf{can}}}\xspace$ such that $A^\ensuremath{{\mathcal{I}_\mathsf{can}}}\xspace(\delta)<P^\ensuremath{{\mathcal{I}_\mathsf{can}}}\xspace(\eta,\delta)\otimes d$, then update $A^\ensuremath{{\mathcal{I}_\mathsf{can}}}\xspace(\delta):=P^\ensuremath{{\mathcal{I}_\mathsf{can}}}\xspace(\eta,\delta)\otimes d$; \item\label{rule:r8} if $\left<Q_1\sqsubseteq Q_2,d\right>\in\ensuremath{\mathcal{T}}\xspace$ and $Q_2^\ensuremath{{\mathcal{I}_\mathsf{can}}}\xspace(\delta,\eta)<Q_1^\ensuremath{{\mathcal{I}_\mathsf{can}}}\xspace(\delta,\eta)\otimes d$, then update $Q_2^\ensuremath{{\mathcal{I}_\mathsf{can}}}\xspace(\delta,\eta)$ to the value $Q_1^\ensuremath{{\mathcal{I}_\mathsf{can}}}\xspace(\delta,\eta)\otimes d$. \end{enumerate} where the rules are applied in a fair manner; that is, an applicable rule is eventually triggered. The process of rule application is a monotone non-decreasing function, and as such has a least fixpoint, which is the canonical interpretation $\ensuremath{{\mathcal{I}_\mathsf{can}}}\xspace(\ensuremath{\mathcal{O}}\xspace)$.% \footnote{By Tarski's Theorem \cite{Tars-55}, this fixpoint is the limit of the (fair) application of the rules starting from the smallest element; in this case, the empty interpretation as described before.} Intuitively, $\ensuremath{{\mathcal{I}_\mathsf{can}}}\xspace(\ensuremath{\mathcal{O}}\xspace)$ should be a minimal model of \ensuremath{\mathcal{O}}\xspace, which describes the necessary conditions of all other models of \ensuremath{\mathcal{O}}\xspace. Indeed, the first two rules ensure that the conditions imposed by the ABox are satisfied, by setting the degrees of the unary and binary relations to the smallest required value. The remaining rules guarantee that all elements of the domain satisfy the positive axioms from the TBox, and each rule is as weak as possible in satisfying these constraints. The canonical interpretation of the ontology \ensuremath{\mathcal{O}_\textsf{exa}}\xspace from Example~\ref{exa:run} is depicted in Figure~\ref{fig:canonical}. \begin{figure} \includegraphics[width=\textwidth]{canonical} \caption{The canonical interpretation for the ontology \ensuremath{\mathcal{O}_\textsf{exa}}\xspace from our running example.} \label{fig:canonical} \end{figure} Note that in general it provides a lower membership degree of each individual to every concept when compared to the model from Figure~\ref{fig:model}. This intuition justifies the name of \emph{canonical} interpretation. As in the classical case, $\ensuremath{{\mathcal{I}_\mathsf{can}}}\xspace(\ensuremath{\mathcal{O}}\xspace)$ can be homomorphically embedded in every model of \ensuremath{\mathcal{O}}\xspace, and hence be used as a representative of them all. We show a similar result with the difference that in this case, the homomorphism needs to take into account the truth degrees from the interpretation function as well.% \footnote{A careful reader will notice that the exhaustive application of the rules may produce different interpretations. We discuss this issue in further detail later in this section. For now, it suffices to know that all the possible canonical interpretations are equivalent modulo homomorphisms.} This is described in the following proposition. \begin{proposition} \label{prop:min:can} Let \ensuremath{\mathcal{O}}\xspace be a consistent fuzzy \text{DL-Lite}\xspace ontology, $\ensuremath{\mathcal{I}}\xspace=(\Delta^\ensuremath{\mathcal{I}}\xspace,\cdot^\ensuremath{\mathcal{I}}\xspace)$ be a model of \ensuremath{\mathcal{O}}\xspace, and $\ensuremath{{\mathcal{I}_\mathsf{can}}}\xspace(\ensuremath{\mathcal{O}}\xspace)=(\Delta^\ensuremath{{\mathcal{I}_\mathsf{can}}}\xspace,\cdot^\ensuremath{{\mathcal{I}_\mathsf{can}}}\xspace)$ its canonical interpretation. There is a function $\psi$ from $\Delta^\ensuremath{{\mathcal{I}_\mathsf{can}}}\xspace$ to $\Delta^\ensuremath{\mathcal{I}}\xspace$ such that: \begin{enumerate} \item for each $A\in\ensuremath{N_C}\xspace$ and $\delta\in\Delta^\ensuremath{{\mathcal{I}_\mathsf{can}}}\xspace$, $A^\ensuremath{{\mathcal{I}_\mathsf{can}}}\xspace(\delta)\le A^\ensuremath{\mathcal{I}}\xspace(\psi(\delta))$; and \item for each $P\in\ensuremath{N_R}\xspace$ and $\delta,\eta\in\Delta^\ensuremath{{\mathcal{I}_\mathsf{can}}}\xspace$, $P^\ensuremath{{\mathcal{I}_\mathsf{can}}}\xspace(\delta,\eta)\le P^\ensuremath{\mathcal{I}}\xspace(\psi(\delta),\psi(\eta))$. \end{enumerate} \end{proposition} \begin{proof} Let $\ensuremath{\mathcal{O}}\xspace=(\ensuremath{\mathcal{T}}\xspace,\ensuremath{\mathcal{A}}\xspace)$. We construct the function $\psi$ recursively through the rule applications that define $A^\ensuremath{{\mathcal{I}_\mathsf{can}}}\xspace$, and show that the two properties from the proposition are invariant w.r.t.\ the rule applications. We define first $\psi(a)=a^\ensuremath{\mathcal{I}}\xspace$ for all $a\in \ensuremath{N_I}\xspace$. Recall that initially, $A^\ensuremath{{\mathcal{I}_\mathsf{can}}}\xspace(\delta)=P^\ensuremath{{\mathcal{I}_\mathsf{can}}}\xspace(\delta,\eta)=0$ for all $A\in \ensuremath{N_C}\xspace, P\in\ensuremath{N_R}\xspace, \delta,\eta\in\Delta^\ensuremath{{\mathcal{I}_\mathsf{can}}}\xspace$. Hence, the properties hold trivially in this case. Assume now that the properties hold before a rule application; we show that they also hold afterwards by a case analysis over the rule used: \begin{enumerate}[label=\textbf{R\arabic*.}] \item if $\left<A(a),d\right>\in\ensuremath{\mathcal{A}}\xspace$, since \ensuremath{\mathcal{I}}\xspace is a model of this axiom, it follows that $A^\ensuremath{\mathcal{I}}\xspace(a^\ensuremath{\mathcal{I}}\xspace)\ge d$. The rule application sets $A^\ensuremath{{\mathcal{I}_\mathsf{can}}}\xspace(a)=d$ and hence $A^\ensuremath{{\mathcal{I}_\mathsf{can}}}\xspace(a)\le A^\ensuremath{\mathcal{I}}\xspace(\psi(a))=A^\ensuremath{\mathcal{I}}\xspace(a^\ensuremath{\mathcal{I}}\xspace)$. \item if $\left<P(a,b),d\right>\in\ensuremath{\mathcal{A}}\xspace$, the rule application sets $P^\ensuremath{{\mathcal{I}_\mathsf{can}}}\xspace(a,b)=d$. Since \ensuremath{\mathcal{I}}\xspace satisfies this axiom, it follows that $P^\ensuremath{\mathcal{I}}\xspace(\psi(a),\psi(b))=P^\ensuremath{\mathcal{I}}\xspace(a^\ensuremath{\mathcal{I}}\xspace,b^\ensuremath{\mathcal{I}}\xspace)\ge d=P^\ensuremath{{\mathcal{I}_\mathsf{can}}}\xspace(a,b)$. \item if $\left<A_1\sqsubseteq A_2,d\right>\in\ensuremath{\mathcal{T}}\xspace$, the rule application over a given $\delta\in\Delta^\ensuremath{{\mathcal{I}_\mathsf{can}}}\xspace$ updates $A_2^\ensuremath{{\mathcal{I}_\mathsf{can}}}\xspace(\delta)$ to $A_1^\ensuremath{{\mathcal{I}_\mathsf{can}}}\xspace(\delta)\otimes d$. Since \ensuremath{\mathcal{I}}\xspace satisfies this axiom, by the induction hypothesis and monotonicity of $\otimes$ we know that $A_2^\ensuremath{\mathcal{I}}\xspace(\psi(\delta))\ge A_1^\ensuremath{\mathcal{I}}\xspace(\psi(\delta))\otimes d\ge A_1^\ensuremath{{\mathcal{I}_\mathsf{can}}}\xspace(\delta)\otimes d =A_2^\ensuremath{{\mathcal{I}_\mathsf{can}}}\xspace(\delta)$. \item if $\left<A\sqsubseteq \exists P,d\right>\in\ensuremath{\mathcal{T}}\xspace$, let $\delta$ be the element over which the rule is applicable, and $\eta_0$ the fresh element selected by the rule application. Since \ensuremath{\mathcal{I}}\xspace is a model, we know that there exists an element $\kappa\in\Delta^\ensuremath{\mathcal{I}}\xspace$ such that $P^\ensuremath{\mathcal{I}}\xspace(\psi(\delta),\kappa)\ge A^\ensuremath{\mathcal{I}}\xspace(\psi(\delta))\otimes d$. We thus define $\psi(\eta_0):=\kappa$. By the induction hypothesis and monotonicity of $\otimes$ we get $P^\ensuremath{\mathcal{I}}\xspace(\psi(\delta),\psi(\eta_0))P^\ensuremath{\mathcal{I}}\xspace(\psi(\delta),\kappa)\ge A^\ensuremath{\mathcal{I}}\xspace(\psi(\delta))\otimes d\ge A^\ensuremath{{\mathcal{I}_\mathsf{can}}}\xspace(\psi(\delta))\otimes d= P^\ensuremath{{\mathcal{I}_\mathsf{can}}}\xspace(\delta,\eta_0)$. \item if $\left<A\sqsubseteq \exists P^-,d\right>\in\ensuremath{\mathcal{T}}\xspace$, let $\delta$ be the element over which the rule is applicable, and $\eta_0$ the fresh element selected by the rule application. Since \ensuremath{\mathcal{I}}\xspace is a model, we know that there exists an element $\kappa\in\Delta^\ensuremath{\mathcal{I}}\xspace$ such that $P^\ensuremath{\mathcal{I}}\xspace(\kappa,\psi(\delta))\ge A^\ensuremath{\mathcal{I}}\xspace(\psi(\delta))\otimes d$. We thus define $\psi(\eta_0):=\kappa$. By the induction hypothesis and monotonicity of $\otimes$ we get $P^\ensuremath{\mathcal{I}}\xspace(\psi(\eta_0),\psi(\delta))P^\ensuremath{\mathcal{I}}\xspace(\kappa,\psi(\delta))\ge A^\ensuremath{\mathcal{I}}\xspace(\psi(\delta))\otimes d\ge A^\ensuremath{{\mathcal{I}_\mathsf{can}}}\xspace(\psi(\delta))\otimes d= P^\ensuremath{{\mathcal{I}_\mathsf{can}}}\xspace(\eta_0,\delta)$. \item if $\left<\exists P\sqsubseteq A,d\right>\in\ensuremath{\mathcal{T}}\xspace$, then for the chosen $\delta,\eta\in\Delta^\ensuremath{{\mathcal{I}_\mathsf{can}}}\xspace$ we have by the induction hypothesis that $A^\ensuremath{{\mathcal{I}_\mathsf{can}}}\xspace(\delta)=P^\ensuremath{{\mathcal{I}_\mathsf{can}}}\xspace(\delta,\eta)\otimes d\le P^\ensuremath{\mathcal{I}}\xspace(\psi(\delta),\psi(\eta))\otimes d \le A^\ensuremath{\mathcal{I}}\xspace(\psi(\delta))$. \item if $\left<\exists P\sqsubseteq A,d\right>\in\ensuremath{\mathcal{T}}\xspace$, then for the chosen $\delta,\eta\in\Delta^\ensuremath{{\mathcal{I}_\mathsf{can}}}\xspace$ we have by the induction hypothesis that $A^\ensuremath{{\mathcal{I}_\mathsf{can}}}\xspace(\delta)=P^\ensuremath{{\mathcal{I}_\mathsf{can}}}\xspace(\eta,\delta)\otimes d\le P^\ensuremath{\mathcal{I}}\xspace(\psi(\eta),\psi(\delta))\otimes d \le A^\ensuremath{\mathcal{I}}\xspace(\psi(\delta))$. \item if $\left<Q_1\sqsubseteq Q_2,d\right>\in\ensuremath{\mathcal{T}}\xspace$, the rule application over given $\delta,\eta\in\Delta^\ensuremath{{\mathcal{I}_\mathsf{can}}}\xspace$ updates $Q_2^\ensuremath{{\mathcal{I}_\mathsf{can}}}\xspace(\delta,\eta)$ to $Q_1^\ensuremath{{\mathcal{I}_\mathsf{can}}}\xspace(\delta,\eta)\otimes d$. Since \ensuremath{\mathcal{I}}\xspace satisfies this axiom, by the induction hypothesis and monotonicity of $\otimes$ we know that $$ Q_2^\ensuremath{\mathcal{I}}\xspace(\psi(\delta),\psi(\eta))\ge Q_1^\ensuremath{\mathcal{I}}\xspace(\psi(\delta),\psi(\eta))\otimes d\ge Q_1^\ensuremath{{\mathcal{I}_\mathsf{can}}}\xspace(\delta,\eta)\otimes d =Q_2^\ensuremath{{\mathcal{I}_\mathsf{can}}}\xspace(\delta,\eta). $$ \end{enumerate} Hence, the result holds after the fair application of all possible rules. \end{proof} Importantly, note that the construction of $\ensuremath{{\mathcal{I}_\mathsf{can}}}\xspace(\ensuremath{\mathcal{O}}\xspace)$ does not take the negations into account; e.g., the axiom \ax[0.5]{\exists\textsf{locIn}\sqsubseteq \neg\textsf{Cheap}} is never used during this construction. The effect of this is that $\ensuremath{{\mathcal{I}_\mathsf{can}}}\xspace(\ensuremath{\mathcal{O}}\xspace)$ might not be a model of \ensuremath{\mathcal{O}}\xspace at all. \begin{example} \label{exa:incons} Consider the fuzzy \text{DL-Lite}\ensuremath{_R}\xspace ontology $\ensuremath{\mathcal{O}_\textsf{exa}}\xspace=(\ensuremath{\mathcal{T}_0}\xspace,\ensuremath{\mathcal{A}_0}\xspace)$ where \begin{align*} \ensuremath{\mathcal{T}_0}\xspace:={} &\{\left<A_1\sqsubseteq\neg A_2,1\right>\}, \\ \ensuremath{\mathcal{A}_0}\xspace:={} &\{\left<A_1(a),0.5\right>,\left<A_2(a),0.5\right>\}. \end{align*} Under the G\"odel semantics, by application of the first rule, the canonical interpretation maps $A_1^\ensuremath{{\mathcal{I}_\mathsf{can}}}\xspace(a)=A_2^\ensuremath{{\mathcal{I}_\mathsf{can}}}\xspace(a)=0.5$. However, this violates the axiom in \ensuremath{\mathcal{T}_0}\xspace, which requires that $A_1^\ensuremath{{\mathcal{I}_\mathsf{can}}}\xspace(a)\Rightarrow\ominus A_2^\ensuremath{{\mathcal{I}_\mathsf{can}}}\xspace(a)=1$. That is, it requires that $A_1^\ensuremath{{\mathcal{I}_\mathsf{can}}}\xspace(a)<\ominus A_2^\ensuremath{{\mathcal{I}_\mathsf{can}}}\xspace(a)$, which is only possible when $A_1^\ensuremath{{\mathcal{I}_\mathsf{can}}}\xspace(a)=0$ or $A_2^\ensuremath{{\mathcal{I}_\mathsf{can}}}\xspace(a)=0$. Note that a similar phenomenon could be observed also in the TBox \ensuremath{\mathcal{T}_\textsf{exa}}\xspace of our running example, which contains an axiom with a negated concept. \end{example} The issue is that the negative axioms may introduce inconsistencies, by enforcing upper bounds in the degrees used, which are not verified by the canonical interpretation; recall, in fact, that the previously described construction monotonically increases the degrees to satisfy the minimal requirements, but never verifies whether these degrees affect some upper bound. On the other hand, we can prove that, as long as there is a model, $\ensuremath{{\mathcal{I}_\mathsf{can}}}\xspace(\ensuremath{\mathcal{O}}\xspace)$ is one. \begin{proposition} \label{prop:can:model} $\ensuremath{{\mathcal{I}_\mathsf{can}}}\xspace(\ensuremath{\mathcal{O}}\xspace)$ is a model of \ensuremath{\mathcal{O}}\xspace iff \ensuremath{\mathcal{O}}\xspace is consistent. \end{proposition} \begin{proof} The \emph{only if} direction is trivial, hence we focus on showing that if \ensuremath{\mathcal{O}}\xspace is consistent, then $\ensuremath{{\mathcal{I}_\mathsf{can}}}\xspace(\ensuremath{\mathcal{O}}\xspace)$ is a model of \ensuremath{\mathcal{O}}\xspace. Note first that, by construction, $\ensuremath{{\mathcal{I}_\mathsf{can}}}\xspace(\ensuremath{\mathcal{O}}\xspace)$ satisfies all positive axioms. Otherwise, a rule would trigger, and since the construction applies all rules fairly until exhaustion, no rule is applicable in the resulting interpreation $\ensuremath{{\mathcal{I}_\mathsf{can}}}\xspace(\ensuremath{\mathcal{O}}\xspace)$. Hence, if $\ensuremath{{\mathcal{I}_\mathsf{can}}}\xspace(\ensuremath{\mathcal{O}}\xspace)$ is not a model of \ensuremath{\mathcal{O}}\xspace, there must exist a negative axiom of the form (i) $\left<B\sqsubseteq \neg C,d\right>$ or (ii) $\left<Q\sqsubseteq \neg R,d\right>$ that is not satisfied by the canonical interpretation. We consider the case (i); the other case can be treated analogously. If $\ensuremath{{\mathcal{I}_\mathsf{can}}}\xspace(\ensuremath{\mathcal{O}}\xspace)\not\models\left<B\sqsubseteq \neg C,d\right>$, then there must exist an element $\delta\in\Delta^\ensuremath{{\mathcal{I}_\mathsf{can}}}\xspace$ such that $B^\ensuremath{{\mathcal{I}_\mathsf{can}}}\xspace(\delta)\Rightarrow (\neg C)^\ensuremath{{\mathcal{I}_\mathsf{can}}}\xspace(\delta)<d$ or, equivalently, $\ominus C^\ensuremath{{\mathcal{I}_\mathsf{can}}}\xspace(\delta)< B^\ensuremath{{\mathcal{I}_\mathsf{can}}}\xspace(\delta)\otimes d$. Since \ensuremath{\mathcal{O}}\xspace is consistent, there must exist a model $\ensuremath{\mathcal{I}}\xspace=(\Delta^\ensuremath{\mathcal{I}}\xspace,\cdot^\ensuremath{\mathcal{I}}\xspace)$ of \ensuremath{\mathcal{O}}\xspace. By Proposition \ref{prop:min:can}, there exists a function $\psi:\Delta^\ensuremath{{\mathcal{I}_\mathsf{can}}}\xspace\to \Delta^\ensuremath{\mathcal{I}}\xspace$ such that, in particular, $B^\ensuremath{{\mathcal{I}_\mathsf{can}}}\xspace(\delta)<B^\ensuremath{\mathcal{I}}\xspace(\psi(\delta))$ and $C^\ensuremath{{\mathcal{I}_\mathsf{can}}}\xspace(\delta)<C^\ensuremath{\mathcal{I}}\xspace(\psi(\delta))$. By antitonicity of $\ominus$, the latter means that $\ominus C^\ensuremath{\mathcal{I}}\xspace(\psi(\delta))\le \ominus C^\ensuremath{{\mathcal{I}_\mathsf{can}}}\xspace(\delta)$ and hence \[ \ominus C^\ensuremath{\mathcal{I}}\xspace(\psi(\delta))\le \ominus C^\ensuremath{{\mathcal{I}_\mathsf{can}}}\xspace(\delta) < B^\ensuremath{{\mathcal{I}_\mathsf{can}}}\xspace(\delta)\otimes d \le B^\ensuremath{\mathcal{I}}\xspace(\psi(\delta))\otimes d \] But this means that $\ensuremath{\mathcal{I}}\xspace\not\models \left<B\sqsubseteq \neg C,d\right>$, which contradicts the assumption that \ensuremath{\mathcal{I}}\xspace was a model of \ensuremath{\mathcal{O}}\xspace. \end{proof} It can be seen that the ontology \ensuremath{\mathcal{O}_0}\xspace from Example \ref{exa:incons} is inconsistent under the G\"odel semantics. On the other hand, under the \L ukasiewicz semantics, \ensuremath{\mathcal{O}_0}\xspace is in fact consistent which, by this proposition, means that $\ensuremath{{\mathcal{I}_\mathsf{can}}}\xspace(\ensuremath{\mathcal{O}}\xspace)$ is a model of this ontology. This is easily confirmed by recalling that the \L ukasiewicz negation is involutive; that is $\ominus d=1-d$. In the case of the example, we have $\ominus 0.5=0.5$; the axiom \ax{A_1\sqsubseteq \neg A_2} is satisfied because $0.5 = A_1^\ensuremath{{\mathcal{I}_\mathsf{can}}}\xspace(a) \le (\neg A_2)^\ensuremath{{\mathcal{I}_\mathsf{can}}}\xspace(a)=0.5$. The consequence of the last two propositions is that $\ensuremath{{\mathcal{I}_\mathsf{can}}}\xspace(\ensuremath{\mathcal{O}}\xspace)$ is complete for existential positive queries, and in particular for conjunctive queries and threshold queries. \begin{corollary} \label{cor:ican} If \ensuremath{\mathcal{O}}\xspace is a consistent fuzzy \text{DL-Lite}\ensuremath{_R}\xspace ontology, then \begin{enumerate} \item for every CQ $q(\ensuremath{\mathbf{x}}\xspace)$, answer tuple \ensuremath{\mathbf{a}}\xspace, and $d\in[0,1]$ it holds that $\ensuremath{\mathcal{O}}\xspace\models q(\ensuremath{\mathbf{a}}\xspace)\ge d$ iff $\ensuremath{{\mathcal{I}_\mathsf{can}}}\xspace(\ensuremath{\mathcal{O}}\xspace)\models q(\ensuremath{\mathbf{a}}\xspace)\ge d$; \item for every TQ $q(\ensuremath{\mathbf{x}}\xspace)$ and answer tuple \ensuremath{\mathbf{a}}\xspace, $\ensuremath{\mathcal{O}}\xspace\models q(\ensuremath{\mathbf{a}}\xspace)$ iff $\ensuremath{{\mathcal{I}_\mathsf{can}}}\xspace(\ensuremath{\mathcal{O}}\xspace)\models q(\ensuremath{\mathbf{a}}\xspace)$. \end{enumerate} \end{corollary} \begin{proof} Proposition \ref{prop:can:model} states that $\ensuremath{{\mathcal{I}_\mathsf{can}}}\xspace(\ensuremath{\mathcal{O}}\xspace)$ is a model; hence anything that does not follow from it cannot be an answer. On the other hand, Proposition \ref{prop:min:can} states that the degree of every atom in any model is at least the degree given by \ensuremath{{\mathcal{I}_\mathsf{can}}}\xspace, and hence if a tuple is an answer in \ensuremath{{\mathcal{I}_\mathsf{can}}}\xspace, it is also an answer in every other model. \end{proof} \subsection*{A short note on the canonical interpretation} Before delving deeper into the process of answering queries (the main contribution of this paper), it is worth considering the canonical interpretation in more detail, starting with the definite article used in its naming. Indeed, although we always speak about \emph{the} canonical interpretation, the actual structure produced is not necessarily unique, and depends on the order in which rules are chosen to apply, specially in relation to rules \textbf{R4} and \textbf{R5}, which introduce new relevant elements. This is highlighted in the following example. \begin{example} \label{exa:can:mult} Consider an ontology containing the axioms \begin{align*} \ensuremath{\mathcal{A}}\xspace := {} & \{ \left<A(a), 1\right>, \left<B(a), 1\right> \} \\ \ensuremath{\mathcal{T}}\xspace := {} & \{ \left<A\sqsubseteq \exists R, 0.3 \right>, \left<B\sqsubseteq \exists R, 0.5 \right> \} \end{align*} After applying the rule \textbf{R1} over the ABox axioms, we have an interpretation where $A^\ensuremath{{\mathcal{I}_\mathsf{can}}}\xspace(a)=B^\ensuremath{{\mathcal{I}_\mathsf{can}}}\xspace(a)=1$. At this point, rule \textbf{R4} is applicable for any of the two axioms in \ensuremath{\mathcal{T}}\xspace. If we first apply it to the first axiom, we select a fresh element, e.g.\ $\eta_0$ and set $R(a,\eta_0)=0.3$; at this point, the rule is still applicable to the second axiom. This application requires selecting a new fresh element (now $\eta_1$) and set $R(a,\eta_1)=0.5$. At this point, no rules are applicable and we have a canonical interpretation. If instead we chose first to apply the rule on the second axiom, we would choose a fresh element (say, $\eta_2$) and set $R(a,\eta_2)=0.5$. This application immediately disallows the application of the rule to the first axiom, and hence the process stops. \end{example} Note that the two interpretations built in this example are not equivalent (see Figure \ref{fig:can:cons}). \begin{figure}[tb] \includegraphics[width=\textwidth]{construction} \caption{Two canonical interpretation constructions from the ontology in Example \ref{exa:can:mult}. From the empty interpretation (a), \textbf{R1} is applied to each assertion to reach (c). One can either apply \textbf{R4} to $\left<A\sqsubseteq \exists R, 0.3 \right>$ and go through the upper branch (d) to build the interpretation (e); or to $\left<B\sqsubseteq \exists R, 0.5 \right>$ and obtain (f) directly.} \label{fig:can:cons} \end{figure} However, they are homomorphic in the sense specified by Proposition \ref{prop:min:can}. This is not a coincidence. In fact, note that the proofs of Propositions \ref{prop:min:can} and \ref{prop:can:model} do not depend on the order of rule applications, but only on the fact that these rules were exhaustively (and fairly) applied. If the ontology is consistent, by the latter proposition the interpretation obtained is a model, regardless of the order chosen, and by the former proposition, it is homomorphic to all the interpretations which can be derived following different application orderings. In other words, the canonical interpretation is unique \emph{up to homomorphism}. In the following, we disregard this issue and consider an arbitrary, but fixed, canonical interpretation as unique. \bigskip We now return to the issue of answering queries. Corollary \ref{cor:ican} states that these queries can be answered through the canonical interpretation. Obviously, such an approach is impractical; in fact, impossible, because it is an infinite model constructed through an infinitary process. Additionally, we still have the burden to prove that the ontology is consistent, which is a prerequisite for the use of Corollary \ref{cor:ican} to answer queries. Fortunately, for the G\"odel and product t-norms, we can resort to existing results from the literature for this latter task. \begin{definition}[classical version] Let $\ensuremath{\mathcal{O}}\xspace=(\ensuremath{\mathcal{T}}\xspace,\ensuremath{\mathcal{A}}\xspace)$ be a fuzzy \text{DL-Lite}\xspace ontology. The \emph{classical version} $\widehat\ensuremath{\mathcal{O}}\xspace$ of \ensuremath{\mathcal{O}}\xspace is defined by $\widehat\ensuremath{\mathcal{O}}\xspace:=(\widehat\ensuremath{\mathcal{T}}\xspace,\widehat\ensuremath{\mathcal{A}}\xspace)$, where \begin{align*} \widehat\ensuremath{\mathcal{T}}\xspace:={} & \{ B\sqsubseteq C \mid \left<B\sqsubseteq C,d\right>\in\ensuremath{\mathcal{T}}\xspace, d>0\} \cup \{ Q\sqsubseteq R \mid \left<Q\sqsubseteq R,d\right>\in\ensuremath{\mathcal{T}}\xspace, d>0\}, \\ \widehat\ensuremath{\mathcal{A}}\xspace:={} & \{ B(a) \mid \left<B(a),d\right>\in\ensuremath{\mathcal{T}}\xspace, d>0\} \cup \{ P(a,b) \mid \left<P(a,b),d\right>\in\ensuremath{\mathcal{T}}\xspace, d>0\}. \end{align*} \end{definition} That is, $\widehat\ensuremath{\mathcal{O}}\xspace$ contains all the axioms and assertions from \ensuremath{\mathcal{O}}\xspace which hold with a positive degree---note that any fuzzy axiom or assertion with degree 0 could be removed w.l.o.g.\ anyway. The following result is a direct consequence of work on more expressive fuzzy DLs \cite{BoDP-AIJ15}. \begin{proposition} \label{prop:reduc} Let \ensuremath{\mathcal{O}}\xspace be a G-\text{DL-Lite}\ensuremath{_R}\xspace or $\Pi$-\text{DL-Lite}\ensuremath{_R}\xspace ontology. Then \ensuremath{\mathcal{O}}\xspace is consistent iff $\widehat\ensuremath{\mathcal{O}}\xspace$ is consistent. \end{proposition} In those cases, consistency checking can be reduced to the classical case, without the need to modify the query or the basic formulation of the ontology. For the ontology \ensuremath{\mathcal{O}_0}\xspace in Example \ref{exa:incons}, we have $\widehat\ensuremath{\mathcal{O}_0}\xspace=(\{A_1\sqsubseteq \neg A_2\},\{A_1(a),A_2(a)\})$, which is inconsistent in the classical case, thus showing (through Proposition~\ref{prop:reduc}) that it is inconsistent under the G\"odel and product t-norm semantics. We note that the example also shows that Proposition \ref{prop:reduc} does not hold for the \L ukasiewicz t-norm, since we have established that \ensuremath{\mathcal{O}_0}\xspace is consistent under this semantics, although its classical version remains inconsistent under classical interpretations. A particular consequence of Proposition \ref{prop:reduc} is that deciding consistency of G-\text{DL-Lite}\ensuremath{_R}\xspace and $\Pi$-\text{DL-Lite}\ensuremath{_R}\xspace ontologies is in \textsc{AC}\ensuremath{^0}\xspace w.r.t.\ data complexity, and \textsc{NLogSpace}\xspace-complete w.r.t.\ combined complexity, where the \textsc{NLogSpace}\xspace lower bound comes from known results in classical \text{DL-Lite}\ensuremath{_R}\xspace \cite{ACKZ09}. Thus adding truth degrees does not affect the complexity of this basic reasoning task. We now turn our attention to the task of query answering with the different semantics, starting with the idempotent case of the G\"odel t-norm. We consider first the case of conjunctive queries, which allows for a simple solution, and then study threshold queries for which a rewriting technique is needed. Before studying how to answer queries over fuzzy \text{DL-Lite}\ensuremath{_R}\xspace ontologies and its complexity, we note that in the case that an ontology is classical---i.e., it uses only degree 1 in all its axioms---its canonical interpretation constructed as described in this section is equivalent to the classical canonical interpretation from \cite{dl-lite}. This fact will be used in the following sections. \section{Answering Conjunctive Queries over G\"odel Ontologies} For this and the following section, we are always considering the G\"odel t-norm as the underlying operator for interpreting all fuzzy statements, and in particular the conjunctive queries. The G\"odel semantics are very limited in their expressivity. On the one hand, we have seen that $\ominus d\in\{0,1\}$ for all $d\in[0,1]$. This means that whenever we have an axiom of the form $\left<B\sqsubseteq \neg B',d\right>$ or $\left<Q\sqsubseteq \neg Q',d\right>$ with $d>0$, we are in fact saying that for every element $\delta\in\Delta^\ensuremath{\mathcal{I}}\xspace$, if $B^\ensuremath{\mathcal{I}}\xspace(\delta)>0$, then $B'^\ensuremath{\mathcal{I}}\xspace(\delta)=0$---because in this case $\ominus B'^\ensuremath{\mathcal{I}}\xspace(\delta)=1$, which is the only possible way of satisfying the axiom. A similar argument holds for role axioms. Thus, for this section we can assume w.l.o.g.\ that all negative axioms hold with degree 1; i.e., they are of the form \ax{B\sqsubseteq \neg B'} or \ax{Q\sqsubseteq \neg Q'}. On the other hand, a positive axiom of the form $\left<B\sqsubseteq B',d\right>$ requires that for every $\delta\in\Delta^\ensuremath{\mathcal{I}}\xspace$, $B'^\ensuremath{\mathcal{I}}\xspace(\delta)\ge \min\{B^\ensuremath{\mathcal{I}}\xspace(\delta),d\}$. That is, the only way to guarantee that an atom gets a high degree is to use axioms with a high degree. We use these facts to reduce reasoning tasks in this setting to the classical \text{DL-Lite}\ensuremath{_R}\xspace scenario. Consider a consistent G-\text{DL-Lite}\ensuremath{_R}\xspace ontology \ensuremath{\mathcal{O}}\xspace. We can decide a lower bound for the degree of a CQ simply by querying a \emph{cut} of \ensuremath{\mathcal{O}}\xspace. \begin{definition}[cut ontology] Given a value $\theta\in(0,1]$, the \emph{$\theta$-cut} of the ontology \ensuremath{\mathcal{O}}\xspace is defined as the sub-ontology $\ensuremath{\mathcal{O}}\xspace_{\ge \theta}:=(\ensuremath{\mathcal{T}}\xspace_{\ge \theta},\ensuremath{\mathcal{A}}\xspace_{\ge \theta})$ where \begin{align*} \ensuremath{\mathcal{T}}\xspace_{\ge \theta}:={} & \{ \left<\gamma,e\right>\in\ensuremath{\mathcal{T}}\xspace \mid e\ge \theta\}, \\ \ensuremath{\mathcal{A}}\xspace_{\ge \theta}:={} & \{ \left<\alpha,e\right>\in\ensuremath{\mathcal{A}}\xspace \mid e\ge \theta\}. \end{align*} \end{definition} That is, $\ensuremath{\mathcal{O}}\xspace_{\ge \theta}$ is the subontology containing only the axioms and assertions that hold to degree at least $\theta$. To show that $\theta$-cuts suffice for answering queries, we use the canonical interpretation. Note that including new axioms or assertions to an ontology would result in an update of the canonical interpretation which only increases the degree of some of the elements of the domain. More precisely, if $\ensuremath{{\mathcal{I}_\mathsf{can}}}\xspace(\ensuremath{\mathcal{O}}\xspace)$ is the canonical interpretation of $\ensuremath{\mathcal{O}}\xspace=(\ensuremath{\mathcal{T}}\xspace,\ensuremath{\mathcal{A}}\xspace)$, then the canonical interpretation of $\ensuremath{\mathcal{O}}\xspace'=(\ensuremath{\mathcal{T}}\xspace\cup\{\left<B\sqsubseteq C,d\right>\},\ensuremath{\mathcal{A}}\xspace)$ is the result of applying the construction rules starting from $\ensuremath{{\mathcal{I}_\mathsf{can}}}\xspace(\ensuremath{\mathcal{O}}\xspace)$. This holds because the resulting canonical interpretation is not dependent on the order in which rules are applied (and hence axioms taken into account) as long as this is done fairly.% \footnote{Formally, different rule application orderings yield homomorphic interpretations. See the discussion in the previous section.} Since $\ensuremath{{\mathcal{I}_\mathsf{can}}}\xspace(\ensuremath{\mathcal{O}}\xspace)$ has already applied all the rules on axioms of \ensuremath{\mathcal{O}}\xspace exhaustively, the only remaining rule applications will be based on the new axiom $\left<B\sqsubseteq C,d\right>$ and new applications over \ensuremath{\mathcal{T}}\xspace arising from it. Under the G\"odel semantics, all the updates increase the interpretation function up to the value $d$; that is, if $\cdot^\ensuremath{{\mathcal{I}'_\mathsf{can}}}\xspace$ is the interpretation function of $\ensuremath{{\mathcal{I}_\mathsf{can}}}\xspace(\ensuremath{\mathcal{O}}\xspace')$, the difference between $\ensuremath{{\mathcal{I}_\mathsf{can}}}\xspace(\ensuremath{\mathcal{O}}\xspace)$ and $\ensuremath{{\mathcal{I}_\mathsf{can}}}\xspace(\ensuremath{\mathcal{O}}\xspace')$ is that there exist some elements such that $A^\ensuremath{{\mathcal{I}_\mathsf{can}}}\xspace(\delta)<A^\ensuremath{{\mathcal{I}'_\mathsf{can}}}\xspace(\delta)=d$, and similarly for roles there exist some pairs $\delta,\eta$ such that $P^\ensuremath{{\mathcal{I}_\mathsf{can}}}\xspace(\delta,\eta)<P^\ensuremath{{\mathcal{I}'_\mathsf{can}}}\xspace(\delta,\eta)=d$. For all others, the degrees remain unchanged. Moreover, if $d_0$ is the smallest degree appearing in the ontology \ensuremath{\mathcal{O}}\xspace, then its canonical interpretation uses only truth degrees in $\{0\}\cup[d_0,1]$; that is, no truth degree in $(0,d_0)$ appears in $\ensuremath{{\mathcal{I}_\mathsf{can}}}\xspace(\ensuremath{\mathcal{O}}\xspace)$. With these insights we are ready to produce our first results. Recall, once again, that for the rest of this section, we always consider that the semantics is based on the G\"odel t-norm; i.e., we have a G-\text{DL-Lite}\ensuremath{_R}\xspace ontology. \begin{lemma} \label{lem:cut} Let \ensuremath{\mathcal{O}}\xspace be a consistent G-\text{DL-Lite}\ensuremath{_R}\xspace ontology, $q(\ensuremath{\mathbf{x}}\xspace)$ a query, \ensuremath{\mathbf{a}}\xspace a tuple of individuals, and $\theta\in(0,1]$. Then $\ensuremath{\mathcal{O}}\xspace\models q(\ensuremath{\mathbf{a}}\xspace)\ge \theta$ iff $\ensuremath{\mathcal{O}}\xspace_{\ge \theta}\models q(\ensuremath{\mathbf{a}}\xspace)\ge \theta$. \end{lemma} \begin{proof} Since $\ensuremath{\mathcal{O}}\xspace_{\ge \theta}\subseteq \ensuremath{\mathcal{O}}\xspace$, every model of \ensuremath{\mathcal{O}}\xspace is also a model of $\ensuremath{\mathcal{O}}\xspace_{\ge \theta}$. Hence, if $\ensuremath{\mathcal{O}}\xspace_{\ge \theta}\models q(\ensuremath{\mathbf{a}}\xspace)\ge \theta$, then $\ensuremath{\mathcal{O}}\xspace\models q(\ensuremath{\mathbf{a}}\xspace)\ge \theta$. For the converse, assume that $\ensuremath{\mathcal{O}}\xspace_{\ge \theta}\not\models q(\ensuremath{\mathbf{a}}\xspace)\ge \theta$. By Corollary \ref{cor:ican}, this means that $\ensuremath{{\mathcal{I}_\mathsf{can}}}\xspace(\ensuremath{\mathcal{O}}\xspace_{\ge \theta})\not\models q(\ensuremath{\mathbf{a}}\xspace)\ge \theta$. That is, $q^\ensuremath{{\mathcal{I}_\mathsf{can}}}\xspace(\ensuremath{\mathbf{a}}\xspace^\ensuremath{{\mathcal{I}_\mathsf{can}}}\xspace)< \theta$. Let $\ensuremath{{\mathcal{I}_\mathsf{can}}}\xspace(\ensuremath{\mathcal{O}}\xspace)=(\Delta^\ensuremath{{\mathcal{I}'_\mathsf{can}}}\xspace,\cdot^\ensuremath{{\mathcal{I}'_\mathsf{can}}}\xspace)$ be the canonical interpretation of \ensuremath{\mathcal{O}}\xspace. Recall that the difference between \ensuremath{\mathcal{O}}\xspace and $\ensuremath{\mathcal{O}}\xspace_{\ge \theta}$ is that the former has some additional axioms with degrees smaller than $\theta$. As argued before, this means that the difference between $\ensuremath{{\mathcal{I}_\mathsf{can}}}\xspace(\ensuremath{\mathcal{O}}\xspace)$ and $\ensuremath{{\mathcal{I}_\mathsf{can}}}\xspace(\ensuremath{\mathcal{O}}\xspace_{\ge \theta})$ are just some degrees, which are all smaller than $\theta$; that is, for every $A\in\ensuremath{N_C}\xspace$, $P\in\ensuremath{N_R}\xspace$, and $\delta,\eta\in\Delta^\ensuremath{{\mathcal{I}'_\mathsf{can}}}\xspace$, if $A^\ensuremath{{\mathcal{I}'_\mathsf{can}}}\xspace(\delta)\ge \theta$, then $A^\ensuremath{{\mathcal{I}_\mathsf{can}}}\xspace(\delta)\ge \theta$ and if $P^\ensuremath{{\mathcal{I}'_\mathsf{can}}}\xspace(\delta,\eta)\ge \theta$, then $P^\ensuremath{{\mathcal{I}_\mathsf{can}}}\xspace(\delta,\eta)\ge \theta$. By assumption, this means that $q^\ensuremath{{\mathcal{I}'_\mathsf{can}}}\xspace(\ensuremath{\mathbf{a}}\xspace^\ensuremath{{\mathcal{I}'_\mathsf{can}}}\xspace)<\theta$, and hence $\ensuremath{{\mathcal{I}_\mathsf{can}}}\xspace(\ensuremath{\mathcal{O}}\xspace)\not\models q(\ensuremath{\mathbf{a}}\xspace)\ge \theta$. Thus, $\ensuremath{\mathcal{O}}\xspace\not\models q(\ensuremath{\mathbf{a}}\xspace)\ge \theta$. \end{proof} What this lemma states is that in order to find a lower bound for the degree of a query, one can ignore all the axioms and assertions that provide a smaller degree than the bound we are interested in. However, one still needs to answer a query for a fuzzy ontology ($\ensuremath{\mathcal{O}}\xspace_{\ge \theta}$ is still fuzzy), for which we still do not have any effective method. The following lemma solves this issue, considering the classical version of this ontology. \begin{lemma} \label{lem:class} Let \ensuremath{\mathcal{O}}\xspace be a consistent G-\text{DL-Lite}\ensuremath{_R}\xspace ontology such that $\ensuremath{\mathcal{O}}\xspace_{\ge \theta}=\ensuremath{\mathcal{O}}\xspace$ for some $\theta>0$. Then, $\ensuremath{\mathcal{O}}\xspace\models q(\ensuremath{\mathbf{a}}\xspace)\ge \theta$ iff $\widehat\ensuremath{\mathcal{O}}\xspace\models q(\ensuremath{\mathbf{a}}\xspace)$. \end{lemma} \begin{proof} Every model of $\widehat\ensuremath{\mathcal{O}}\xspace$ is also a model of \ensuremath{\mathcal{O}}\xspace, with the additional property that the interpretation function maps all elements to $\{0,1\}$. If $\ensuremath{\mathcal{O}}\xspace\models q(\ensuremath{\mathbf{a}}\xspace)\ge \theta>0$, then for every model \ensuremath{\mathcal{I}}\xspace of $\widehat\ensuremath{\mathcal{O}}\xspace$ it holds that $q^\ensuremath{\mathcal{I}}\xspace(\ensuremath{\mathbf{a}}\xspace^\ensuremath{\mathcal{I}}\xspace)\ge \theta>0$, and thus $q^\ensuremath{\mathcal{I}}\xspace(\ensuremath{\mathbf{a}}\xspace^\ensuremath{\mathcal{I}}\xspace)=1$, which means that $\widehat\ensuremath{\mathcal{O}}\xspace\models q(\ensuremath{\mathbf{a}}\xspace)$. Conversely, if $\widehat\ensuremath{\mathcal{O}}\xspace\models q(\ensuremath{\mathbf{a}}\xspace)$, the canonical interpretation $\ensuremath{{\mathcal{I}_\mathsf{can}}}\xspace(\ensuremath{\mathcal{O}}\xspace)$ must be such that $q^\ensuremath{{\mathcal{I}_\mathsf{can}}}\xspace(\ensuremath{\mathbf{a}}\xspace^\ensuremath{{\mathcal{I}_\mathsf{can}}}\xspace)>0$; but as argued before, since \ensuremath{\mathcal{O}}\xspace only has axioms and assertions with degrees $\ge \theta$, it must be the case that all degrees of $\ensuremath{{\mathcal{I}_\mathsf{can}}}\xspace(\ensuremath{\mathcal{O}}\xspace)$ are in $\{0\}\cup[\theta,1]$, and hence $q^\ensuremath{{\mathcal{I}_\mathsf{can}}}\xspace(\ensuremath{\mathbf{a}}\xspace^\ensuremath{{\mathcal{I}_\mathsf{can}}}\xspace)\ge \theta$. This implies, by Corollary \ref{cor:ican} that $\ensuremath{\mathcal{O}}\xspace\models q(\ensuremath{\mathbf{a}}\xspace)\ge \theta$. \end{proof} Note that the condition of this lemma, which requires that $\ensuremath{\mathcal{O}}\xspace_{\ge \theta}=\ensuremath{\mathcal{O}}\xspace$, is only stating that all the degrees in the ontology \ensuremath{\mathcal{O}}\xspace are at least $\theta$. That condition is immediately satisfied by a cut ontology, and hence the lemma can be applied directly to it. Lemmas~\ref{lem:cut} and~\ref{lem:class} together provide a method for reducing answering degree queries over G\mbox{-}\text{DL-Lite}\ensuremath{_R}\xspace ontologies to query answering in classical \text{DL-Lite}\ensuremath{_R}\xspace. \begin{theorem} \label{thm:reduct} If \ensuremath{\mathcal{O}}\xspace is a consistent G-\text{DL-Lite}\ensuremath{_R}\xspace ontology and $\theta>0$, then it holds that $\ensuremath{\mathcal{O}}\xspace\models q(\ensuremath{\mathbf{a}}\xspace)\ge \theta$ iff $\widehat\ensuremath{\mathcal{O}}\xspace_{\ge \theta}\models q(\ensuremath{\mathbf{a}}\xspace)$. \end{theorem} This means that we can use a standard ontology-based query answering system to answer fuzzy queries in \text{DL-Lite}\ensuremath{_R}\xspace as well. Note that the approach proposed by Theorem \ref{thm:reduct} can only decide whether the degree of an answer to a query is at least $\theta$, but it needs the value $\theta\in (0,1]$ as a parameter. If, instead, we are interested in computing the degree of an answer, or $\ensuremath{\mathsf{ans}}\xspace(q(\ensuremath{\mathbf{x}}\xspace),\ensuremath{\mathcal{O}}\xspace)$, we can still use a classical query answering method as an underlying black-box aid as described next. Since the TBox \ensuremath{\mathcal{T}}\xspace and the ABox \ensuremath{\mathcal{A}}\xspace which compose the ontology \ensuremath{\mathcal{O}}\xspace are both finite, the set $\ensuremath{\mathcal{D}}\xspace:=\{d\mid \left<\alpha,d\right>\in\ensuremath{\mathcal{T}}\xspace\cup\ensuremath{\mathcal{A}}\xspace\}$ of degrees appearing in the ontology is also finite; in fact, its size is bounded by the size of \ensuremath{\mathcal{O}}\xspace. Hence, we can assume that \ensuremath{\mathcal{D}}\xspace is of the form $\ensuremath{\mathcal{D}}\xspace=\{d_0,d_1,\ldots,d_n,d_{n+1}\}$ where $d_0\ge 0,d_{n+1}=1$ and for all $i,0\le i\le n$, $d_{i}<d_{i+1}$. In order to find the degree of an answer \ensuremath{\mathbf{a}}\xspace to a query $q$, we proceed as follows: starting from $i:=n+1$, we iteratively ask the query $\ensuremath{\mathcal{O}}\xspace_{\ge d_i}\models q(\ensuremath{\mathbf{a}}\xspace)$ and decrease $i$ until the query is answered affirmatively, or $i$ becomes 0 (see Algorithm \ref{alg:degree}). \begin{algorithm}[tb] \DontPrintSemicolon \KwData{Ontology \ensuremath{\mathcal{O}}\xspace, query $q$, answer \ensuremath{\mathbf{a}}\xspace, $\ensuremath{\mathcal{D}}\xspace=\{d_0,d_1,\ldots,d_{n+1}\}$} \KwResult{The degree of $q(\ensuremath{\mathbf{a}}\xspace)$ w.r.t.\ \ensuremath{\mathcal{O}}\xspace} $i\gets n+1$ \; $\ensuremath{\mathcal{N}}\xspace\gets\widehat\ensuremath{\mathcal{O}}\xspace_{\ge 1}$ \; \While{$\ensuremath{\mathcal{N}}\xspace\not\models q(\ensuremath{\mathbf{a}}\xspace)$ \textbf{and} $i>0$ }{ $i \gets i-1$ \; $\ensuremath{\mathcal{N}}\xspace\gets\widehat\ensuremath{\mathcal{O}}\xspace_{\ge d_i}$ \; } \Return $d_i$ \; \caption{Compute the degree of an answer to a query} \label{alg:degree} \end{algorithm} In the former case, $d_i$ is the degree for $q(\ensuremath{\mathbf{a}}\xspace)$; in the latter, the degree is 0---i.e., \ensuremath{\mathbf{a}}\xspace is not an answer of $q$.% \footnote{Note that the algorithm can be made more efficient using a binary search, instead of a linear decrease of available degrees. We chose this presentation to provide a clear association with Corollary \ref{cor:logspace}.} During the execution of this algorithm, each classical query needed at line 3 can be executed in \textsc{AC}\ensuremath{^0}\xspace (and in particular in \textsc{LogSpace}\xspace) in the size of the data; i.e., the ABox as shown in \cite{ACKZ09}. The iterations in the loop do not affect the overall space used, as one can simply produce a new query every time and clean up the previous information. Overall, this means that the degree of an answer can be computed in \textsc{LogSpace}\xspace in data complexity, using a classical query answering engine. \begin{corollary} The degree of an answer \ensuremath{\mathbf{a}}\xspace to a query $q$ w.r.t.\ the G-\text{DL-Lite}\ensuremath{_R}\xspace ontology \ensuremath{\mathcal{O}}\xspace is computable in logarithmic space w.r.t.\ the size of the ABox (i.e., in data complexity). \end{corollary} We will later see that this upper bound can indeed be reduced to \textsc{AC}\ensuremath{^0}\xspace by seeing a degree query as a special case of a threshold query. However, the method that provides a tight complexity bound requires a new implementation of the rewriting approach, with all its associated optimizations, in contrast to the method from Algorithm~\ref{alg:degree}, which can simply call any existing classical tool; e.g. \cite{GLL+ORE12,CCG+SEBD15}. Computing the whole set of pairs $\ensuremath{\mathsf{ans}}\xspace(q(\ensuremath{\mathbf{x}}\xspace),\ensuremath{\mathcal{O}}\xspace)$ is a more complex task. Although we can follow an approach similar to Algorithm \ref{alg:degree}, where the answers to $q(\ensuremath{\mathbf{x}}\xspace)$ are computed for each ontology $\widehat\ensuremath{\mathcal{O}}\xspace_{\ge d_i}$, in order to assign the appropriate degree to each answer, we need to either keep track of all the answers found so far, or add a negated query which excludes the answers with a higher degree. In both cases, we require a different approach and a potential larger use of memory. On the other hand, the whole set of answers $\ensuremath{\mathsf{ans}}\xspace(q(\ensuremath{\mathbf{x}}\xspace),\ensuremath{\mathcal{O}}\xspace)$ will usually contain many answers that hold with a very low degree, which may not be of much interest to the user making the query. When dealing with degrees, a more meaningful task is to find the $k$ answers with the highest degree, for some natural number $k$; i.e., the \emph{top-$k$ answers} of $q$. Algorithm \ref{alg:degree} once again suggests a way to compute the top-$k$ answers. As in the algorithm, one starts with the highest possible degree, and expands the classical ontology by including the axioms and assertions with a lower degree. The difference is that one stops now when the query returns at least $k$ tuples as answers. At that point, the tuples found are those with the highest degree for the query. As before, each of these queries can be answered in \textsc{AC}\ensuremath{^0}\xspace in data complexity, which yields a \textsc{LogSpace}\xspace upper bound for answering top-$k$ queries in data complexity. \begin{corollary} \label{cor:logspace} Top-$k$ queries over consistent G-\text{DL-Lite}\ensuremath{_R}\xspace ontologies can be answered in logarithmic space w.r.t.\ the size of the ABox. \end{corollary} \section{Threshold Queries over G\"odel Semantics} \label{sec:tq} We now turn our attention to the case of threshold queries, but keeping the assumption of the G\"odel semantics in place. The first thing to notice when considering threshold queries is that the simple approach developed in the previous section, where one calls a classical query answering engine over a cut subontology, cannot work. Indeed, as each atom needs to be satisfied potentially to a different degree, there is no one cut that can suffice to answer them all. Indeed, we have already seen a TQ in Example~\ref{exa:queries} which has no answers even though the natural cut ontology provides one answer. When considering Boolean threshold queries, it may be tempting to simply try to verify each threshold atom separatedly through a cut ontology. However, such an approach is not sound due to the existentially quantified variables which need to be associated to a (potentially anonymous) individual. This problem is not new, as it arises already for conjunctive queries over classical databases. To answer these queries, we will adapt the query rewriting technique from the classical setting. The underlying idea is essentially the same, as described previously in this paper, where an atom $B(x)$ may be substituted by an atom $C(x)$ if the TBox contains the axiom $C\sqsubseteq B$. However, one has to be careful with the degrees used. In fact, some axioms may not be applied during the rewriting, if their degree is not large enough. \begin{example} Consider once again the TBox \ensuremath{\mathcal{T}_\textsf{exa}}\xspace from Example~\ref{exa:run}, and suppose that we are interested in finding all popular attractions, up to a given degree $d\in[0,1]$; that is, we have the query $q(x)=\textsf{Popular}(x)\ge d$. The TBox contains the axiom \ax[0.6]{\textsf{Museum}\sqsubseteq \textsf{Popular}}. This means that answers to $q(x)$ w.r.t.\ this TBox should also include the answers to $\textsf{Museum}(x)$, but this depends on the value of $d$, as we explain next. Suppose that $d>0.6$; e.g., if we have $q(x)=\textsf{Popular}(x)\ge 0.7$. For an individual $a$ and any model \ensuremath{\mathcal{I}}\xspace of the ontology, we have no guarantee that $\textsf{Popular}^\ensuremath{\mathcal{I}}\xspace(a^\ensuremath{\mathcal{I}}\xspace)\ge 0.7$ regardless of the degree of $\textsf{Museum}^\ensuremath{\mathcal{I}}\xspace(a^\ensuremath{\mathcal{I}}\xspace)$. Indeed, even if $\textsf{Museum}^\ensuremath{\mathcal{I}}\xspace(a^\ensuremath{\mathcal{I}}\xspace)=1$, the only thing that can be guaranteed is that the degree of $a$ belonging to \textsf{Popular} is at least $0.6$, which does not suffice to become a positive answer to the query. Hence, there is no need to include \textsf{Museum} in the rewriting. Suppose now that $d\le 0.6$; e.g., with the query $q(x)=\textsf{Popular}(x)\ge 0.5$. In this case, we note that every individual $a$ such that $\textsf{Museum}^\ensuremath{\mathcal{I}}\xspace(a^\ensuremath{\mathcal{I}}\xspace)\ge 0.5$ must satisfy also that $\textsf{Popular}^\ensuremath{\mathcal{I}}\xspace(a^\ensuremath{\mathcal{I}}\xspace)\ge 0.5$. Indeed, recall that under the G\"odel semantics, $f\Rightarrow e$ is either $e$ if $e\le f$ or $1$ otherwise. Since \ensuremath{\mathcal{I}}\xspace satisfies the axiom \ax[0.6]{\textsf{Museum}\sqsubseteq \textsf{Popular}}, whenever $f=\textsf{Museum}^\ensuremath{\mathcal{I}}\xspace(a^\ensuremath{\mathcal{I}}\xspace)\ge 0.5$ holds, we know that the degree $e$ of $\textsf{Popular}^\ensuremath{\mathcal{I}}\xspace(a^\ensuremath{\mathcal{I}}\xspace)$ must be such that $f\Rightarrow e\ge 0.6$. If $e\le f$, this can only be true if $e\ge 0.6>0.5$. Otherwise, we know that $e>f\ge 0.5$, and hence any individual belonging to the concept \textsf{Museum} to degree at least $0.5$ is an answer to the query $q(x)$. \end{example} This example shows that during the rewriting process, we only need to consider the axioms that hold to a degree greater than the threshold of the current atom of interest. During the rewriting step, the original threshold is preserved regardless of the bound from the axioms. We now proceed to describe the rewriting process in detail, following the ideas developed originally for classical \text{DL-Lite}\ensuremath{_R}\xspace and other members of the \text{DL-Lite}\xspace family through the \textsf{PerfectRef} algorithm. To aid understanding from readers knowledgeable with the original method, we preserve as much of the terminology from~\cite{dl-lite} as possible. From now on, in a query $q(\ensuremath{\mathbf{x}}\xspace)=\varphi(\ensuremath{\mathbf{x}}\xspace,\ensuremath{\mathbf{y}}\xspace)$, we call all the variables in \ensuremath{\mathbf{x}}\xspace \emph{distinguished}, and any variable that appears at least twice within a query \emph{shared}. Note that there is no need to keep track of the variables that are not distinguished nor shared (from now on, called undistinguished, unshared variables); it is only relevant that they can be adequately assigned a value. Hence, those variables will be denoted by an underscore (`\rule{2mm}{1pt}\xspace'), and use $y=\rule{2mm}{1pt}\xspace$ to express that $y$ is one such variable. \begin{definition}[applicability] An axiom $\alpha$ is \emph{applicable} to the threshold atom $A(x)\ge d$ iff $\alpha$ is of the form \ax[e]{C\sqsubseteq A} and $d\le e$. It is \emph{applicable} to the threshold atom $P(x_1,x_2)$ iff either (i) $x_2=\rule{2mm}{1pt}\xspace$ and $\alpha$ is of the form \ax[e]{C\sqsubseteq \exists P} with $d\le e$; (ii) $x_1=\rule{2mm}{1pt}\xspace$ and $\alpha$ is of the form \ax[e]{C\sqsubseteq\exists P^-}; or (iii) $\alpha$ is of the form \ax[e]{Q\sqsubseteq P} or \ax[e]{Q\sqsubseteq P^-} with $d\le e$. If $\alpha$ is applicable to the threshold atom $\gamma$, the \emph{result} of the application is the atom $gr(\gamma,\alpha)$ defined through the rules in Figure~\ref{fig:rules}. \begin{figure} \begin{itemize} \item If $\gamma=A(x)\ge d$ and $\alpha=\ax[e]{A_1\sqsubseteq A}$, then $gr(\gamma,\alpha)=A_1(x)\ge d$ \item If $\gamma=A(x)\ge d$ and $\alpha=\ax[e]{\exists P\sqsubseteq A}$, then $gr(\gamma,\alpha)=P(x,\rule{2mm}{1pt}\xspace)\ge d$ \item If $\gamma=A(x)\ge d$ and $\alpha=\ax[e]{\exists P^-\sqsubseteq A}$, then $gr(\gamma,\alpha)=P(\rule{2mm}{1pt}\xspace,x)\ge d$ \item If $\gamma=P(x,\rule{2mm}{1pt}\xspace)\ge d$ and $\alpha=\ax[e]{A\sqsubseteq \exists P}$, then $gr(\gamma,\alpha)=A(x)\ge d$ \item If $\gamma=P(x,\rule{2mm}{1pt}\xspace)\ge d$ and $\alpha=\ax[e]{\exists P_1\sqsubseteq \exists P}$, then $gr(\gamma,\alpha)=P_1(x,\rule{2mm}{1pt}\xspace)\ge d$ \item If $\gamma=P(x,\rule{2mm}{1pt}\xspace)\ge d$ and $\alpha=\ax[e]{\exists P_1^-\sqsubseteq \exists P}$, then $gr(\gamma,\alpha)=P_1(\rule{2mm}{1pt}\xspace,x)\ge d$ \item If $\gamma=P(\rule{2mm}{1pt}\xspace,x)\ge d$ and $\alpha=\ax[e]{A\sqsubseteq \exists P^-}$, then $gr(\gamma,\alpha)=A(x)\ge d$ \item If $\gamma=P(\rule{2mm}{1pt}\xspace,x)\ge d$ and $\alpha=\ax[e]{\exists P_1\sqsubseteq \exists P^-}$, then $gr(\gamma,\alpha)=P_1(x,\rule{2mm}{1pt}\xspace)\ge d$ \item If $\gamma=P(\rule{2mm}{1pt}\xspace,x)\ge d$ and $\alpha=\ax[e]{\exists P_1^-\sqsubseteq \exists P^-}$, then $gr(\gamma,\alpha)=P_1(\rule{2mm}{1pt}\xspace,x)\ge d$ \item If $\gamma=P(x_1,x_2)\ge d$ and $\alpha\in\{\ax[e]{P_1\sqsubseteq P},\ax[e]{P_1^-\sqsubseteq P^-}\}$ then $gr(\gamma,\alpha)=P_1(x_1,x_2)\ge d$ \item If $\gamma=P(x_1,x_2)\ge d$ and $\alpha\in\{\ax[e]{P_1\sqsubseteq P^-},\ax[e]{P_1^-\sqsubseteq P}\}$ then $gr(\gamma,\alpha)=P_1(x_2,x_1)\ge d$ \end{itemize} \caption{The result $gr(\gamma,\alpha)$ of applying the axiom $\alpha$ to the threshold atom $\gamma$.} \label{fig:rules} \end{figure} \end{definition} The \textsf{PerfectRef} algorithm constructs a union of threshold queries by iteratively substituting atoms $\gamma$ for which an axiom $\alpha$ is applicable, with the result $gr(\gamma,\alpha)$ of the application. This follows the idea of tracing backwards the axioms in order to absorb the TBox into the query which was previously outlined. The pseudocode for \textsf{PerfectRef} is more formally described in Algorithm~\ref{alg:pr}. In the algorithm, $q[\gamma,\eta]$ is the query resulting from substituting in $q$ the atom $\gamma$ with the atom $\eta$. The function $reduce(p,\gamma_1,\gamma_2)$ called in line \ref{alg:pr:red} simply returns the query obtained by applying the most general unifier between $\gamma_1$ and $\gamma_2$ to $p$. For unification, all nondistinguished, unshared variables are considered different. For simplicity, we always assume that all nondistinguished, unshared variables are known, and hence call them \rule{2mm}{1pt}\xspace when testing applicability. \begin{algorithm}[tb] \DontPrintSemicolon \KwData{Threshold query $q$, G-\text{DL-Lite}\ensuremath{_R}\xspace TBox \ensuremath{\mathcal{T}}\xspace} \KwResult{Union of threshold queries $T$} $T\gets \{q\}$ \; \Repeat{$T'=T$}{ $T' \gets T$ \; \For{\textbf{each} $p\in T'$}{ \For{\textbf{each} $\gamma\in p$, and \textbf{each} $\alpha\in\ensuremath{\mathcal{T}}\xspace$}{ \If{$\alpha$ is applicable to $\gamma$}{ $T \gets T\cup \{p[\gamma/gr(\gamma,\alpha)]\}$\; } } \For{\textbf{each} $\gamma_1,\gamma_2\in p$}{ \If{$\gamma_1$ and $\gamma_2$ unify}{ $T\gets T\cup\{reduce(p,\gamma_1,\gamma_2)\}$ \; \label{alg:pr:red} } } } } \Return $T$ \; \caption{\textsf{PerfectRef}} \label{alg:pr} \end{algorithm} Note that, just as in the classical case, the application of the $reduce$ function is necessary to guarantee correctness of the rewriting. Specifically, a variable that is bound in a query $p$ may become unbound after the unification process, which may allow more axioms to be applied for the rewriting. Once again, the algorithm takes as input a threshold query $q$, and returns a union of threshold queries $T$ which is constructed by taking into account the information from the TBox \ensuremath{\mathcal{T}}\xspace. The importance of this rewriting is that at this point, the answers to the original query $q$ w.r.t.\ an ontology $\ensuremath{\mathcal{O}}\xspace=(\ensuremath{\mathcal{T}}\xspace,\ensuremath{\mathcal{A}}\xspace)$ can be obtained by applying the query $T$ to the ABox \ensuremath{\mathcal{A}}\xspace, seen as a standard database. Let $db(\ensuremath{\mathcal{A}}\xspace)$ be the ABox \ensuremath{\mathcal{A}}\xspace seen as a database. Note that since we have fuzzy assertions, the database will contain binary relations (representing concept assertions) and ternary relations (representing the role assertions), where the last element of the relation is the degree; a number in the interval $[0,1]$. Under this view, a threshold query can also be seen as a conjunctive query, taking into account the inequalities in the selection. Given a union of threshold queries $T$, $UCQ(T)$ denotes the fact that $T$ is being read as a UCQ in this sense. Given an ABox \ensuremath{\mathcal{A}}\xspace and a union of TQs $T$, we denote by $\ensuremath{\mathsf{ans}}\xspace(db(\ensuremath{\mathcal{A}}\xspace),UCQ(T))$ the set of answers to $T$ w.r.t.\ $db(\ensuremath{\mathcal{A}}\xspace)$ from a database perspective. We also denote by $\ensuremath{\mathsf{ans}}\xspace(q,\ensuremath{\mathcal{O}}\xspace)$ the set of answers to the TQ $q$ w.r.t.\ the ontology \ensuremath{\mathcal{O}}\xspace. We then obtain the following result. \begin{theorem} \label{thm:rewriting} Let $\ensuremath{\mathcal{O}}\xspace=(\ensuremath{\mathcal{T}}\xspace,\ensuremath{\mathcal{A}}\xspace)$ be a consistent G-\text{DL-Lite}\ensuremath{_R}\xspace ontology, $q$ a TQ, and $T$ the union of TQs obtained through the rewriting. Then $\ensuremath{\mathsf{ans}}\xspace(q,\ensuremath{\mathcal{O}}\xspace)=\ensuremath{\mathsf{ans}}\xspace(db(\ensuremath{\mathcal{A}}\xspace),UCQ(T))$. \end{theorem} A consequence of Theorem~\ref{thm:rewriting} is that, in terms of data complexity, answering a TQ w.r.t.\ a \text{DL-Lite}\ensuremath{_R}\xspace ontology is at most as costly as answering a CQ over a database. Indeed, note that althought the query $q$ is transformed into a larger UCQ, the data itself remains unchanged. This yields the following result. \begin{theorem} Answering threshold queries w.r.t.\ consistent G-\text{DL-Lite}\ensuremath{_R}\xspace ontologies is in \textsc{AC}\ensuremath{^0}\xspace w.r.t.\ data complexity. \end{theorem} Before finishing this section, we return to a question on complexity left open in the previous section; namely, the precise complexity of finding the degree of an answer to a conjunctive query. To answer this question, we first note that under the G\"odel semantics, we can always see a degree query as a special case of a threshold query. Given a CQ $q$, let $\ensuremath{\mathsf{At}}\xspace(q)$ be the set of all the atoms in $q$. For a degree $d\in[0,1]$, we can define the TQ $TQ(q,d)=\bigwedge_{\gamma\in\ensuremath{\mathsf{At}}\xspace(q)}\gamma\ge d$. That is, $TQ(q,d)$ uses the same atoms as $q$, but assigns a minimum degree of $d$ to each of them. Since the G\"odel semantics interprets the conjunction through the minimum operator, any answer of $TQ(q)$ yields a degree of at least $d$ to the original query $q$. \begin{lemma} \label{lem:cdtotq} Let \ensuremath{\mathcal{O}}\xspace be a consistent G-\text{DL-Lite}\ensuremath{_R}\xspace ontology, $q$ a CQ, \ensuremath{\mathbf{a}}\xspace an answer tuple, and $d\in[0,1]$. It holds that $\ensuremath{\mathcal{O}}\xspace\models q(\ensuremath{\mathbf{a}}\xspace)\ge d$ iff $\ensuremath{\mathcal{O}}\xspace\models TQ(q(\ensuremath{\mathbf{a}}\xspace),d)$. \end{lemma} In order to find the degree of an answer, we can simply add as an answer variable after the rewriting one that looks at the degrees from the database $db(\ensuremath{\mathcal{A}}\xspace)$. This does not affect the overall data complexity, and hence remains in \textsc{AC}\ensuremath{^0}\xspace. \begin{corollary} Answering conjunctive queries w.r.t.\ consistent G-\text{DL-Lite}\ensuremath{_R}\xspace ontologies is in \textsc{AC}\ensuremath{^0}\xspace in data complexity. \end{corollary} This finishes our analysis of the G\"odel t-norm, which also provides our main results. In the following section we briefly visit the case where the underlying t-norm is not idempotent, and showcase that in general dealing with such semantics becomes harder. \section{Non-idempotent t-norms} We now move our attention to the t-norms that are not idempotent; in particular the product and \L ukasiewicz t-norms. Unfortunately, as we will see, the correctness of the reductions and algorithms presented in the previous sections rely strongly on the idempotency of the G\"odel t-norm, and does not transfer directly to the other cases. However, at least for the product t-norm, it is still possible to answer some kinds of queries efficiently. First recall that Proposition \ref{prop:reduc} holds for the product t-norm as well. Hence, deciding consistency of a $\Pi$-\text{DL-Lite}\ensuremath{_R}\xspace ontology remains reducible to the classical case and thus, efficient. We now show with simple examples that the other results do not transfer so easily. \begin{example} \label{exa:prod} Consider the ontology $\ensuremath{\mathcal{O}_\textsf{exb}}\xspace:=(\ensuremath{\mathcal{T}_\textsf{exb}}\xspace,\ensuremath{\mathcal{A}_\textsf{exb}}\xspace)$ where $\ensuremath{\mathcal{T}_\textsf{exb}}\xspace:=\{\left<A_i\sqsubseteq A_{i+1},0.9\right>\mid 0\le i <n\}$ and $\ensuremath{\mathcal{A}_\textsf{exb}}\xspace:=\{\left<A_0(a),1\right>\}$. Note that $\ensuremath{\mathcal{O}_\textsf{exb}}\xspace=({\ensuremath{\mathcal{O}_\textsf{exb}}\xspace})_{\ge 0.9}$, but the degree for the query $q()=A_n(a)$ is $0.9^n$ which can be made arbitrarily small by making $n$ large. \end{example} Similarly, it is not possible to find the top-$k$ answers simply by layering the $\theta$-cuts for decreasing values of $\theta$ until enough answers can be found. \begin{example} Let $\ensuremath{\mathcal{O}_\textsf{exb}}\xspace':=(\ensuremath{\mathcal{T}_\textsf{exb}}\xspace,\ensuremath{\mathcal{A}_\textsf{exb}}\xspace')$, where $\ensuremath{\mathcal{A}_\textsf{exb}}\xspace':=\ensuremath{\mathcal{A}_\textsf{exb}}\xspace\cup\{\left<A_n(b),0.85\right>\}$ and \ensuremath{\mathcal{T}_\textsf{exb}}\xspace, \ensuremath{\mathcal{A}_\textsf{exb}}\xspace are as in Example \ref{exa:prod}. The top answer for $q(x)=A_n(x)$ is $b$ with degree 0.85, but from $({\ensuremath{\mathcal{O}_\textsf{exb}}\xspace'})_{\ge 0.9}$ we already find the answer $a$, which is not the top one. \end{example} The main point with these examples is that, from the lack of idempotency of the t-norm $\otimes$, we can obtain low degrees in a match which arises from combining several axioms and assertions having a high degree. On the other hand, the product behaves well for positive values in the sense that applying the t-norm to two positive values always results in a positive value; formally, if $d,e>0$, then $d\otimes e>0$. Thus, if we are only interested in knowing whether the result of a query is positive or not, there is no difference between the G\"odel t-norm and the product t-norm. \begin{definition} A tuple \ensuremath{\mathbf{a}}\xspace is a \emph{positive answer} to the query $q(\ensuremath{\mathbf{x}}\xspace)$ w.r.t.\ the ontology \ensuremath{\mathcal{O}}\xspace (denoted by $\ensuremath{\mathcal{O}}\xspace\models q(\ensuremath{\mathbf{a}}\xspace)>0$) iff for every model \ensuremath{\mathcal{I}}\xspace of \ensuremath{\mathcal{O}}\xspace it holds that $q^\ensuremath{\mathcal{I}}\xspace(\ensuremath{\mathbf{a}}\xspace^\ensuremath{\mathcal{I}}\xspace)>0$. \end{definition} \begin{theorem} If \ensuremath{\mathcal{O}}\xspace is a consistent $\Pi$-\text{DL-Lite}\ensuremath{_R}\xspace ontology, then $\ensuremath{\mathcal{O}}\xspace\models q(\ensuremath{\mathbf{a}}\xspace)>0$ iff $\widehat\ensuremath{\mathcal{O}}\xspace\models q(\ensuremath{\mathbf{a}}\xspace)$. \end{theorem} \begin{proof} Every model of $\widehat\ensuremath{\mathcal{O}}\xspace$ is also a model of \ensuremath{\mathcal{O}}\xspace, with the additional property that the interpretation function maps all elements to $\{0,1\}$. If $\ensuremath{\mathcal{O}}\xspace\models q(\ensuremath{\mathbf{a}}\xspace)>0$, then for every model \ensuremath{\mathcal{I}}\xspace of $\widehat\ensuremath{\mathcal{O}}\xspace$ it holds that $q^\ensuremath{\mathcal{I}}\xspace(\ensuremath{\mathbf{a}}\xspace^\ensuremath{\mathcal{I}}\xspace)>0$ and thus $q^\ensuremath{\mathcal{I}}\xspace(\ensuremath{\mathbf{a}}\xspace^\ensuremath{\mathcal{I}}\xspace)=1$, which means that $\widehat\ensuremath{\mathcal{O}}\xspace\models q(\ensuremath{\mathbf{a}}\xspace)$. Conversely, if $\widehat\ensuremath{\mathcal{O}}\xspace\models q(\ensuremath{\mathbf{a}}\xspace)$, then the canonical interpretation is such that $q^\ensuremath{{\mathcal{I}_\mathsf{can}}}\xspace(\ensuremath{\mathbf{a}}\xspace^\ensuremath{{\mathcal{I}_\mathsf{can}}}\xspace)>0$, and hence for every model \ensuremath{\mathcal{I}}\xspace it also holds that $q^\ensuremath{\mathcal{I}}\xspace(\ensuremath{\mathbf{a}}\xspace^\ensuremath{\mathcal{I}}\xspace)>0$. \end{proof} This means that, for the sake of answering positive queries over the product t-norm, one can simply ignore all the truth degrees and answer a classical query using any state-of-the-art engine. In particular, this means that positive answers can be found in \textsc{AC}\ensuremath{^0}\xspace in data complexity just as in the classical case. \medskip We now briefly consider the \L ukasiewicz t-norm, which is known to be the hardest to handle due to its involutive negation and nilpotence, despite being in many cases the most natural choice for fuzzy semantics \cite{BoCP-JAIR17}. As mentioned already, Proposition \ref{prop:reduc} does not apply to the \L ukasiewicz t-norm. That is, there are consistent \L\mbox{-}\text{DL-Lite}\ensuremath{_R}\xspace ontologies whose classical version is inconsistent (see Example \ref{exa:incons}). As a result, there is currently no known method for deciding consistency of these ontologies, let alone answering queries. The culprits for this are the involutive negation, which is weaker than the negation used in the other two t-norms, but also the nilpotence, which may combine positive degrees to produce a degree of 0. The latter also means that, even if one could check consistency, it is still not clear how to answer even positive queries. \begin{example} Consider the ontology $\ensuremath{\mathcal{O}_2}\xspace:=(\ensuremath{\mathcal{T}_2}\xspace,\ensuremath{\mathcal{A}_2}\xspace)$ where \begin{align*} \ensuremath{\mathcal{T}_2}\xspace:={} &\{\left<A_0\sqsubseteq A_1,0.5\right>,\left<A_1\sqsubseteq A_2,0.5\right>\} \\ \ensuremath{\mathcal{A}_2}\xspace:={} & \{\left<A_0(a),1\right>\}. \end{align*} Note that \ensuremath{\mathcal{O}_2}\xspace is consistent, but there is a model \ensuremath{\mathcal{I}}\xspace (e.g., the canonical interpretation) of this ontology which sets $A_2^\ensuremath{\mathcal{I}}\xspace(a^\ensuremath{\mathcal{I}}\xspace)=0$. Hence, $a$ is not a positive answer to the query $q(x)=A_2(x)$ even though it is an answer of $q(x)$ over $\widehat\ensuremath{\mathcal{O}_2}\xspace$. \end{example} Importantly, if we extend \text{DL-Lite}\ensuremath{_R}\xspace with the possibility of using conjunctions as constructors for complex concepts, one can show following the ideas from \cite{BoCP-JAIR17,BoCP-PRUV14} that deciding consistency of a \L-\text{DL-Lite}\ensuremath{_R}\xspace ontology is \textsc{NP}\xspace-hard in combined complexity even if negations are disallowed; see Appendix \ref{app:NP} for full details. In the classical case, this logic---which is called \text{DL-Lite}\ensuremath{_\text{Horn}}\xspace---has a polynomial time consistency problem \cite{ACKZ09}. This gives an indication that dealing with \L-\text{DL-Lite}\ensuremath{_R}\xspace may also lead to an increase in complexity. \medskip Interestingly, the rewriting technique from Section~\ref{sec:tq} also works for other t-norms---modulo some basic modifications---when answering threshold queries. Recall, for example, that given an axiom \ax[e]{A\sqsubseteq B}, and a threshold atom $B(x)\ge d$, if $e\ge d$ then the rewriting technique would substitute this atom with $A(x)\ge d$. Although this substitution is sound for the idempotent G\"odel t-norm, it does not work directly for the other ones. For example, under the product t-norm, if we set $d=e=0.9$ we note that guaranteeing $A^\ensuremath{\mathcal{I}}\xspace(x)\ge 0.9$ does not necessarily implies, in a model \ensuremath{\mathcal{I}}\xspace of \ax[e]{A\sqsubseteq B} that $B^\ensuremath{\mathcal{I}}\xspace(x)\ge 0.9$. Indeed, as long as $B^\ensuremath{\mathcal{I}}\xspace(x)\ge 0.81$, the axiom is satisfied in this case. A similar argument can be made for the \L ukasiewicz t-norm. Hence, we need to increase the required degree for the rewritten atom. Recall from the properties of the residuum that for every t-norm $\otimes$ it holds that $A^\ensuremath{\mathcal{I}}\xspace(x)\Rightarrow B^\ensuremath{\mathcal{I}}\xspace(x)\ge e$ iff $B^\ensuremath{\mathcal{I}}\xspace(x)\ge A^\ensuremath{\mathcal{I}}\xspace(x)\otimes e$. Thus, to ensure that $B^\ensuremath{\mathcal{I}}\xspace(x)\ge d$ it suffices to guarantee that $A^\ensuremath{\mathcal{I}}\xspace(x)\otimes e\ge d$. In the case of the product t-norm, this is akin to the condition $A^\ensuremath{\mathcal{I}}\xspace(x)\ge d/e$. For the \L ukasiewicz t-norm the condition translates to the inequality $A^\ensuremath{\mathcal{I}}\xspace(x)\ge \min\{1, d+1-e\}$. We can then apply the same \textsf{PerfectRef} algorithm, with a new definition of the function $gr$ that changes the last degree (which is always $\ge d$ in Figure~\ref{fig:rules}) with the new degree developed here. Overall, this yields the following complexity result. \begin{theorem} Answering threshold queries w.r.t.\ consistent \text{DL-Lite}\xspace ontologies is in \textsc{AC}\ensuremath{^0}\xspace in data complexity. \end{theorem} Note that this theorem does not solve the problems sketched before for non-idempotent t-norms. Indeed, it is still not clear how to check for consistency of a \L-\text{DL-Lite}\xspace ontology. Moreover, this result cannot be used to answer CQs because the analogous to Lemma~\ref{lem:cdtotq} does not hold. Indeed, suppose that we have a simple CQ with only two atoms: \[ q(x)= A(x) \land B(x). \] To turn $q(x)\ge d$ into a TQ, we need to assign a threshold to each of the atoms. Note however that, under a non-idempotent t-norm, we cannot assign the same degree $d$ to each atom, as their conjunction would become in fact lower than $d$. To be more precise consider the product t-norm and $d=0.9$. Note that an answer to the TQ $A(x)\ge 0.9 \land B(x)\ge 0.9$ is \emph{not} necessarily an answer to $q(x)\ge 0.9$ because there could be a model that assigns both atoms to degree 0.9; hence the product of those degrees is $0.81<0.9$. To use a TQ, we need to choose two degrees $d_1,d_2$ such that $d_1\cdot d_2=0.9$, and construct the TQ $A(x)\ge d_1 \land B(x)\ge d_2$. But there are infinitely many choices to make in this regard, hence we cannot even construct a finite UTQ. Thus, unfortunately, although we are able to answer TQs efficiently (if the ontology is known to be consistent), degree queries remain an open problem for non-idempotent t-norms. \section{Conclusions} In this paper we have studied the problem of answering queries over fuzzy ontologies written in \text{DL-Lite}\xspace. Our goal was to cover the gap in this area left by previous research. Indeed, although query answering w.r.t.\ ontologies is still an active topic, most work referring to fuzzy terminologies or ABoxes focused on the so-called Zadeh semantics, which does not preserve desired properties from the mathematical fuzzy logic point of view. To our knowledge, only Mailis and Turhan \cite{MaTu-JIST14,MaTZ-DL15} have studied this problem based on t-norms, and found solutions based on the G\"odel t-norm. However, they limited their approach to \emph{classical} TBoxes. They left open the problems of dealing with graded TBoxes, handling threshold queries, and dealing with non-idempotent t-norms. A second goal of our work was to reuse as much as possible the classical techniques, in order to avoid an implementation overhead when our algorithms will be, in future work, implemented and tested. As a result, we developed a method for answering degree queries which relies heavily on a classical query answering tool as a black box. Through this method, we can take advantage of all the existing optimisations and improvements from that area, and simply use a better tool whenever it becomes available without having to worry about the internal intricacies that make it work. In few words, our algorithm for answering CQs w.r.t.\ the G\"odel semantics simply considers the classical version of the cut of the ontology. That is, the method ignores all axioms that hold to a degree lower than the threshold imposed, and then sees the remaining query answering question as a classical one. We emphasise that this approach works perfectly even if the TBox is graded. This means that our results improve those from \cite{MaTu-JIST14} by allowing for fuzzy TBox axioms and not requiring a new rewriting of the query. Dealing with threshold queries, where each atom can be independently assigned a different degree, turns out to be more complex technically. In fact, we were not able to produce a direct reduction to a classical query answering problem---and it is unlikely that such a reduction could exist, given the nature of the graded axioms. However, we could still exploit the main ideas from the classical scenario, adapting the well-known \textsf{PerfectRef} method to the fuzzy scenario. In some sense \textsf{PerfectRef} absorbs the TBox into the query, forming a larger UCQ which can be answered classically, seeing the ABox as a database. In our case, we also need to take care of the degree at which the rewritten atoms should hold, when creating the new query. Under the G\"odel semantics, it suffices to preserve the same degree from the original query, but for non-idempotent t-norms the degree has to be increased accordingly to avoid including spurious answers. Importantly, this shows that answering threshold queries w.r.t.\ consistent fuzzy ontologies is in the same complexity class (\textsc{AC}\ensuremath{^0}\xspace) in data complexity as for the classical case, regardless of the t-norm underlying the semantics. The only caveat is that it is not known how to verify consistency of a fuzzy \text{DL-Lite}\xspace ontology in general. The idempotency of the G\"odel t-norm allowed us then to show that CQ answering w.r.t.\ consistent G-\text{DL-Lite}\xspace ontologies is also in \textsc{AC}\ensuremath{^0}\xspace in data complexity. This latter bound does not hold for non-idempotent t-norms. It is worth noting that the methods for answering degree and threshold queries both ultimately rely on a rewriting of the query to represent the information expressed in the TBox. While the rewriting does not affect the \emph{data} complexity, it is well known that the UCQ obtained through \textsf{PerfectRef} may grow exponentially \cite{dl-lite,PMH10:elrewritingtodatalog}. This means that, depending on the instance, answering these queries may still be impractical. For that reason, different rewriting and answering techniques have been developed and tested; for example, rewriting into a Datalog program instead of an UCQ \cite{GoSc-KR12,GoOP-ICDE11}. Our approach for solving threshold queries, given its reliance on \textsf{PerfectRef}, suffers from the same drawbacks. In order to use more optimised approaches, it is necessary to study whether other existing rewritings can also be adapted to the fuzzy setting. On the other hand, the approach to degree queries is, as mentioned already, fully black box: we only need to call an unmodified classical query answering tool repeatedly. This allows us to directly \emph{plug} whichever system performs best, without worrying about the implementation overhead of understanding and adapting the existing tools. Through illustrative examples, we showed that dealing with CQs is in general harder when the underlying t-norm is not idempotent. The main issue is that there is no unique way to decide the bounds for the different degrees to which atoms should hold to satisfy the lower bound for a conjunction of atoms. The problem is exacerbated by the fact that it is not even clear how to decide consistency of ontologies under the nilpotent t-norm. Indeed, even in the absence of an ABox, the \L ukasiewicz t-norm may impose upper bounds in the degrees of some assertions which are not obvious to detect, and could contradict other conditions. As future work, we are also interested in implementing and testing our ideas, with the help of some fuzzy ontologies which will be developed for specific application domains. \bibliographystyle{acmtrans}
{ "attr-fineweb-edu": 2.988281, "attr-cc_en_topic": 0, "domain": "arxiv" }
BkiUdGM5qWTA7lskyJrw
\section{Introduction} \textbf{} \\ The soccer FIFA World Cup is a global sporting event that attracts one of the highest audiences: according to the organizing entity, FIFA, over one billion television viewers watched the final between France and Croatia\footnote{The official report can be found at \textit{https://resources.fifa.com/image/upload/the-2018-fifa-world-cuptm-in-numbers.pdf?cloudid=veij99mubas9idvf47rl}.}on the $15^{th}$ of July 2018. To further improve the worldwide World Cup audience, FIFA is currently trying to include more countries in the final stages of the tournament and increase the attractiveness of all of the games played during this one-month competition. In its current format, the World Cup consists of 32 qualified teams (via continental qualifying tournaments) that are distributed into 8 groups of 4 teams each. At the group stage, teams in the same group play against each other once (for a total of 6 matches per group) with the group ranking based on the current football points system: 3 points for a win, 1 for a draw and 0 for a loss. The first two teams in each group qualify for the knockout stage. This schedule can lead to games being played in an unethical and unattractive way, with a good example being the infamous match between Austria and West Germany (Germany being at that time still divided) during the 1982 FIFA World Cup, in Gijon (Spain). Sometimes called the ``disgrace of Gijon", the game is known in Germany and Austria as the ``Gijon non-aggression pact". West Germany and Austria both played in group 2, with Algeria and Chile. To everyone's surprise, the German Mannschaft stumbled against Algeria, losing its first match (1-2) (the first time that a European team had lost to an African team in a World Cup), and Austria beat Chile (1-0). In the second round of games, West Germany beat Chile (4-1) while Algeria lost to Austria (0-2). At this point, Chile was already eliminated. The last two games in Group 2 were thus decisive for Germany, Austria and Algeria. On June 24, Algeria beat Chile 3-2 and the standing of the group became as in Table \ref{Table_Gijon_1}\footnote{In the 1982 version of the World Cup, 2 points were awarded for a win, 1 for a draw, 0 for a loss. The adoption of the current football points system (3 points for a win) would have led to an identical situation and could have not eliminated the collusion opportunity.}. From then on, an arrangement became possible between the West Germans and Austrians. A simple calculation shows that, with a simple 1-0 victory (or even 2-0 victory) for the Mannschaft\footnote{ In case of equality in the number of points between teams, the first tie-breaker was goal difference.}, both the Germans and Austrians would qualify while, oddly, either a large German victory, an Austrian victory or a draw would lead to Algeria qualifying. After 10 minutes of game, the Germans scored, following which both teams almost stopped for the remaining, long, 80 minutes, under the booing of infuriated Spanish spectators. After the scandal of Gijon (and similar games), the last two games in each group are now played simultaneously, but this has not completely eliminated collusion opportunities. Be it illegal match-fixing or tacit collusion, such incident is very detrimental to the game. At the other end of the spectrum, we can think of a situation where the two opposing teams played honestly (competitively) and the result of the game ended up benefiting both. We can clearly expect that both teams will be criticized and their reputation will be tarnished after this game (in the same way that a tacit collusion cannot be proved for sure, there is no way the teams can prove they played competitively). Either way, FIFA should reduce to the extent possible these situations of ``potential" match-fixing since, even if match-fixing did not occur, it is the existence of such a possibility in the minds of spectators, players that is detrimental to sportsmanship.\\ \begin{table}[!htb] \begin{minipage}{.5\linewidth} \caption{Group ranking before the last game between West Germany and Austria (MP = matches played)}\label{Table_Gijon_1} \centering \begin{tabular}{|l|c|c|c|} \hline Team & Pts & Goal dif. & MP\\ \hline Austria & 4 & +3 & 2\\ Algeria & 4 & 0 & 3\\ West Germany & 2 & +2& 2\\ Chile & 0 & -5 & 3\\ \hline \end{tabular} \end{minipage}% \begin{minipage}{.5\linewidth} \caption{Final group ranking}\label{Table_Gijon_2} \centering \begin{tabular}{|l|c|c|} \hline Team & Pts & Goal dif.\\ \hline West Germany & 4 & +3\\ Austria & 4 & +2 \\ Algeria & 4 & 0 \\ Chile & 0 & -5\\ \hline \end{tabular} \end{minipage}% \end{table} FIFA defines the groups of the World Cup through a draw procedure that changes slightly from year to year depending on the origin of the qualified teams (FIFA tries to spread out teams from the same continent in an even manner across all of the groups). Nevertheless, the main structure of the draw is the following: \begin{itemize} \item Teams are divided into pots, each of which is supposed to contain teams with similar levels of performance: the 8 best teams in pot A, the second best 8 teams in pot B, and so on. Team performance is based on a ranking whose methodology has changed over time, and has been criticized by some football experts \cite{Cea_Duran}, \cite{Gasquez}. The country hosting the tournament (which qualifies automatically) is included in pot A in order to maximize its chances of proceeding to the knockout stage of the tournament. \item Groups are formed by picking one team from each pot so that all groups have a "top-level" pot-A team, a "second-level" pot-B team, a pot-C team and a "weaker" pot-D team. \item Finally, the schedule of the games, i.e. the order in which the teams play against each other is drawn randomly. As such, in some groups the last round of games will consist of "pot A" vs "pot B", and "pot C" vs "pot D", while in other groups the last matches consists of "pot A" vs "pot C", and "pot B" vs "pot D", or "pot A" vs "pot D" and "pot B" vs "pot C". \end{itemize} \medskip Having in mind our target of eliminating situations of ``potential" match-fixing, we develop a method to evaluate the competitiveness and fairness of the last-round games, choose a model to simulate the group-stage outcomes and look for the optimal setting that maximizes our competitiveness metric (which turns out to be the same as minimizing the ``potential" match-fixing events). We also apply this method to real World Cup data (starting from the 1998 competition, in which the new format was adopted) and conclude that games were sub-optimally scheduled. We find out that the points-attribution scheme does not affect the quality of games in that a victory produces more points than a draw, which produces more points than a loss. However, the order in which games are played, and specifically the schedule for the last round of games, is critical and can substantially improve the competitiveness of the last round if well-designed. The methodology we have developed for assessing competitiveness is pretty general and can be applied to any tournament structure based on rankings and tie-breaking rules\footnote{Such tournament structures can be found in many other sports such as basketball, rubgy, volleyball, handball and even in non-sporting activities.}.\\ In the past decades, many models have been developed to predict or simulate the outcome of football games. For example, Lee \cite{Lee} and Dyte and Clarke \cite{Dyte_Clarke1} treat the goals scored by each team as conditionally-independent Poisson variables whose parameters depend on team attributes and the match venue. Maher \cite{Maher} found that introducing a correlation between the number of goals scored via a bivariate Poisson distribution improved predictive power in data from the English League. Reep, Pollard and Benjamin \cite{RPB} construct a model based on the negative binomial distribution, while Karlis and Ntzoufras \cite{Karlis_Ntzoufras} use Skellam's distribution to model the difference in the number of goals (the margin of victory). In a following article, the latter develop a robust fitting method to account for abnormal large scores \cite{KN2}. Most of this previous work has looked at national championships, but only few have considered the FIFA World Cup. The 1998 World Cup is covered in \cite{Dyte_Clarke1}. Suzuki et al. \cite{Suzuki} use a Bayesian approach to predict the result of the 2006 World Cup, while Groll et al. \cite{Groll} apply their model to the 2014 World Cup. Until recently, the articles focusing on international competitions looked at the predictive power of the model. Nevertheless, a growing litterature is now looking at the competition design of such tournaments as pointed out by Kendall and Lenten \cite{Kendall}. Based on backward induction analysis, Krumer et al. \cite{Krumer1} showed that in round-robin tournaments among three or four symmetric contestants, there is a first-mover advantage driven by strategic effects arising from the subgame perfect equilibrium. This article is a foundation for an empirical analysis carried by Krumer and Lechner \cite{Krumer2} applied to different sport events including the FIFA World Cup. Csato \cite{Csato} demonstrates the incentive incompatible design of recent UEFA qualification tournament which includes a repechage procedure. In a study applied to Super Rugby but which can be adapted to football, Winchester \cite{Winchester} determines the optimal allocation of points that most appropriately rewards strong teams. Furthermore, Brams and Ismail \cite{Brams_Ismail}, Anbarci, Sun and Unver \cite{Anbarci} design new penalty shoot-out approaches which improve fairness at the knock-out stage of the World Cup. Related to the penalty shoot-out, Lenten, Libich and Stehlik \cite{Lenten} explore a better tie-breaker mechanism altogether. Concerning the design of the draw procedure, Guyon \cite{Guyon1} and Laliena and Lopez \cite{Laliena} analyze the fairness of the FIFA World Cup draw and propose new draw schemes which are more equitable and preserve the geographic constraints enforced by FIFA. Finally, Guyon \cite{Guyon2} tackles the match scheduling of group games of the 2026 FIFA World Cup (16 groups of 3) and points out that this format of competition makes the "disgrace of Gijon" possible again. He suggests that the strongest team in the group should be the one to play the first two games of the group in order to minimize the risk of collusion in the last round game. In the same spirit as in Guyon's study \cite{Guyon2}, our article analyzes the tournament structure of the group stage having in mind the final target of reducing collusion opportunities and increasing competitiveness. We develop a rather general theoretical framework to assess the competitive level of games and combine it with a model which simulates game scores to come up with an exact quantitative assessment of any group format. We fit our simulation model to historical FIFA World Cup data, and evaluate the current format of the competition as well as potential future formats (recovering the optimal scheduling suggestion of Guyon for the "groups of 3" format). This article is organized as following: we first benchmark and calibrate the different models, based on the results in previous World Cups and team rankings. We next develop a classification method that allows us to quantify the attractiveness of the last round of games. Based on the chosen model and our original method, we then use Monte Carlo simulations to determine the key factors that affect the quality of the last round of games in the current World Cup format and propose a remedy to improve the competitiveness of the last round of the group stage. The last section applies our method to the new enlarged version of the World Cup. We here analyze the two ``well-known" options: 16 groups of 3 teams and the current "UEFA Euro"-format with groups of 4 groups with ``repechage" of the best third-ranked teams as well as 2 variants of an 8 groups of 5 alternative format. \section{Group-stage model} \textbf{} \\ We here benchmark the different models that will be used to simulate the game outcomes. For the sake of simplicity, a team's strength is completely described by one single variable. Instead of using FIFA rankings, we choose the Elo index as a proxy for team performances\footnote{All historical and current Elo ratings, as well as the details on how they are calculated, can be found at \textit{https://www.eloratings.net}}. The advantage of this index is that it is more transparent\footnote{Elo ratings are continuous rather than ordinal such that they allow for different rating ``gaps" between consecutively-ranked teams.} and is a more accurate reflection of a team's real level than the FIFA ranking \cite{Gasquez}. The calculation method has not changed over time, and it is thus better-suited for analysis over long time periods (we will here cover all the World Cups starting from that in 1998). Nevertheless, the Elo and FIFA point systems produce very similar country rankings.\\ Each team has an Elo index that belongs to the interval $[a,b]$, with $a$ being the lowest Elo rating while $b$ is the highest. As in the official draw procedure, we form groups of four teams with different Elo indices as follows: team A's Elo is uniformly drawn from the interval $[b-\frac{b-a}{4},b]$, team B from $[b-\frac{b-a}{2},b-\frac{b-a}{4}]$, team C from $[a+\frac{(b-a)}{4},b-\frac{b-a}{2}]$ and team D from $[a,a+\frac{b-a}{4}]$. In other words, pot A is constituted of the best ranked teams while pot D includes the lowest ranked teams. The bounds $a$ and $b$ are parameters in our model and will be calibrated in the next section. Intuitively, the greater the $b-a$ gap, the larger the performance gap between teams within the group. Based on the Elo indices of the teams in the group, we simulate the outcomes of their matches. \subsection{Simulating match outcomes} \textbf{} \\ As we only consider a team's Elo index, we analyze the following relatively simple parametric models: \begin{enumerate} \item \textbf{Simple Poisson model}: Each time a team has the ball it can attack and score a goal. With $n$ attack opportunities and a probability $p$ of scoring per attack, the number of goals scored follows a binomial distribution $\mathcal{B}(n,p)$. On average, $\lambda=np$ goals will be scored by the team per game. The binomial distribution limits the number of goals scored per game to the total amount of attacks $n$. If instead of considering discrete attacks, we look at ball possession and introduce the probability of scoring per unit of ball possession, $\lambda$, the number of goals scored follows a Poisson distribution with parameter $\lambda$. This distribution is the limit case of the binomial distribution as $n\to \infty$ (every ball possession signifies an attack) and $p=\frac{\lambda}{n}\to 0$ (the strict probability becomes a probability density of scoring per unit of time possession). The probability that team $i$ scores $k$ goals against team $j$ is: \begin{align} P(goals=k)=\frac{\lambda^k\cdot e^{-\lambda}}{k!} \end{align} where $\lambda$ is: \begin{align} \lambda=\alpha\cdot \frac{r_i}{r_i+r_j} \end{align} Here $r_i$ is the Elo index of team $i$, $r_j$ the Elo index of its opponent $j$, and $\alpha$ a parameter of the model to be calibrated. The stronger the scoring team (the higher is $r_i$) and the weaker its opponent (the lower is $r_j$), the easier it will be for team $i$ to score goals. The parameter $\alpha$ reflects how prolific games are in terms of goals scored (a higher $\alpha$ produces games with higher scores). Consequently, the result $(k_i,k_j)$ of a game between team $i$ with Elo index $r_i$ and team $j$ with Elo index $r_j$ is distributed: \begin{align} (k_i,k_j)\sim (X,Y) \end{align} where X and Y are independent, and $X\sim \mathcal{P}(\alpha\cdot \frac{r_i}{r_i+r_j})$ and $Y\sim \mathcal{P}(\alpha\cdot \frac{r_j}{r_i+r_j})$. We can notice that such model implies that each match yields $\alpha$ goals on average which are shared between the teams based on their ratings. Assuming that each match has the same average number of goals is not an unreasonable assumption in our context. First, we will only simulate outcomes for the first two rounds of the group stage during which the teams usually follow their ``baseline" tactic. Games in which a team will opt for an aggressive tactic (exposing himself to counter-attacks and leading to games with a high number of goals) or, on the contrary, to a very "defensive" tactic (leading to games with almost no goals) usually happen in the third round of the group stage where, under some circumstances, tie-breaking rules such as goal difference or the number of goals scored start driving the behavior of teams. Furthermore, the style of football may have evolved\footnote{Games used to have more goals in the earliest editions of the World cup, with averages above 4 goals per game.} slightly over the period covered by our sample (1998 to 2018) but not enough to have had a significant impact on the average number of goals scored per game (as per Table \ref{goals_per_game}). \item \textbf{Bivariate Poisson model}: The bivariate Poisson model is very similar to that above. The only difference is that it accounts for correlations between the number of goals scored by the two teams. The underlying idea here is that if one team scores, the other will attempt to equalize and put more effort into scoring. This leads to open games with a greater number of goals on both sides. On the contrary, if neither team scores the game will remain ``closed" with few goals. The final score of the game $(k_i,k_j)$ is: \begin{align} k_i=X+Z \qquad k_j=Y+Z \end{align} where $X$, $Y$ and $Z$ are independent, and $X\sim \mathcal{P}(\alpha\cdot \frac{r_i}{r_i+r_j})$, $Y\sim \mathcal{P}(\alpha\cdot \frac{r_j}{r_i+r_j})$, $Z\sim \mathcal{P}(\beta)$. The correlation between $k_i$ and $k_j$ comes from the term $Z$, and the greater is $\beta$ the higher the correlation. This model has one more parameter than that above (namely $\beta$). Based on its specification, this model also assumes that each match yields $\alpha+2\beta$ goals on average. \item \textbf{Negative binomial model}: In this model, when one team scores a goal, it becomes more motivated and has a greater probability of scoring a second goal. The scoring model starts as a Poisson distribution, and each time a goal is scored the probability of scoring the next goal rises by a given constant. The probability distribution of goals can be calculated and has a so-called negative binomial distribution. The probability that team $i$ scores $k_i$ goals against team $j$ is: \begin{align} P(k_i)= \begin{pmatrix} k_i+r-1\\ k_i \end{pmatrix} \cdot (1-\alpha\cdot \frac{r_i}{r_i+r_j})^r\cdot (\alpha\cdot \frac{r_i}{r_i+r_j})^{k_i} \end{align} where $r\in \mathbb{N}$ and $\alpha>0$ are two parameters to be calibrated. \item \textbf{Ordered logistic regression (OLR)}: as per Hvattum and Arntzen \cite{Hvattum}, we fit an OLR model to our data using the Elo indices as regressors. As opposed to the previous models, OLR does not predict exact scores but assigns probabilities for the three ``compact'' outcomes of a football game: first team wins the game, draw, first team losses the game. Since this model is specifically designed to classify games into the three above categories, we expect it to outperform the previous three on the learning sample when the evaluation criteria is only sensitive to the whether the outcome is a win, draw or loss. Consequently, the OLR model would set a benchmark to which we can compare the performances of the previous models (only for "coarse" criteria which completely disregard the exact final score of the game). \item \textbf{Naive uniform guess}: as its name suggests, each of the three ``compact" outcomes has the same probability of happening (1/3) regardless of the Elo indices of the competing teams. This ``model" goes into the same category as OLR since it cannot deal with exact scores and has only been introduced as a ``worst case" benchmark. \end{enumerate} \begin{center} \begin{table} \begin{tabular}{|c|c|c|c|c|c|c|} \hline World Cup edition & 1998 & 2002 & 2006 & 2010 & 2014 & 2018\\ \hline Average number of goals per game & 2.6 & 2.7 & 2.4 & 2.1 & 2.9 & 2.5 \\ \hline \end{tabular} \caption{Average number of goals scored per game during the group stage of FIFA World Cups from 1998 to 2018}\label{goals_per_game} \end{table} \end{center} \subsection{Rescaling the Elo distribution} \textbf{} \\ The Elo indices usually fluctuate between 1500 and 2200 for the teams that qualify for the World Cup. As such, $\frac{r_i}{r_i+r_j}$ varies between $\frac{1500}{1500+2200}=0.4054$ and $\frac{2200}{1500+2200}=0.5946$. Consequently the ``raw" Elo indices will barely affect the number of goals scored by teams. We amplify the performance difference between teams via a linear transformation of the original Elo indices: \begin{align} r_i'=1+e^{gap}\cdot\frac{r_i-\min_j(r_j)}{\max_j(r_j)-\min_j(r_j)} \end{align} After this transformation, the weakest team will have an index of 1 and the strongest an index of $1+e^{gap}$, where $gap$ is a parameter to be calibrated\footnote{The introduction of $e^{gap}$ instead of a simple linear function of $gap$ has been chosen solely for practical purposes: ensuring that it is positive and to increase the sensitivity of our model to the $gap$ parameter (which can improve the convergence speed of the maximum-likelihood search algorithm).}. The higher is $gap$, the larger the performance gap between teams and the greater the impact of the Elo indices on a game's outcome. \subsection{Model selection and calibration} \textbf{} \\ We now carry out maximum-likelihood estimation of the three exact models presented above as well as the OLR, in order to decide which will be used to carry out our group simulations. The data used to fit these models are the results of the first two rounds of the World Cup group stage from 1998 up to 2018. This covers 192 games: the last round of games is not included as factors other than team performance may play a role (teams may prefer to lose or draw in the last game, and finish second in the group in order to have easier knock-out games). Table \ref{model selection} shows the results of our estimations. In a second step, we have chosen different metrics to evaluate the performance of the models. For the metrics "Log-likelihood", "AIC" and "BIC" we decided to restrict ourselves to the first three models and only show figures which can actually be compared: the first three models predict exact final scores (which is not the case for the last two models which can only predict wins-draws-losses). On the other hand, the "Logloss" and "Brier score" evaluate the performance of each model in predicting the win-draw-loss outcome of a game and allow for a broader comparison of the models. The two latter measures estimate a ``distance" between the predicted outcome of a model and the actual result (a rigorous presentation of these loss functions can be found in \cite{Witten_Frank}) \begin{center} \begin{table} \begin{tabular}{|c|c|c|c|c|c|c|} \hline \multicolumn{2}{|l|}{\textbf{Model}} & Simple Poisson & Bivariate Poisson & Negative Binomial & OLR & Uniform\\ \hline \multirow{4}{*}{ \textbf{Optimal parameter values}}& $gap$ & 3.7581 & 3.7582 & 3.4997 & \multirow{8}{*}{} & \multirow{8}{*}{}\\ & $\alpha$ & 2.5156 & 2.5156 & 0.1747 & &\\ & $\beta$ &-& $2.7518\cdot 10^{-10}$&- & &\\ & $r$ &-&-& 13 & & \\ \cline{1-5} \multicolumn{2}{|l|}{\textbf{Number of parameters}} & 2 & 3 & 3 & &\\ \cline{1-5} \multicolumn{2}{|l|}{\textbf{Log-Likelihood}}& -535.6698 & -535.6698 & -534.4337 & &\\ \cline{1-5} \multicolumn{2}{|l|}{\textbf{AIC}} & 1075.3 & 1077.3 & $1074.9^*$ & &\\ \cline{1-5} \multicolumn{2}{|l|}{\textbf{BIC}} & $1081.9^*$ & 1087.1 & 1084.6 & &\\ \hline \multicolumn{2}{|l|}{\textbf{Logloss}} & $0.9498$ & 0.9498 & 0.9491 & 0.9334 & 1.0986\\ \hline \multicolumn{2}{|l|}{\textbf{Brier score}} & $0.5645$ & 0.5645 & 0.5639 & 0.5495 & 0.6667\\ \hline \end{tabular} \caption{Estimation results with different models for the prediction of game scores}\label{model selection} \end{table} \end{center} First, since the optimal value of $\beta$ is $2.7518\cdot 10^{-10}$, we conclude that the introduction of a correlation term between the goals does not improve the accuracy of the simple Poisson model (there is no change in the log-likelihood between the simple and the bivariate Poisson models either). Furthermore, the value of $\alpha$ which maximizes the likelihood is equal to the average number of goals per game of our sample ($2.5156$ goals per game). This result serves as an additional check since $\alpha$ (resp. $\alpha+2\beta$) is equal to the average number of goals per game in the simple Poisson (resp. bivariate Poisson) model. In addition to that, we can notice that the optimal value of $gap$ is strictly positive, so that the Elo indices do have predictive power for our game outcomes (Figure \ref{contourfs} shows the likelihood in the simple Poisson and Negative-Binomial distributions as a function of the model parameters).\\ As for the logloss and Brier scores, we notice that our ``score-predicting" models are almost as efficient in predicting win-draw-loss outcomes for the given sample than the specifically-designed OLR model. Furthermore, it is also possible that the better scores for the OLR model are due to over-fitting (as the choice of parameters of the OLR model are specifically made to match the win-draw-loss outcomes of the sample games). Among the ``exact" models, the simple Poisson model minimizes BIC since it has one less parameter than the other two while the negative binomial model minimizes AIC.\\ For the following part of the paper, we decide to adopt the simple Poisson model for its tractability and great performance: its parameters can be clearly interpreted and it offers identical performance to the other two exact-score models. In terms of predicting win-draw-losses, its performance is very close to the one of OLR and is less prone to over-fitting since its parameters have a close-to-reality, structural foundations. Finally, having an exact-score generating model sets the ground for research work which requires exact scores (including goal differences and other goals-linked tie-breaking rules) which is the case of the second part of this study. \begin{figure} \centering \begin{subfigure}{.5\textwidth} \centering \includegraphics[width=1\linewidth]{contourf_poisson.png} \caption{Poisson distribution} \end{subfigure}% \begin{subfigure}{.5\textwidth} \centering \includegraphics[width=1\linewidth]{contourf_negbin.png} \caption{Negative-Binomial distribution} \end{subfigure} \caption{The Log-likelihood of different models as a function of their parameters} \label{contourfs} \end{figure} \subsection{Additional performance checks for the Poisson model} \textbf{} \\ We now compare a number of statistics from our chosen Poisson model to those in our sample data at three different levels of aggregation: \begin{itemize} \item \textbf{Detailed level}: Each exact final score is considered as a separate event. As our data set is composed of 192 games, we use the Poisson model to carry out 15000 simulations of each of the 192 games. We then count the number of occurrences of each event in each simulation run, and average the results over the 15000 simulations and calculate the standard deviations for each event. The results appear in Table \ref{simtable}, to be compared to the actual outcomes in Table \ref{realtable}. The games in which at least one team scores more than four goals are rare, and do not appear in the tables. The Poisson model provides a good approximation to the actual data. \item \textbf{Compact level}: Here an event is characterized by the difference in the scores, so that games finishing 3-1 or 4-2 would be similarly categorized as "2-goal difference" events. Figure \ref{histogram} shows the results of our simulations (in red), compared to the actual data (in blue). As above, the Poisson model performs well in reproducing the actual scores. \item \textbf{Draw frequency}: This is extreme case where we group all non-zero goal difference outcomes into a single event, with draws being the complement (a goal difference of zero). This allows us to see if the ratio of draws to total number of games in our model matches the sample data. The frequency of draws in the Poisson model is $0.2133$ with a standard deviation of $0.0297$, while the sample frequency is $=0.2552$. As the data only includes $192$ games, we cannot say whether the fit could be improved (by reducing the $a$ parameter, for example) or if our data is not truly representative of the underlying generating process. \end{itemize} Overall, the previous results suggest that the Poisson model fits the score-generating process well. \begin{table} \begin{center} \caption{Number of exact scores in our World Cup data sample of 192 games (goals scored by the team with higher Elo in rows, by the team with lower Elo in columns).}\label{realtable} \begin{tabular}{|c|ccccc|} \hline \shortstack{Final score} &\textbf{0} & \textbf{1} & \textbf{2} & \textbf{3} & \textbf{4} \\ \hline \textbf{0} & $14$ & $13$ & $4$ & $2$ & $0$\\ \textbf{1} & $31$ & $23$ & $8$ & $1$ & $0$ \\ \textbf{2} & $18$ & $21$ & $11$ & $2$ & $1$ \\ \textbf{3} & $9$ & $12$ & $1$ & $1$ & $0$ \\ \textbf{4} & $8$& $1$ & $1$ & $0$ & $0$ \\ \hline \end{tabular} \caption{Average number of occurrences of exact scores $\pm $ standard deviation based on 15000 Monte Carlo simulations (goals scored by the team with higer Elo in rows, by the team with lower Elo in columns).}\label{simtable} \begin{tabular}{|c|ccccc|} \hline Final score &\textbf{0} & \textbf{1} & \textbf{2} & \textbf{3} & \textbf{4} \\ \hline \textbf{0} & $15.6 \pm 3.7$ & $11.0 \pm 3.2$ & $4.6 \pm 2.1$ & $1.4 \pm 1.2$ & $0.3\pm 0.6$\\ \textbf{1} & $27.9 \pm 4.9$ & $18.5 \pm 4.1$ & $7.4\pm 2.7$ & $2.2 \pm 1.5$ & $0.5 \pm 0.7$ \\ \textbf{2} & $26.1 \pm 4.7$ & $15.9 \pm 3.9$ & $6.0 \pm 2.4$ & $1.7 \pm 1.3$ & $0.4 \pm 0.6$ \\ \textbf{3} & $16.6 \pm 3.9$ & $9.2 \pm 3.0$ & $3.4 \pm 1.8$ & $0.9 \pm 1.0$ & $0.2 \pm 0.4$ \\ \textbf{4} & $8.1 \pm 2.8$ & $4.2 \pm 2.0$ & $1.4 \pm 1.2$ & $0.4 \pm 0.6$ & $0.1 \pm 0.3$ \\ \hline \end{tabular} \end{center} \end{table} \begin{figure} \centering \includegraphics[scale=0.5]{histogram.png} \caption{Score difference: Poisson model vs. actual data} \label{histogram} \end{figure} \newpage \section{The group-stage classification method} \textbf{} \\ We now assess the attractiveness of the group format using the previous model to simulate all the rounds of games in a group, except for the last one. For example, when considering the current format of the World Cup with groups of four countries, we simulate the first two rounds of games (for a total of four games). Then, based on the points system and given that goal difference is the tie-breaker, we calculate the ranking in the group. In case two teams have the same number of points and the same goal difference then they share the same ranking in the group (no other tie-breaker is included in the model).\\ During the last round of games, the qualification of team $i$ will likely not only depend on the outcome of its own game against team $j$ but also on that of the other game, between teams $k$ and $l$. As the last-round games are played simultaneously, we assume that team $i$ does not know the outcome of the other game when playing against $j$. From the point of view of team $i$, all of the following scenarios are possible: $k$ beats $l$ by a 5 goals difference, ... , $k$ beats $l$ by a 1 goal difference, $k$ draws with $l$, $l$ beats $k$ by a 1 goal difference, ... , $l$ beats $k$ by a 5 goals difference.We decide to stop at 5 goals difference because modern football games ending with higher goal differences are extremely rare (making it likely for team $i$ to disconsider such outcome), and, above all, gives the same numerical results in our simulations as when higher goal differences are taken into account. For each outcome of the $i$ vs $j$ game, we check under which scenarios team $i$ qualifies for the next phase (i.e. ends up in the top two group teams). Occupying a ``clean" first or second position (more points or same points but better goal difference than the team in third position) is better than sharing the position with the third-ranked team (additional tie-breakers which are not included in our model may end up disqualifying team $i$). Based on this analysis, team $i$ will choose the lowest-effort outcome that maximizes its chances of qualification: a 5 goals difference win (winning by 5 goals difference is better than winning by 4 goals difference in at least one scenario), ... , a 1 goal difference win (winning by 1 goal difference is no different than winning by more goals difference in all scenarios but better than a draw in at least one scenario), a draw (winning=drawing in all scenarios but is better than losing in at least one scenario), losing by 1 goal difference (same chances of qualification as winning or drawing but is better than losing by 2 goals difference in at least one scenario), ..., and 5 goals difference loss (the team has exactly the same chances of qualifying no matter his performance in the last game). Note that the last situation refers to the case where team $i$ is indifferent since it is already qualified or cannot qualify regardless the result of game $k$ vs. $l$. For example, suppose that any scenario other than $k$ beating $l$ automatically qualifies $i$. If team $k$ wins, a draw between $i$ and $j$ will qualify $i$, while a loss for $i$ will not lead to qualification (if $i$ wins, it will obviously progress as wins gain more points than draws). In this case, team $i$ will play for a draw: even though team $i$ may still qualify if it loses against $j$, a draw will increase its chances of qualification (if $k$ wins, tying is better for team $i$ than losing). A victory will also qualify $i$, however it does not improve the probability of qualification as compared to a draw. In this same example, if in a given scenario, a draw leads to a shared second place while a 1 goal difference win leads to a "clean" second place, or even ``clean" first place, then team $i$ will play to win by a 1 goal difference. Note that, in their strategic choices, teams do not distinguish between first and second place in the group, in the sense that teams only care about qualification to the next round. In the actual World Cup, teams do not always want to finish first in their groups. There have been many occasions where teams seem to have intentionally lost in order to finish second in their group and play against weaker opponents in the knock-out stage. This may well have occurred in the 2018 World Cup in the group with England and Belgium, in which the winner faced more difficult opponents (Brazil and France) in the knock-out stage. \\ After having determined what team $i$ would prefer, we carry out an analogous analysis for its opponent $j$, yielding the following classification for the game $i$ vs $j$ as per figure \ref{classification_games}: \begin{itemize} \item \textbf{Competitive games}: Neither team is indifferent here, and their targets are incompatible: if one team reaches its target, the other will not reach his. To be clear, both teams may qualify to the second round but there is at least one outcome in the parallel game ($k$ vs $l$) which ``threatens" $i$ and $j$ if their respective targets are not reached. In the example presented in figure \ref{classification_games}, team $i$ wants to win by a 2 goals difference while team $j$ is looking for a draw. The two teams will thus do their best in this competitive game and the tournament organizer's aim is to ensure that this type of game occurs as often as possible. \item \textbf{Collusive games}: The targets of both teams are compatible and none is indifferent: there is a non-empty subset of final scores (in terms of goal difference) which will put the two teams in the best position to qualify (potentially qualifying them at the expense of the other teams of the group). The example presented in figure \ref{classification_games} is the \textit{Scandal of Gijon}: Germany (team $i$) is looking for a 1 goal difference win while Austria (team $j$) is qualified if it loses by, at most, a 2 goals difference. The ``compatibility/collusion zone" is the subset which contains the 1 goal difference and 2 goals difference victories for Germany (Germany won the game by a 1 goal difference). These types of situations can lead to collusion and should be avoided at all cost. \item \textbf{Stake-less games}: At least one of the teams is completely indifferent between winning, drawing or even losing by 5 goals difference. In these games, the indifferent team has, in general, nothing to gain and may field second-team players. This is unfair for the other teams, $k$ and $l$, as they played against a stronger opponent $i$ in the previous rounds. In addition, a team that is already qualified may take into account other factors such as the opponents it will face in the next stage of the competition. Thus, depending on the results in other groups and the scheduling of the next stage, winning the game may not be in its best interest. It can be noticed that these games also present a non-empty compatibility zone (as for collusive games). However, in a collusive set up, both teams are at risk of being disqualified and are expected, ex ante, to play competitively. If the final result is in the compatibility zone, suspicions will immediately be raised even if the two teams did truthfully play competitively. In a stake-less game, spectators and other teams know, ex ante, that the indifferent team will not play competitively (if already qualified, it is his right to put its major players on rest and, if already disqualified, the team can also be encouraged to field young players so they gain experience). Consequently, we strongly believe that stake-less games are less harmful than collusive games but should also be reduced. \end{itemize} \begin{figure} \begin{center} \includegraphics[scale=1]{classification_games.png} \caption{Classification of last-round games: competitive, collusive, stake-less (numbers in black represent the final goal difference result of the game in favor of team $i$)}\label{classification_games} \end{center} \end{figure} \section{Assessing the current World Cup format} \label{current_format} \textbf{} \\ We use the above to assess the last round of the current World Cup format, with groups of four teams of which the top two qualify for the next round. We test both different point-attribution systems and changes to the scheduling of the last round of games: \begin{itemize} \item \textbf{Setting 1}: pot A vs. pot D and pot B vs. pot C\\ \item \textbf{Setting 2}: pot A vs. pot C and pot B vs. pot D\\ \item \textbf{Setting 3}: pot A vs. pot B and pot C vs. pot D \end{itemize} \vspace{0.5cm} We carry out 15000 simulations for each setting and point-attribution system: the results appear in Table \ref{MC_table}. Our conclusions are as follows: \begin{itemize} \item The points system has no impact on the quality of games (systems with 4-points for a win produce no visible changes in the results)\footnote{It is worth noticing that our model has been calibrated using 3-points for a win, 1-point for a draw and 0-points for a loss data.}; \item Collusive games are relatively rare, but stake-less games are not; and \item The setting has a considerable impact on the quality of games, with setting 1 being the best and setting 3 the worst. \end{itemize} The last of the above results is intuitive: in setting 3, A and B have already played against the weakest teams in the previous rounds. Before the last game, they are likely to have a good number of points, while teams C and D have few or no points. The last round matches the best two teams (who are already or almost qualified) against the weakest two teams (who are already or almost eliminated): the outcome of both games has very little impact on the final group ranking. Furthermore, we have noticed that the only collusion situation which takes place is one in which opposing teams decide to draw. The pressure exerted by the other game which is played in parallel makes a collusion based on goal differences such as the scandal of Gijon very risky (accepting to lose/win but in a given range of goal difference). The introduction of simultaneous matches in the current format of the World Cup has been effective in reducing collusion opportunities.\\ Using our historical data set, we check the proportion of each type (competitive, collusive, stake-less) for the last games of the group round given the scheduling of the group (setting A, B or C). The new format has been applied to six World Cups with 8 groups each. The game schedule is drawn randomly, so that each setting is equally likely. Table \ref{data_freq} shows the frequency of each setting in the previous World Cups. We use our classification method to calculate the frequencies of each game type as a function of the setting. Table \ref{data_type} shows the results: setting 3 produces the least-exciting last round of games, in line with our predictions. Nevertheless, the sample size is only small to check whether the sample estimates fit our model predictions (only 48 last rounds of groups have been played since 1998). \begin{figure} \begin{center} \includegraphics[scale=0.5]{mc_w3_d1.png} \caption{The cumulative frequencies in Monte Carlo simulations for a win giving 3 points and a draw 1 point (setting 1).}\label{MC} \end{center} \end{figure} \begin{table} \begin{center} \begin{tabular}{|c|ccc|ccc|} \hline \textbf{Setting type :} & 1&2&3 & 1&2&3 \\ \hline \textbf{Points for :} & \multicolumn{3}{|c|}{\textbf{Win} = 2 }& \multicolumn{3}{|c|}{\textbf{Win} = 3} \\ \hline \multirow{3}{*}{\textbf{Draw = 1}} & \textcolor{green}{$63.05\%$} & \textcolor{green}{$59.50\%$} & \textcolor{green}{$42.95\%$}& \textcolor{green}{$63.14\%$} & \textcolor{green}{$59.49\%$} & \textcolor{green}{$42.69\%$}\\ & \textcolor{red}{$1.02\%$} & \textcolor{red}{$1.15\%$} & \textcolor{red}{$1.68\%$} & \textcolor{red}{$0.94\%$} & \textcolor{red}{$1.32\%$} & \textcolor{red}{$1.76\%$}\\ & $35.93\%$ & $39.35\%$& $55.37\%$ & $35.92\%$ & $39.19\%$& $55.55\%$\\ \hline \multirow{3}{*}{\textbf{Draw = 2}} & \textcolor{green}{-} & \textcolor{green}{-} & \textcolor{green}{-}& \textcolor{green}{$63.13\%$} & \textcolor{green}{$59.76\%$} & \textcolor{green}{$43.54\%$}\\ & \textcolor{red}{-} & \textcolor{red}{-} & \textcolor{red}{-} & \textcolor{red}{$0.88\%$} & \textcolor{red}{$1.30\%$} & \textcolor{red}{$1.60\%$}\\ & - & - & - & $35.99\%$ & $38.94\%$& $54.87\%$\\ \hline \end{tabular} \caption{Results of the Monte Carlo simulations with 15000 iterations per run (\textcolor{green}{competitive}, \textcolor{red}{collusive} and stakeless games).}\label{MC_table} \end{center} \end{table} \begin{table} \begin{center} \begin{tabular}{|c|c|c|c|} \hline \textbf{Setting} & \textbf{1} & \textbf{2} & \textbf{3}\\ \hline \textbf{Occurrences} & 15 & 19 & 14\\ \textbf{Frequencies} & $31.25\%$ & $39.58\%$ & $29.17\%$\\ \hline \end{tabular} \caption{Number of occurrences and frequencies of the different group settings in our sample data.}\label{data_freq} \end{center} \end{table} \begin{table} \begin{center} \begin{tabular}{|c|c|c|c|} \hline \textbf{Type of game} & \textbf{Competitive} & \textbf{Stake-less} & \textbf{Collusion opportunity}\\ \hline \textbf{Setting 1} & $50.00\%$ & $50.00\%$ & $0.00\%$\\ \textbf{Setting 2} & $57.89\%$ & $36.84\%$& $5.26\%$ \\ \textbf{Setting 3} & $32.14\%$ & $67.86\%$ & $0.00\%$\\ \hline \end{tabular} \caption{Frequencies of types of games as a function of the group setting.}\label{data_type} \end{center} \end{table} \newpage \section{The 2026 World Cup} \textbf{} \\ In 2026, FIFA plans to have 48 qualified teams, distributed into 16 groups of 3 teams. The first two teams in each group (2/3 of teams) then qualify to the knockout stage. The transition from 1/2 to 2/3 teams qualifying has already been tested in the second biggest soccer competition: the UEFA Euro. Between 1996 and 2012, the proportion of teams qualifying for the knockout stage was 1/2 in a 16-team tournament (4 groups of 4 teams), growing to 2/3 for the 2016 UEFA Euro (24 teams divided into 6 groups of 4, first 2 teams per group + 4 best third-ranked teams among all groups qualify). In addition to the change in this ratio, the new FIFA World Cup format will introduce ``passive teams" in the last round of games: in groups of three, one team will have to stand on the side and wait for the result of the last game decide its fate. Let us notice that, for the new UEFA format, the third ranked teams in groups that have played all their matches also experiment a form of passivity since they have to wait for the outcomes of groups that still have not played their last round of games in order to know if they will qualify. We believe that these changes have led to a decrease in the number of competitive games and would, at least partially, explain the decrease in the number of goals scored in the UEFA Euro Cup (from an average of over 2.5 goals per match in the group phases between 1996 and 2012 to a figure of 1.92 goals per match during the 2016 group phase). This natural experiment then suggests that tournaments with a higher proportion of teams qualifying and/or with passive teams lead to less attractive games and more collusion opportunities. In addition to that, the group-match schedule (the order in which the teams play against each other) may potentially have a critical impact on the quality of the last round games as well.\\ In this last section, taking into consideration FIFA's intention to increase the number of participating teams, we evaluate the new 2026 World-Cup format as well as an ``augmented UEFA-style" format. Two other alternatives have been analyzed and can be found in the appendices. We should point out that, in addition to having an impact on the ``quality" of games in the FIFA World Cup itself, the new formats have an impact on the ``quality" of games in the continental qualifying tournaments which have to be adapted in order to accomodate the higher number of qualified teams. Such ``secondary" effects of the new World Cup format are not tackled in this article and can set the path for further research. \subsection{First option: 16 groups of 3} \textbf{} \\ Some FIFA officials are currently proposing a ``48-team, groups of 3" format, in which the best two teams in each group qualify for the next knock-out stage. As the groups contain an odd number of teams, one team per group will not play in the last round of games. There are therefore only three possibilities in the last round: \begin{itemize} \item \textbf{Setting 1}: The weakest team is the passive team in the last round \item \textbf{Setting 2}: The middle team is the passive team in the last round \item \textbf{Setting 3}: The strongest team is the passive team in the last round \end{itemize} We carry out 15000 Monte-Carlo simulations to assess the quality of the last round and the results are found in Table \ref{MC_tablegroupsof3}.\\ Two main remarks can be formulated concerning this format\footnote{Similarly to the groups of 4 format of the World Cup, Monte Carlo simulations are performed by randomly (uniformly) drawing Elo indices from 3 consecutive intervals of similar size covering [a,b].} (same conclusions as in Guyon \cite{Guyon2}). First of all, the existence of a passive team re-introduces the \textit{Gijon scandal}-type of collusion opportunities. If the passive team has not accumulated enough points during the first two games of the group, there is a high chance that a win-loss outcome with low goal difference qualifies both the last teams playing. Consequently, the proportion of collusive games is greater in this format than in the current World Cup format (which has no passive team). In the 4-team per group format, there is ``pressure" from the unknown result in the other last-round game which is played simultaneously. This no longer applies in the 3-team per group format.\\ The second point (probably the most consequential) is that the game scheduling has a critical impact on the quality of games in the last round: it is key that the passive team in the last round be the strongest team. In our model, when the strongest team plays the first two rounds of games, there is a $89\%$ chance of a last game in which both teams will give their best. This probability drops to around $68\%$ when the pot B team is the passive one and to $16\%$ when the weakest team is passive ! Indeed, in our model, if the passive team happen to win both his first two games, or draw both his first two games, or win and draw the other or win and lose the other with a total goal difference of zero, then the two non-passive teams will have to play competitively in the last round game. Such scenarios are very likely to take place when the passive team is the pot A team (top pot).\\ \textbf{In case this World Cup format would be carried forward, in order to preserve the fairness and beauty of the game, FIFA should improve its group randomization draw by implementing a predefined schedule in which the pot A team will be the passive team in the last round}. \begin{table} \begin{center} \caption{16 groups of 3 (with 32 qualified teams): Monte Carlo simulations with 15000 iterations.}\label{MC_tablegroupsof3} \begin{tabular}{|c|ccc|} \hline \textbf{Setting type} & \textbf{1} & \textbf{2} & \textbf{3}\\ \hline \textbf{Interesting games} & $16.07\%$ & $68.40\%$ & $89.09\%$ \\ \textbf{Stakeless games} & $78.62\%$ & $23.45\%$ & $8.17\%$ \\ \textbf{Collusion games} & $5.31\%$ & $8.15\%$ & $2.74\%$\\ \hline \end{tabular} \end{center} \end{table} \subsection{Second option: 12 groups of 4, 32 qualified to the next stage} \textbf{}\\ In the spirit of the new format of the UEFA Euro competition, this format consists of 12 groups of 4 teams each. The first 2 teams of each group qualify in addition to the best 8 third-ranked teams among all groups amounting to a total of 32 qualified teams. \\ First of all, the qualification of the best third-ranked teams creates a considerable informational asymmetry between teams of different groups. For obvious logistic reasons, only a given number of games can be played simultaneously. In the current format of the Euro competition, only the last 2 games of each group are played in parallel. Consequently, the third-ranked team in the first group to play has very few information concerning the number of points and goal difference needed to qualify among the best 8 third-ranked teams. This is not at all the case of the third ranked team of the last group to play since it perfectly knows the number of points and goal difference of the other 11 third-ranked teams allowing it to clearly define its target and guarantee its qualification with certainty. In other words, the ``virtual" group of thirds has teams who play sequentially and the first eleven to play become passive teams when the last one plays. This disadvantage for teams playing first is created before any game is played (and even before the final draw since the pot A team of the first group is usually the host of the competition, giving him an actual disadvantage).\\ Since we focus on improving the competitiveness and reducing collusion opportunities in the last round of games, we decide to assess the last round of games for the last group to play. The proportion of non-competitive games will reach its maximum in this last group to play since its third ranked team knows exactly his target to qualify (the results of the previous groups being known to the teams of the last group). In our model, teams look for a ``sure" qualification and do not distinguish between qualifying in 1st, 2nd or 3rd position (among the 8 best thirds). The results of the simulations can be found in table \ref{second_option_uefa}.\\ Compared to the current format of the World Cup, the number of competitive games decreases significantly while collusion opportunities increase. This confirms our intuition that introducing passive teams (namely the 11 teams in the ``virtual" group of thirds) creates collusion opportunities and should be avoided or reduced by increasing the number of parallel games in the last-round of games\footnote{Having 4 games played in parallel could offer a good compromise between having competitive games, not having overwhelming logistic issues, and maximizing viewership (spectators will have to chosen one out of the four simultaneous games).} The other less intuitive result is that the order of optimal game schedules is inversed compared to what we found in section \ref{current_format}: setting 3 maximizes the number of competitive games while setting 1 minimizes it and leads to collusive games $10\%$ of the time. One explanation for this result is that in setting 1, the pot B team which is likely to be in the 2nd or 3rd position after playing against the strongest and weakest teams of the group, will confront the pot C team in the last round. Pot C team will also likely be in the second or third position since it has played against the strongest and weakest team in its first two games. In this game, a draw or a mild win for the team in the third position may end up qualifying both teams (one qualified among the best 2 teams of the group, the other among the best 8 third-ranked teams). On the other hand, setting 3 witnesses a confrontation between the two strongest teams and the two weakest. It is highly likely that the two weakest (pot C vs pot D) will be in 3rd and 4th position so both teams will give their best to win the game and qualify among the 8 best third-ranked teams. \textbf{The conclusion is that the "UEFA Euro"-format leads to more collusion opportunities than the current FIFA World Cup format and, games should be scheduled according to setting 3 in order to minimize collusive games.} \begin{table} \begin{center} \begin{tabular}{|c|c|c|c|} \hline \textbf{Type of game} & \textbf{Competitive} & \textbf{Stake-less} & \textbf{Collusion opportunity}\\ \hline \textbf{Setting 1} & $34.14\%$ & $56.36\%$ & $9.50\%$\\ \textbf{Setting 2} & $37.60\%$ & $55.63\%$& $6.77\%$ \\ \textbf{Setting 3} & $41.71\%$ & $55.47\%$ & $2.82\%$\\ \hline \end{tabular} \caption{12 groups of 4 with 32 qualified: results of Monte Carlo simulations with 15000 iterations for the last round of games in the last group to play (settings are the same as for the current World Cup format).}\label{second_option_uefa} \end{center} \end{table} \section{Conclusion} \textbf{} \\ This article has presented an assessment method of the competitiveness and attractiveness of the last round of games in the FIFA World Cup group stages. We find that in order to reduce the occurrence of match-fixing opportunities, the tournament structure should be optimized so that most of the matches are ``competitive" as per our classification. Applying this new method, we notice that the scheduling of games, in particular the choice of teams playing each other in the last round, is crucial for obtaining exciting and fair last-round games. Furthermore, our results underline that the introduction of passive teams (teams which do not play during the last round of games) increases significantly collusion opportunities and should be avoided, to the extent possible, by scheduling simultaneous games during the last round of the group stage\footnote{In case the number of teams in the group is odd or due to logistical reasons, it may be impossible not to have any passive team.}. The optimal game schedule depends on the format of the World Cup, but our clear recommendation is that FIFA should drop its current schedule-randomization process in the draw for the group matches. In the current World Cup format, we recommend that the last group games should be pot-A teams against pot-D teams, and pot-B teams against pot-C teams. Scheduling these games in advance has no negative impact on any aspect of the competition (logistics, fairness etc.), but increases the attractiveness and competitiveness of the last round. In the forthcoming 48 teams in groups of 3 format, the ``pot-A'' team should be the passive team in the last round. As a path for future research work,
{ "attr-fineweb-edu": 2.263672, "attr-cc_en_topic": 0, "domain": "arxiv" }
BkiUczQ5qX_BhIhya-wH
\section{Calibrated data representation} \mysection{Contribution.} The calibration estimates and player localization are not easy to handle efficiently. Hence, to encourage their use in subsequent soccer-related works, we propose and release various easy-to-use representations of the calibration data extracted from the previous section. We illustrate these representations in Figure~\ref{fig:main_figure} and describe them in this section. We also discuss their pros and cons. \subsection{Top view image representations} In this section, we provide image representations of the player localization information. We use the calibration of CCBV-SN to generate a synthetic top view of the game containing generic field lines, the players represented by small squares, and the polygon delimiting the portion of the field seen by the camera. We represent that top view in two ways. \mysection{Color composition (CC).} We generate a RGB image where we first set field pixels in black and line pixels in white. Then, we superimpose with white pixels the contour of the polygon of the field seen by the camera. Finally, we represent the players by squares filled with their associated RGB color, overriding previous pixels in case of intersection. \mysection{Binary channels (BC).} We generate an ``image'' composed of 3 binary channels: one for the generic field lines, one for the filled polygon of the field seen by the camera, and one for the players without their color information. \mysection{Pros and cons.} A major advantage of image representations is their interpretability, since a human observer can directly understand relevant information about the game from such top views. Besides, they can be easily processed with convolutional neural networks in deep learning pipelines. As a drawback, they have a relatively large memory footprint compared with the low amount of actionable information that they actually contain. The color composition view has the advantage over the binary channels of keeping track of the color of the players, necessary for team discrimination and tactics analysis. On the other hand, the representation of a player in the binary channels is not influenced by a poor segmentation in the raw image or a color shift due to \eg an occlusion. Also, players located on field lines do not prevent those lines to be encoded properly in their binary channel, while they hide the lines in the color composition. \subsection{Feature vector representation} Inspired by Giancola \etal~\cite{Giancola2018SoccerNet}, we compress our top views as frame feature vectors extracted by pre-trained backbones. This is common practice in deep learning approaches, as universal networks trained on \eg ImageNet have an excellent transfer capability to encode meaningful visual information about any given image. We use top views of 224 $\times$ 224 pixels, with field lines of 4 pixels width and players of 8 $\times$ 8 pixels. We consider two backbones with similar number of parameters, both trained on ImageNet. \mysection{ResNet-34 (RN).} This network has 21.8 million parameters and achieves 73.27\% top-1 accuracy on ImageNet. We use a frozen ResNet-34~\cite{He2016Deep} and collect the feature vectors of dimension 512 in its penultimate layer. \mysection{EfficientNet-B4 (EN).} This more recent network has 19 million parameters and achieves 82.6\% top-1 accuracy on ImageNet. We use EfficientNet-B4~\cite{tan2019EfficientNet}, which yields feature vectors of dimension 1792 in its penultimate layer. \mysection{Pros and cons.} We choose these networks for their good trade-off between performance on ImageNet and inference time. Indeed, they allow for a much faster training of neural networks compared with the top views, as computationally expensive spatial convolutions have already been performed. As a drawback, the features collected from these networks are not interpretable anymore, which may reduce the possibilities of developing explainable models. \subsection{Player graph representation} \mysection{Player graph (PG).} Our third approach consists in encoding per-frame players information in a graph. Each player is represented with a node, whose features are defined by their associated RGB color, their position in real-world coordinates, and the area of the detected bounding box in the image frame. Two players are linked to each other with an edge if their real-world distance is below 25 meters, which we consider sufficient to pass contextual information between the nodes in the graph (\ie the players in the field). \mysection{Pros and cons.} The player graph is a compromise between the compactness of feature representations and the interpretability of top views. Indeed, it explicitly encodes in a compact way the interpretable information that we want to embed in our descriptive features: the players color, their position in the field and their interactions with each other. Contrary to top view images, it does not encode any empty portion of the field, nor considers the field lines that are constant across the videos, which makes the learning focusing more on the interesting player features. The graph convolutional network (see next section) that processes the player graph aggregates features from neighboring players, which helps it understand real-world distances by discarding players further away. Yet, that aggregation does not consider different clusters of neighbors, which could lead to a misunderstanding between teammates and adversaries. \subsection{Action spotting} \begin{itemize} \item We investigate how to use the calibration to improve the action spotting on SoccerNet-v2. \item We compute a top view of the field with the players on it. \item We feed that to our great network (described in detail) \item We have super performances. \item We investigate graph neural networks with players positions in it... still bad at the moment. \end{itemize} \subsection{Contributions} \begin{itemize} \item In this paper, our contributions are the following. \item We release a fast lightweight calibration tool, distilled from a SOTA commercial tool. \item We include the calibration information for the task of action spotting in different ways and show why it is useful. \item We have awesome results on action spotting that can be reproduced and exported on other matches (well, maybe, if they are good enough, as always). \end{itemize} \section{Conclusion} In this paper, we examine the problem of computing, representing, and exploiting the camera calibration information for the large-scale SoccerNet dataset, composed of 500 soccer games. We leverage a powerful commercial tool to generate pseudo ground truths and manage to distill it into a recent deep learning algorithm. As first contribution, we release our distilled network, which is the first public soccer calibration algorithm trained on such a large dataset, along with its calibration estimates for the SoccerNet videos to enrich the dataset. We use our calibration and a player detection algorithm to obtain the player localization in real-world coordinates. To further serve the scientific community, our second contribution is to provide three actionable ways of representing those calibration data: top view images, feature vectors representations, and player graphs. Eventually, we investigate the benefit of using these representations in a deep learning network for the task of action spotting in SoccerNet-v2. Standing for our third contribution, we design an appropriate concatenation of generic video and specific calibration information within the current best network to achieve a novel state-of-the-art performance. \mysection{Acknowledgments.} This work is supported by the DeepSport project of the Walloon Region and the FRIA, EVS Broadcast Equipment, and KAUST Office of Sponsored Research (OSR) under Award No. OSR-CRG2017-3405. \section{Discussion} Blaaaa. \section{Experiments} \mysection{Contribution.} In this section, we first validate with performance metrics the effectiveness of CCBV-SN as calibration algorithm. Then, we leverage our various calibration data representations in the particular use case of the action spotting task in SoccerNet-v2. We build on top of the current best network to achieve a new state-of-the-art performance. \subsection{Validating the camera calibration distillation} \begin{figure} \centering \includegraphics[width=\linewidth]{sections/figures/calib_results.png} \caption{\textbf{Examples} of calibrations obtained with CCBV-SN. Globally, the results are satisfying and allow for an effective use of the calibration for downstream tasks such as action spotting.} \label{fig:calib_results} \end{figure} In order to validate our calibration-based data representations and their use for an action spotting task, we first validate CCBV-SN as camera calibration algorithm. \mysection{Dataset.} The World Cup 2014 dataset~\cite{Homayounfar2017Sports} stands as reference for evaluating soccer camera calibration methods. The test set comprises 186 calibrated images taken from 10 different games in various stadiums, perspectives, lighting conditions, and moments of the day. \mysection{Metric.} Following~\cite{Chen2019Sports, Sha2020End}, for each test image, we compute the entire intersection over union (\textit{IoU entire}) between the top view projections of the field model by the ground-truth camera and by the estimated camera, as well as the IoU restricted to the part of the field actually shown on the image (\textit{IoU part}). For both metrics, we report the mean and the median value across the test images. \mysection{Results.} We report the calibration performances in Table~\ref{tab:calibration-results}. The private teacher achieves the best results on 2 out of 4 metrics, which validates its use as a teacher in our distillation approach. It is topped by Citraro \etal~\cite{Citraro2020Real} on the IoU (entire) metrics, which finetune their method with additional manual annotations on the dataset. In comparison, none of our methods are trained on the that evaluation dataset. Thus we actually measure the generalization capabilities of our teacher and CCBV-SN on a completely new dataset. This evaluation also allows us to quantify the performance drop induced by our distillation procedure. CCBV-SN loses 6 to 12 points in the distillation process, making its performances close to~\cite{Sharma2018Automated}, especially on the IoU (part). This metric is actually the most relevant for us, as our use of the calibration is limited to the visible part of the field for the calibration data representations. Therefore, CCBV-SN is legitimately usable in the rest of our experiments, and is presumably even better on SoccerNet, since it is the dataset on which it has been trained. Some calibration results obtained with CCBV-SN are shown in Figure~\ref{fig:calib_results}. \input{sections/tables/calibration} \subsection{Use case: calibration-aware action spotting} \begin{figure} \centering \includegraphics[width=\columnwidth]{sections/figures/method_calibration.png} \caption{\textbf{Our action spotting pipeline} for the patterned actions. We include calibration information within CALF~\cite{cioppa2020context}, by concatenating frame feature vectors extracted from our various representations. This allows us to mix generic information from the SoccerNet features with player-specific information from the calibration. For each chunk, the network outputs $p$ spotting predictions.} \label{fig:method} \end{figure} In this section, we investigate a possible use case of our calibration representations, by leveraging a state-of-the-art network for the task of action spotting in SoccerNet-v2. \mysection{Dataset.} The action spotting dataset of SoccerNet-v2~\cite{Deliege2020SoccerNetv2} consists in 110,458 action timestamps spread over 17 classes within the 500 complete games of the SoccerNet~\cite{Giancola2018SoccerNet} dataset, with 22,551 actions related to the 100 test games. Each action is annotated with a single timestamp, that must be retrieved as precisely as possible. \mysection{CALF architecture.} We focus on integrating the calibration information along the original SoccerNet features in the Context-Aware Loss Function (CALF) architecture of Cioppa \etal in~\cite{cioppa2020context}. This architecture achieves state-of-the-art performances on the task of action spotting in SoccerNet-v2. As original features, we choose the ResNet features further reduced from 2048 to 512 components by PCA, as they yield the best results both in~\cite{cioppa2020context} and in ~\cite{Giancola2018SoccerNet}, which we also noticed in our preliminary experiments. CALF is composed of three trainable modules: a frame feature extractor, a temporal segmentation module, and an action spotting module. The first one is a convolutional spatio-temporal pyramid (STP) that aggregates the ResNet features across various time scales, and outputs a feature vector of user-defined dimension $d$ per frame. Our goal is to concatenate such features judiciously along frame feature vectors extracted from our calibration representations, as shown in Figure~\ref{fig:method}. The remaining two modules and the training protocol are kept as is to assess the improvement brought by only the calibration information. \mysection{Processing our representations.} Each calibration data representation must be processed appropriately for a seamless integration within the network. We proceed as follows. \textsl{Top views.} We process our top views with our own 3D-convolutional network \textbf{(3D)}. We choose the same structure as the STP module but where the kernels are extended to take into account the extra spatial dimension of the top view compared to the original ResNet features. The output is a $d$-dimensional vector for each frame that gathers the spatial and temporal information of the top view representation. \textsl{Feature vectors.} We investigate two ways of further processing the feature vectors obtained from the pre-trained backbones: (1) we use the trainable STP of CALF to extract $d$-dimensional frame feature vectors \textbf{(STP)}, (2) we fully connect our feature vectors through a trainable layer directly to feature vectors of dimension $d$ followed by a ReLU activation \textbf{(FCL)}. In the second case, we obtain per-frame feature vectors solely based on the raw frame information, without any temporal aggregation. \textsl{Player graph.} We design a graph convolutional neural network \textbf{(GCN)} to extract per-frame features from the player graph. For that purpose, we follow DeeperGCN~\cite{li2020deepergcn}. In particular, we build our architecture with 14 GCN blocks with residual skip connections. We leverage two layers of GENeralized Graph Convolution (GENConv) per block, that aggregate the lifted neighboring nodes using a softmax with a learnable temperature parameter. Then, a max operation across the node pools a global feature for the player graph. This feature is later lifted with a single fully connected layer to the desired dimension $d$. \mysection{Class separation.} Intuitively, the player localization extracted with the calibration can prove more helpful for spotting some classes (\eg penalty) than others (\eg shot off target). Hence, we leverage our domain knowledge to split the 17 action classes of SoccerNet-v2 into two sets: ``patterned'' and ``fuzzy'' actions. We consider an action as ``patterned'' when its occurrence is systematically linked with typical player placements: penalty, kick-off, throw-in (one player outside the field), direct free-kick (player wall), corner, yellow card, red card, yellow then red card (players grouped around the referee for the card-related actions). On the other hand, a ``fuzzy'' action may occur in many different player configurations: goal, substitution, offside, shot on target, shot off target, clearance, ball out of play, foul, indirect free-kick. Given our class separation, we train two networks: one on the patterned classes that uses the calibration information and the original ResNet features, one on the fuzzy classes that only uses those ResNet features. \mysection{Feature fusion.} For the network trained on the patterned classes, we input SoccerNet's ResNet features to the STM, collect $d$-dimensional feature vectors, and concatenate them with our $d$-dimensional vectors extracted by one of the above processing steps. This is illustrated in Figure~\ref{fig:method}. We set $d=$152, which allows us to simply plug a calibration-related branch next to the original branch of CALF working on SoccerNet's ResNet features. The concatenation yields feature vectors of dimension 304 and is performed just before the temporal segmentation module of the whole network. For the network trained on the fuzzy classes, we use SoccerNet's ResNet features only as in CALF, and set $d=304$ after the STM to have the same input dimension for the segmentation modules of the two networks. \mysection{Training.} Following CALF, we process 2-minute video chunks. We extract frame feature vectors as described above, concatenate them when necessary, and input them to a temporal segmentation module, that provides per-class features and per-class actionness scores per frame. This module is trained with a context-aware loss that aggregates the temporal context around the actions. Those features and scores are concatenated and sent to an action spotting module, which provides predicted action vectors for the chunk, containing a localization estimate and a classification per predicted action. An iterative one-to-one matching connects those predictions with ground-truth action vectors, allowing to train the module with an element-wise MSE loss. \mysection{Metric.} As defined in~\cite{Giancola2018SoccerNet}, we measure the action spotting performance with the Average-mAP. First, predicted action spots are said positive when they fall within a margin $\delta$ of a ground-truth timestamp from their predicted class. Then, the Average Precision (AP) is computed from Precision-Recall curves, then averaged over the classes (mAP). Finally, the Average-mAP is the AUC of the mAP obtained at margins $\delta$ varying from 5 to 60 seconds. Given our class separation, we merge the predictions of our two networks before computing the Average-mAP. \mysection{Results.} We achieve our best result with the color composition reduced to frame features by ResNet-34 as calibration data representation, further bridged to $d$-dimensional feature vectors with a fully connected layer. This yields an Average-mAP of 46.8\% on the test set, reported in Table~\ref{tab:ActionSpotting-updated}, the current SoccerNet-v2 action spotting leaderboard. We achieve a novel state-of-the-art performance, outperforming the other methods by a comfortable margin. In particular, we prevail on 15 of the 17 classes, only topped by Vanderplaetse \etal~\cite{Vanderplaetse2020Improved} for kick-offs and penalties. Besides, kick-offs are the only actions for which our performances degrade compared to the original network, most probably because those actions are regularly unshown in soccer broadcasts~\cite{Deliege2020SoccerNetv2}. We illustrate some action spotting results in Figure~\ref{fig:quantitative_action_spotting}. We manage to spot actions that CALF misses, and some false positives of CALF are correctly avoided. On the current open competition of action spotting in SoccerNet-v2, organized on EvalAI, we achieve an Average-mAP of 46.4\% on the private challenge dataset. This validates the generalization capabilities of our network. \input{sections/tables/action-spotting} \begin{figure} \centering \includegraphics[width=\linewidth]{sections/figures/qualitative_spotting.png} \caption{\textbf{Examples} of action spotting results on a game between Manchester United and Chelsea in December 2015. In this case, we spot correctly two more direct free-kicks than the original network, and we rightly avoid predicting a corner around 35 minutes.} \label{fig:quantitative_action_spotting} \end{figure} For completeness, we give additional results with the different combinations of calibration data representation and feature extraction in Table~\ref{tab:ablations}. We see that the color composition with the 3D network and the player graph representation yield performances that are practically equivalent to our best result, while other variants are less effective. Hence, each calibration data representation is able to reach competitive performances. We do not report any result with extracted feature representations from top view images composed of binary channels as they globally yield much lower performances. Finally, fusing features from the top view and the player graph does not appear useful either as these contain essentially the same type of information. \input{sections/tables/all-action-spotting} \section{Calibration and player localization} \begin{figure*} \centering \includegraphics[width=\linewidth]{sections/figures/main_fig_calibration.png} \caption{\textbf{Calibration and player localization representations.} The original SoccerNet dataset (left) provides raw videos of 500 complete soccer games as well as generic per-frame feature vectors. We distill a commercial calibration tool into a recent network architecture on SoccerNet, which we release along all the calibrations. We combine Mask R-CNN player detections with the calibration to provide 3 representations of the calibrated data, thus enriching the dataset with specific player-based information: (a) top view representations, (b) feature vectors representations, (c) a player graph representation. The red boxes, also released, further serve as inputs in neural networks to investigate the usefulness of calibration for the task of action spotting in SoccerNet-v2, leading to a new state-of-the-art performance.} \label{fig:main_figure} \end{figure*} \mysection{Contribution.} In SoccerNet~\cite{Giancola2018SoccerNet}, the frames of the raw videos are subsampled at 2 fps, then transformed into feature vectors, by passing through a ResNet-152~\cite{He2016Deep}, I3D~\cite{carreira2017quo}, or C3D~\cite{tran2015learning} network pre-trained on ImageNet~\cite{deng2009imagenet}, all of which are released with the dataset. Hence, those vectors only encode generic information about the frames. As first contribution, shown in Figure~\ref{fig:main_figure}, we enrich the SoccerNet dataset with actionable camera calibration estimates, along with players and referee localization. Such information provides a soccer-specific insight and is explicitly linked with the game in real-world coordinates. Besides releasing the largest set of calibration estimates to date, we are also the first to deliver a calibration algorithm trained on a large scale dataset such as SoccerNet. For synchronization purposes, we compute the calibration, player and referee localization for the 2-fps-subsampled set of frames considered in SoccerNet. In the following, we make no difference anymore between players and referees, all of which are called ``players'', and we call ``per-frame information'' any information computed for each of those subsampled frames. \mysection{Calibration algorithm.} We base our calibration on the Camera Calibration for Broadcast Videos (CCBV) of Sha \etal \cite{Sha2020End}, but we write our own implementation, given the absence of usable public code. They use as calibration parameterization the homography between the field plane and the image, which is valid under the assumption of a planar field \cite{Hartley2004Multiple}. First, we describe their original algorithm, then we give the details of our changes. The algorithm relies on a dictionary, \ie a set of pairs of artificial field zone segmentations, called ``templates'', and homographies. The dictionary is built in a pre-processing step, according to the camera parameters distribution over the training dataset. Since this distribution is unknown, it is estimated with a clustering algorithm based on Gaussian Mixture Models, that also determines the number of modes necessary to fit the distribution. The mean of each mode corresponds to a homography of the dictionary, that defines a camera perspective from which its corresponding template is generated as an artificial semantic image of the field. CCBV itself consists of three steps, each performed by a specific neural network. First, a zone segmentation of the field is computed with a U-Net architecture~\cite{Ronneberger2015Unet}, where a zone is a field area enclosed by field lines. Second, a rough estimate of the homography between the field plane and the image is obtained. A siamese network~\cite{Bromley1993Signature,Chicco2021Siamese} encodes the zone segmentation and the templates of the dictionary in feature vectors. The homography associated with the template encoding with the shortest $L^2$ distance to the zone segmentation encoding is the rough estimate of the sought homography. Third, this template homography is refined, in two steps. A Spatial Transform Network first regresses the homography between the zone segmentation and the template. Then, the final homography prediction is obtained by multiplying the regressed homography with the template homography, giving the estimated calibration parameters. \mysection{Our training process.} Given the absence of a large-scale corpus of ground-truth camera calibrations in the literature, we opt for a student-teacher distillation approach. We consider a commercial tool~\cite{EVSXeebra} as teacher to generate a dataset of 12,000 pseudo-ground-truth calibrations on the SoccerNet dataset, which we use to train our student calibration algorithm. Our training dataset is 60x larger than the World Cup 2014 dataset~\cite{Homayounfar2017Sports} used in~\cite{Sha2020End} and contains a larger variety of camera viewpoints, making our student calibration network a valuable candidate for universal camera calibration in the context of soccer. In fact, during the creation of the dictionary, more than 1000 modes are found by the clustering algorithm. Besides, during the training phase of the Spatial Transform Network, we notice vanishing gradient issues. To overcome this problem, we first pre-train it with a MSE loss and use leaky ReLU activations instead of ReLUs. After convergence, we compute the calibration estimates of the SoccerNet video frames with our trained calibration network. A binary score about the relevance of the calibration, set to 1 for frames with a plausible estimated calibration, is also computed by our student. This allows to discard cameras views that are not recorded by the main camera, such as close-up views, or public views. We release those estimates along our trained calibration network, which can be used with a wide variety of soccer videos. We denote CCBV-SN our student trained on SoccerNet. \mysection{Player localization.} For each calibrated frame, we use Mask R-CNN~\cite{He2017MaskR} to obtain a bounding box and a segmentation mask per detected person instance. Then, we compute a field mask following~\cite{Cioppa2018ABottom} to filter out the bounding boxes that do not intersect the field, thus removing \eg staff and detections in the public. We use the homography computed by CCBV-SN to estimate the player localization on the field in real-world coordinates from the middle point of the bottom of their bounding box. Finally, we also store the average RGB color of each segmented player to keep track of a color-based information per person. As for the calibrations, we release this raw player-related information. \section{Related work} \mysection{Calibration.} In the context of sports events, camera calibration often benefits from the presence of a field whose layout is specified by the rules of the game. The camera may be parameterized using the full perspective projection model, but also using a homography model. Indeed, the field being most often planar, it is a convenient calibration rig to estimate the homography between the field plane and the image. Hereafter, ``camera calibration'' means the estimation of the intrinsic and extrinsic camera parameters. For soccer, existing methods are assessed on the World Cup 2014 dataset \cite{Homayounfar2017Sports}, which introduces a metric based on the Intersection over Union (IoU) between the reference field model and its predicted deprojection from an image. This work leverages the segmentation of horizontal and vertical lines to derive a set of plausible field poses from the vanishing points, and selects the best field after a branch-and-bound optimization. However, it requires at least two of both vertical and horizontal lines to estimate the vanishing points. Some areas of the field contain few line markings, restricting the practical use of the method to goal areas. Another common approach is to rely on a dictionary of camera views. The dictionary associates an image projection of a synthetic reference field model to a homography used to produce said projection. Each input image is first transformed to resemble a projection of the synthetic field, typically by a semantic segmentation of the field lines \cite{Chen2019Sports,Sharma2018Automated} or of the areas defined by the field lines \cite{Sha2020End}. That segmentation is then associated with its closest synthetic view in the dictionary, giving a rough estimate of the camera parameters, which is eventually refined to produce the final prediction. One drawback of this kind of approach is that the processing time scales poorly with the size of the dictionary. Some applications require a large dictionary, which may become a bottleneck if real-time processing is required. Some other calibration methods rely on tracking algorithms. Lu \etal~\cite{Lu2019PanTilt} use an extended Kalman filter to track the pan-tilt-zoom (PTZ) camera parameters. Citraro \etal~\cite{Citraro2020Real} use a particle filter to track the camera orientation and position. Due to the nature of tracking, these methods are restricted to deal with uncut, single-sequence video streams, making them inappropriate for a dataset of broadcast videos with many discontinuities, as in SoccerNet. Kendall \etal~\cite{Kendall2015PoseNet} introduced the concept of training a neural network to directly predict the camera parameters from an image. This approach was further investigated successfully by Jiang \etal\cite{Jiang2020Optimizing} where the predicted homography is further refined by iterative differentiation through a second neural network that predicts the error. Due to the amount of computation needed in this latter step, this method is quite slow ($0.1$ fps). Sha \etal~\cite{Sha2020End} also use a neural network to refine the camera parameters found within the dictionary for the input image. They use a spatial transform network, trained to predict the homographic correction necessary to align two segmented images. In our work, we opt for the latter method because it does not involve tracking, reports a processing rate of up to 250 fps, and achieves good performances on the World Cup dataset. \mysection{Action Spotting.} The task of action spotting in soccer considered in this work was introduced by Giancola \etal~\cite{Giancola2018SoccerNet} along with the large-scale SoccerNet dataset. The objective is to identify at which moment various salient game actions occur, such as goals, corners, free-kicks, and more. Retrieving such information is valuable for downstream tasks such as camera selection in live game production, post-game soccer analytics, or automatic highlights generation. While detecting players on broadcast images can now be achieved with existing deep learning algorithms~\cite{Cioppa2019ARTHuS, He2017MaskR}, combining spatio-temporal information about their localization to infer the occurrence of game actions remains challenging as it requires a high level of cognition. Besides, in broadcast videos, several cameras are used and important actions are replayed, breaking the continuity of the stream. In SoccerNet~\cite{Giancola2018SoccerNet}, Giancola \etal focus on three types of actions: goals, cards, and substitutions, which are temporally annotated with single anchors to retrieve. Several baselines are proposed, all of which rely either on ResNet~\cite{He2016Deep}, I3D~\cite{carreira2017quo}, or C3D~\cite{tran2015learning} frame features computed at 2 frames per second followed by temporal pooling methods (NetVLAD and MaxPool), with the ResNet features yielding the best results. Several works followed, building on the same set of pre-computed ResNet features. Cioppa \etal~\cite{cioppa2020context} develop a particular loss function that takes into account the context surrounding the actions in the temporal domain. They use it to perform a temporal segmentation of the videos before using a spotting module, achieving state-of-the-art results. Similarly, Vats \etal~\cite{vats2020event} handle the temporal information around the actions with a multi-tower CNN that takes into account the noise due to the single anchor annotation scheme. Tomei \etal~\cite{tomei2020RMS} randomly mask a portion of the frames before the actions to force their network to focus on the following frames, as those may contain the most discriminative features to spot actions. By further fine-tuning the last block of the ResNet backbone, they achieve a strong state-of-the-art results on SoccerNet-v1. Rongved \etal~\cite{rongved-ism2020} directly learn a whole 3D ResNet applied to the video frames on 5-seconds clips. This turns out to be an ambitious approach with moderate results, given the huge volume of data to process from scratch. Vanderplaetse \etal~\cite{Vanderplaetse2020Improved} propose a multimodal approach by including audio features, first extracted with a pre-trained VGG-ish network, then averaged over 0.5 seconds windows and synced with the 2 fps original ResNet features. They are processed in parallel streams before undergoing a late fusion, yielding the best results in several action classes. Besides those works, the literature is rich in papers using either small custom datasets, such as ~\cite{fakhar2019event,jiang2016automatic}, or focusing on event recognition from pre-cut clips and selected frames rather than spotting actions in untrimmed videos, such as~\cite{khan2018soccer,Khan2018Learning,khaustov2020recognizing}, or even targeting a single class, such as goals~\cite{Tsagkatakis2017GoalED}. In this work, we tackle the large-scale action spotting task of SoccerNet-v2, the extension of SoccerNet proposed by Deli{\`e}ge \etal~\cite{Deliege2020SoccerNetv2}. It covers 17 classes of actions, annotated for the 500 untrimmed SoccerNet games, and constitutes the most appropriate public benchmark for research on action spotting in soccer. \section{Introduction} \begin{figure} \centering \includegraphics[width=\linewidth]{sections/figures/Graphical_Abstract.png} \caption{\textbf{Overview.} We compute and release the camera calibration parameters along player localization in real-world coordinates for the 500 soccer games of the SoccerNet dataset, we generate various types of calibration-based data representations, and we leverage them for the task of action spotting in SoccerNet-v2.} \label{fig:graphical_abstract} \end{figure} Soccer is often regarded as one of the most unifying activities worldwide, with thousands of professionals entertaining millions of amateurs. Such a large audience makes soccer a very lucrative business, generating billions of dollars of revenue each year from broadcast events~\cite{statista}. The audiovisual data recorded during the games hides valuable insights about the players positions, the tactics, the strengths and weaknesses of each team. Hence, it is important for clubs and coaches to stay at the top of the data analytics wave, and for the fans, the data can be leveraged to provide customized services, such as personalized replays or enhanced player and game statistics. However, many general challenges of computer vision in sports have to be faced~\cite{moeslund2014computer,thomas2017computer}. Besides, the amount of data to process is so large that automated tools need to be developed. This explains the recent rise in deep learning algorithms to perform various tasks such as action spotting~\cite{cioppa2020context,Deliege2020SoccerNetv2,Giancola2018SoccerNet}, player counting~\cite{cioppa2020multimodal} and tracking~\cite{Hurault2020Self,manafifard2017asurvey}, ball tracking~\cite{kamble2019adeep}, tactics analysis~\cite{Suzuki2019Team}, pass feasibility~\cite{Sangesa2020UsingPB}, talent scouting~\cite{decroos2019actions}, game phase analysis~\cite{Cioppa2018ABottom}, or highlights generation~\cite{agyeman2019Soccer,sanabria2019adeep}. In this work, we investigate the topic of camera calibration for researchers in computer vision focused on soccer. Camera calibration serves as a bridge between the images recorded and the physical world. It allows to project any point located on the field of the recorded frame to its real-world coordinates on a plane of the actual soccer field. It can thus provide knowledge about the part of the field recorded by the camera or the localization of the players on that field. One of the main commercial uses of camera calibration is the insertion of graphical elements in augmented reality. Inserting graphical elements may be used to ensure that the rules of the game are respected, such as automatic offside or goal line technologies \cite{EVSXeebra}. However, most common applications aim to improve the viewer experience by fancier storytelling and with game analytics \cite{VizLibero}. Given the value of camera calibration tools, it is not surprising that the best methods belong to private companies. This prevents scientific research on that topic to flourish at large scale. For that reason, we leverage a powerful commercial tools~\cite{EVSXeebra} to train a neural network on the large-scale SoccerNet dataset~\cite{Giancola2018SoccerNet}, and we release the latter to the community, along with calibration estimates for the 500 complete games available. Furthermore, we propose 3 different ways of representing the player localization in real-world coordinates obtained from the camera calibration: a top view image of the game, a feature representation, and a player graph. From an application perspective, we investigate the use of calibration-related information for the task of action spotting in SoccerNet-v2~\cite{Deliege2020SoccerNetv2}. Those contributions are illustrated in Figure~\ref{fig:graphical_abstract} and further outlined below. \mysection{Contributions.} We summarize our contributions as follows. \textbf{(i) Calibration for SoccerNet.} We provide calibration estimates and player localization for the 500 soccer games of the SoccerNet dataset, and we release the first calibration algorithm trained on such a large-scale soccer dataset. \textbf{(ii) Data representations.} We provide top view image-based, compressed feature-based, and player graph-based representations of the calibration data and player localization. \textbf{(iii) SOTA on action spotting in SoccerNet-v2.} As use case, we investigate the use of these representations in a state-of-the-art network for the action spotting task of SoccerNet-v2 and we improve its performances.
{ "attr-fineweb-edu": 2.050781, "attr-cc_en_topic": 0, "domain": "arxiv" }
BkiUbVM5qsBB3Od3ggUs
\section{Introduction} Video analysis is a commonly used method in many sports disciplines in order to assess the performance and analyze the technique as well as the tactics of the athletes. In individual sports, the position of the body and the sports equipment, if any is used, is at the center of interest. With these measurements, coaches can derive evaluations of actions, body posture, speed, etc., of the athlete. For example, ski jumpers are interested in body and ski angles measured relative to the flight trajectory. \begin{figure}[t] \begin{subfigure}{0.45\linewidth} \centering \includegraphics[height=4.2cm]{img/visualizations/2_00537} \end{subfigure} \hfill \begin{subfigure}{0.45\linewidth} \centering \includegraphics[height=4.2cm]{img/visualizations/8_26458} \end{subfigure} \hfill \caption{Two detection results of arbitrary keypoints on the limbs and skis of ski jumpers using our model, visualized with four equally spaced lines to both sides of each limb including the outer boundary in pure color and the central line in white and four equally spaced lines on the skis with a color gradient from one side to the other.} \label{fig:example_prediction} \vspace{-0.4cm} \end{figure} In order to automate the detection of the desired keypoints, 2D Human Pose Estimation (HPE) methods can be used. Their usage reduces the time consumed for video analyses. Therefore, performance analyses are available to more athletes to improve their training outcome. Since annotating a large amount of videos is usually infeasible in the sports domain due to time and money constraints, the datasets for specific sports disciplines are typically very small and comprise only the keypoints that are most important for the analyses. For sports disciplines with human poses that are closely related to standard human activities and keypoints that are common, this problem is solvable with transfer learning. However, images of ski jumpers differ from images of everyday activities, and detecting sports equipment such as skis adds a new complexity. These challenges are reasons why automated keypoint detection is usually bounded to a limited set of standard keypoints. However, the estimation of arbitrary keypoints could open the possibility for more advanced video analyses. The most common problem of 2D HPE is to estimate the location of a standardized, predefined set of keypoints for each human. A typical size of a keypoint set is around 15 to 30. Often, CNNs are used for the keypoint detection. They involve a backbone network to extract features of the images and a small head network that estimates the location of each predefined standard keypoint. A typical method to predict the keypoint locations is to use a 2D heatmap as the output for each keypoint. Adding additional keypoints would add an additional output channel and require retraining the network head, which makes this approach infeasible for the detection of arbitrary keypoints. Therefore, we use an approach based on a Vision Transformer (ViT) \cite{visiontransformer} architecture. For 2D HPE, a common ViT architecture is TokenPose \cite{tokenpose}. This method appends a learnable token to the sequence of input image tokens and generates the heatmaps with a small Multi-Layer Perceptron (MLP) from the output of the learnable tokens after the last Transformer layer. Our approach uses the vectorized keypoint approach \cite{ludwig2022recognition} for TokenPose with a slight adaptation for the skis. This approach requires a segmentation mask for every image in order to detect arbitrary points on the limbs of humans. Since many images of ski jumpers are too far from the domain of common segmentation models, we are only capable of collecting a few segmentation masks of body parts that are partly correct and even fewer of the skis. The reason is that we avoid to costly annotate the images by hand but use existing segmentation models \cite{densepose, detectron2} to obtain the masks. We propose and analyze different methods to train a model with this small amount of segmentation masks. Results of our model can be seen in Figure \ref{fig:example_prediction}. The contributions of this work can be summarized as follows: \begin{itemize} \item We propose an adapted representation for the vectorized keypoint query tokens introduced by \cite{ludwig2022recognition} in order to detect arbitrary keypoints on the skis of ski jumpers. \item Further, we improve their model such that keypoint outputs are independent of the number of keypoint tokens in the input sequence. \item We release a new dataset with 2867 annotated ski jumpers including 13 keypoints on the body and 4 keypoints on the skis together with 424 partly correct segmentation masks sampled from 14 hours of TV broadcast videos during 10 competitions. The dataset is available here: \url{https://www.uni-augsburg.de/en/fakultaet/fai/informatik/prof/mmc/research/datensatze/} \item We analyze different methods to train a model with only a few partly correct segmentation masks such that it is capable of estimating arbitrary keypoints on limbs and skis of ski jumpers while maintaining similar performance on the standard keypoint set. Our best approach is available here: \url{https://github.com/kaulquappe23/arbitrary-keypoints-skijump} \end{itemize} \section{Related Work} Analyzing athletes in video footage of training or competition scenarios is common for professional athletes in most sports disciplines. This includes trajectory analysis, e.g. the reconstruction of the 3D trajectory of a badminton shuttle from monocular videos proposed by Liu et al. \cite{badminton}. Furthermore, using the players' trajectories, Wei et al. \cite{wei2015predicting} detect the basketball location from monocular basketball video footage. In individual sports, the poses of athletes are of great importance. Using the swimming style as an additional input, Einfalt et al. \cite{einfalt2018activity} estimate swimmers' poses and improve the resulting poses by using a pose refinement over time. Furthermore, computer vision is also used in different ski disciplines, mostly for human pose and ski estimation. Wang et al. \cite{wang2019ai} propose a pose correction and exemplar-based visual suggestions to freestyle skiers using human pose estimation. Ludwig et al. \cite{ludwig2020robust} calculate the flight angles for ski jumpers during their flight phase by using robust estimation methods for human and ski pose recognition. Stepec et al. \cite{skijumpstyle} use estimated poses of ski jumpers and their trajectories in order to automatically generate the style score. As already mentioned, 2D HPE is an important method among computer vision based analysis applications in sports. The best scores on common benchmarks like COCO \cite{coco} or MPII Human Pose \cite{mpii} are still based on CNNs \cite{huang2020joint, bulat2020toward}, although Transformer \cite{transformer} based architectures are increasing in popularity. Among CNN architectures, the High Resolution Net (HRNet) \cite{hrnet} is often used, like in \cite{huang2020joint}. It differs from encoder-decoder architectures that are used in former best performing backbones like \cite{maskrcnn, hourglass, simplebaselines} as it keeps large resolution feature maps in the whole network and uses a continuous information exchange between different resolutions. Among Transformer \cite{transformer} based HPE approaches, TokenPose \cite{tokenpose} is usable without any convolutions, but using the first stages of an HRNet as feature extractor leads to its best results. TokenPose uses a ViT \cite{visiontransformer}, which embeds small image or feature patches to 1D token vectors serving as the input sequence to the Transformer. Apart from the image patches, learnable keypoint tokens are appended to the input sequence and their output is then transformed through an MLP to heatmaps. Ludwig et al. \cite{ludwig2022detecting} adapt this approach to estimate arbitrary keypoints that lie on the straight line between the fixed keypoints of a dataset and further improve the method to detect freely selected keypoints on the limbs of humans \cite{ludwig2022recognition}. Shi et al. \cite{end2endhpe} propose the first fully end-to-end multi person pose estimation framework based on Transformers. Zeng et al. \cite{tokenclusteringtransformer} cluster the tokens of the Transformer such that less important image areas like the background are represented by less tokens than the humans. \section{Dataset}\label{sec:dataset} \begin{figure*}[htb] \begin{subfigure}{0.32\linewidth} \centering \includegraphics[height=3cm]{img/segmasks/0_36624} \end{subfigure} \hfill \begin{subfigure}{0.32\linewidth} \centering \includegraphics[height=3cm]{img/segmasks/3_15242} \end{subfigure} \hfill \begin{subfigure}{0.32\linewidth} \centering \includegraphics[height=3cm]{img/segmasks/1_18264} \end{subfigure} \hfill \\ \begin{subfigure}{0.32\linewidth} \centering \includegraphics[height=3cm]{img/segmasks/3_32721} \end{subfigure} \hfill \begin{subfigure}{0.32\linewidth} \centering \includegraphics[height=3cm]{img/segmasks/2_83774} \end{subfigure} \hfill \begin{subfigure}{0.32\linewidth} \centering \includegraphics[height=3cm]{img/segmasks/3_04531} \end{subfigure} \hfill \caption{Example images from our dataset. The images are darkened and the segmentation masks (that are sometimes only partly correct or incomplete) are visualized with an overlay. Annotated keypoints are displayed with white circles.} \label{fig:dataset} \vspace{-0.1cm} \end{figure*} We collect broadcast TV footage from 10 skijump competitions available on YouTube in order to provide a publicly available dataset for benchmarking arbitrary keypoint detection in ski jumping images. Each video consists of 24 to 62 individual jumps with a total of 370 jumps. We annotate at maximum 8 frames per jump to have a broad diversity of jumps in the dataset. We select images during in-run and during the flight until the moment right before the landing. Over 80\% of the images correspond to the flight phase. The dataset consists of images of various quality and lighting conditions, male and female athletes, and various perspectives of the ski jumpers. We annotate frames during the slowmotion replays as well, since their fidelity is often higher. We include the information if a frame was collected during a slowmotion replay in the dataset. Furthermore, the athlete's names provided in the TV broadcast were collected and added to the dataset. We split the dataset in a train, test and validation subset such that each athlete is only present in one subset. Our dataset consists of 2867 annotations: 2159 for training, 148 for validation, and 560 for testing. The annotated keypoints are head, left/right shoulder, left/right elbow, left/right wrist, left/right hip, left/right knee, left/right ankle, left/right ski tip, left/right ski tail. We use the detectron2 \cite{detectron2} framework to generate segmentation masks for our dataset. In a first step, we use DensePose \cite{densepose} to obtain segmentation masks of the body parts. Since images of ski jumpers are far from the domain of DensePose, most of the masks are completely or partly wrong. We select all masks that are mostly correct and discard the other ones, which results in 424 images. As detectron2 is also trained to segment skis, we feed the remaining images through an instance segmentation model in the second step. However, only a small proportion of skis is detected, and even less skis are detected correctly. A second look shows that some skis are detected, but wrongly classified as snowboards, surfboards, etc. Hence, we select and aggregate all masks that belong to skis by hand and split the ski masks in left and right ski. In many cases, only one ski is detected and/or only parts of a ski are contained in the mask. Some example images are displayed in Figure \ref{fig:dataset}. Segmentation masks of the head, torso, left/right upper arm, left/right forearm, left/right hand, left/right thigh, left/right lower leg, left/right foot, and left/right ski are contained in the dataset: 326 segmentation masks in the train subset, 81 in the test subset and only 17 in the validation subset. Because these are too few masks for profound decisions, we coarsely label additional images with the body parts that are of interest for our research (limbs and skis), such that the validation set consists of 46 images. \section{Method} \subsection{Architecture Design} The basis of all models used in this work is the TokenPose-Base architecture \cite{tokenpose}. It is a combined convolutional and Transformer architecture. It uses the first three stages of an HRNet \cite{hrnet} as a first feature extractor. The resulting features of the branch with the highest resolution are then split into feature patches and converted to visual tokens by a linear projection. These tokens are fed to a ViT \cite{visiontransformer}. All methods proposed in this paper are also usable with any other TokenPose variant. \subsubsection{Generation of Ground Truth Keypoints} \begin{figure*}[htb] \begin{subfigure}{0.32\linewidth} \centering \includegraphics[height=3cm]{img/kp_gen/3_93741} \end{subfigure} \hfill \begin{subfigure}{0.32\linewidth} \centering \includegraphics[height=3cm]{img/kp_gen/3_44665} \end{subfigure} \hfill \begin{subfigure}{0.32\linewidth} \centering \includegraphics[height=3cm]{img/kp_gen/2_20890} \end{subfigure} \hfill \caption{Examples for the ground truth keypoint generation process on skis. $p$ is visualized in green, the orthogonal line in white, $c_1$ and $c_2$ in yellow and the generated point in red. The middle and right image show that the projection point can be located outside of a ski mask.} \label{fig:kp_gen} \vspace{-0.3cm} \end{figure*} We follow the strategy presented in \cite{ludwig2022recognition} to generate labels for arbitrary keypoints on the limbs. At first, a random point is selected on the straight line between two keypoints enclosing a body part, called the projection point $p$. Second, a line orthogonal to this straight line through $p$ is created. This line has two intersection points $c_1$ and $c_2$ with the border of the segmentation mask of the corresponding body part. Now, a random point $r$ either between $c_1$ and $p$ or between $c_2$ and $p$ is selected such that both sides are equally likely and more points lie close to the intersection points, as these keypoints are more difficult for the model to detect. However, this method is not directly applicable for arbitrary points on skis because the straight lines between ski tips and ski tails do not necessarily lie entirely within the segmentation mask of the skis, depending on the perspective and the bending of the skis. As a consequence, $p$ might also lie outside the segmentation mask of the ski. See the middle and right images of Figure \ref{fig:kp_gen} for examples. Hence, we randomly select a point $r$ between $c_1$ and $c_2$, with a high probability that the point is located close to one of the intersection points and a lower probability that it lies in the middle, as evaluations show that it is harder for the model to learn the keypoints close to the boundary. \subsubsection{Keypoint Query Tokens}\label{sec:keypoint_token} TokenPose is only able to detect the fixed keypoints defined for each dataset. For each keypoint, it learns a token that is appended to the sequence of visual tokens created by extracting features from the input image or the input image directly. As shown in \cite{ludwig2022detecting}, these tokens exhibit no correlation. Therefore, it is necessary to redesign the tokens to control their meaning. In order to represent arbitrary tokens on the segmentation masks, we use the \emph{vectorized keypoints} approach presented in \cite{ludwig2022recognition} for the limbs and adapt it for skis. Hence, a keypoint vector $v^p$ and a thickness vector $v^t$ are designed and converted via a learned linear projection to a vector of half of the embedding size used in the ViT. Then, $v^p$ and $v^t$ are concatenated to a single keypoint query token. All keypoint query tokens are appended to the sequence of visual tokens and then fed jointly through the Transformer network. A positional encoding is added only to the visual tokens after each Transformer layer. After the last layer, a small MLP with shared weights is used to convert the keypoint query tokens to heatmaps. For a dataset with $n$ keypoints, the projection point is encoded as a vector $v^p \in \mathbb{R}^n$. Let $k_i$, $k_j$ be the keypoints enclosing the body part. Then, each projection point $p$ can be formalized as $p = \alpha k_i + (1-\alpha)k_j$ and $v^p$ is created as \begin{equation} v^p_h =\left\{\begin{array}{ll} 1-\alpha, & h = j\\ \alpha, & h = i\\ 0, & h \neq i \land h \neq j\end{array}\right . h = 1, ..., n \end{equation} If a standard keypoint should be detected, $\alpha = 1$. The position of an arbitrary keypoint $r$ is now encoded relative to $p$ which we call thickness. If $r$ is a point on a limb, it can be formalized as $r = \beta p + (1-\beta) c_{1/2}$, with $c_{1/2}$ being the intersection point closer to $r$. If $r$ is a point on the skis, it can be formalized as $r = \beta c_1 + (1-\beta) c_2$. Furthermore, we define the thickness vector $v^t \in \mathbb{R}^3$ as \begin{equation} v^t =\left\{\begin{array}{ll} \left(1-\beta, \beta, 0 \right)^T , & r \text{ is on limb} \land c_{1/2} = c_1\\ \left(0, \beta, 1-\beta \right)^T , & r \text{ is on limb} \land c_{1/2} = c_2\\ \left( \beta, 0, 1- \beta\right)^T , & r \text{ is on ski} \end{array}\right. \end{equation} For standard points on limbs \textbf{and} skis, $(0, 1, 0)^T$ is used. \subsubsection{Attention Targets}\label{sec:attention} \begin{figure*}[htb] \begin{subfigure}{0.19\linewidth} \centering \includegraphics[height=3.4cm]{img/attention/2_20648_1} \caption{Adapted att. (ours)} \end{subfigure} \hfill \begin{subfigure}{0.19\linewidth} \centering \includegraphics[height=3.4cm]{img/attention/2_20648_7} \caption{All points} \end{subfigure} \hfill \begin{subfigure}{0.19\linewidth} \centering \includegraphics[height=3.4cm]{img/attention/2_20648_5} \caption{50 points} \end{subfigure} \hfill \begin{subfigure}{0.19\linewidth} \centering \includegraphics[height=3.4cm]{img/attention/2_20648_3} \caption{10 points} \end{subfigure} \hfill \begin{subfigure}{0.19\linewidth} \centering \includegraphics[height=3.4cm]{img/attention/2_20648_4} \caption{50 points w/o masking} \end{subfigure} \hfill \vspace{-0.2cm} \caption{Examples for model detections depending on the number of keypoint query tokens per model call. The images show four equally spaced lines regarding the thickness on each body part. For the limbs, the projection line is colored white with a color gradient to the edges. For the skis, the color gradient is from one side to the other. The keypoint query tokens are identical for all images. Image (a) is the result for the adapted attention, independent of the number of keypoint queries per model execution and without random sampling. Images (b) - (d) use the original attention with random sampling like \cite{ludwig2022recognition}, image (e) without random sampling. In image (b), all keypoints are computed with one model execution. In image (c) and (e), only 50 points are computed in one inference step and in image (d), 10 points are computed at once.} \label{fig:attention_examples} \vspace{-0.3cm} \end{figure*} Evaluations show that the method presented in \cite{ludwig2022recognition} works well if the number of keypoint query tokens used during inference is similar to the number of keypoint query tokens used during training. If a lot more tokens are used, the detection performance decreases. See Figure \ref{fig:attention_examples} for some examples. Thus, the model output for one keypoint query affects the other queries, which is an undesired behavior. The reason for this behavior is the attention mechanism. In TokenPose, the attention of layer $i+1$ is calculated as \begin{equation} A(L^{i+1}) = softmax(\frac{L^iW_Q(L^iW_K)^T}{\sqrt{d}})(L^iW_V) \end{equation} where $L^i = (T^i_{vis}, T^i_{kp})$ are the visual and keypoint query tokens of the previous layer, $W_Q, W_K, W_V \in \mathbb{R}^{d\times d}$ are the learnable parameters to generate the queries, keys and values and $d$ is the dimension of the tokens. Hence, the attention is calculated between all tokens, so there is an information flow from the keypoint query tokens to the visual tokens. Therefore, the keypoint query tokens have an influence on each other directly and through the visual tokens. In TokenPose, this is a desired behavior, as always the same keypoints are detected and the information of other keypoint tokens can help to detect occluded keypoints \cite{tokenpose}. In \cite{ludwig2022detecting}, it is observed that the detection performance is decreasing if the keypoint tokens corresponding to the standard keypoints are always present during training, but left out during inference. Their solution is to include a random sampling and permutation of the keypoint query tokens, but this does not solve the problem of the undesired influence of the keypoint tokens on each other. \begin{figure}[b] \vspace{-0.4cm} \centering \begin{subfigure}{\linewidth} \centering \includegraphics[width=\linewidth]{img/attention} \end{subfigure} \hfill \vspace{-0.3cm} \caption{Visualization of the information flow in our adapted attention modules. The attention is computed such that only the visual tokens serve as keys and values. Hence, the visual tokens exchange information and keypoint tokens aggregate information from the visual tokens. The keypoint tokens do not influence other tokens. } \label{fig:attention} \end{figure} Therefore, we adopt the attention mechanism according to \begin{equation} \widehat{A}(L^{i+1}) = softmax(\frac{L^iW_Q(T^i_{vis}W_K)^T}{\sqrt{d}})(T^i_{vis}W_V) \end{equation} which is also visualized in Figure \ref{fig:attention}. The keypoint tokens serve only as the queries during the attention and the visual tokens as queries, keys and values. This strategy is similar to \cite{crossattention}. The information flow in the Transformer network is now restricted within the visual tokens and from visual tokens to keypoint tokens. Hence, the position of a detected keypoint is only dependent on the image and independent of the other keypoints that should be detected at the same time. This is the desired behavior. Furthermore, the dimension of the softmax is now fixed to the number of visual tokens and independent of the number of keypoint tokens. \subsection{Training Strategies} As a baseline, we train a model with our adaptations on the images with available segmentation masks. As we only have a few masks available and the train subset is thus small, this model underperforms regarding standard keypoints. Therefore, we evaluate different strategies for including the full dataset in the arbitrary keypoint training. One approach could be the finetuning of a segmentation net with our segmentation masks in order to generate segmentation masks for all images. However, this approach is infeasible as the segmentation masks of our dataset are only partially complete and only partially correct. Finetuning a segmentation net on these masks would not let the model learn useful masks. This is especially the case for skis because many images have annotations for only one, no ski, or only parts of the skis. For a direct training on arbitrary points, this is not a problem. Arbitrary points are only generated on available (possibly partial) segmentation masks. This does not deteriorate the model's performance. The only challenge are segmentation masks with incorrect borders, since this leads either to a wrong calculation of the intersection points and a mismatch of the thickness vector and the generated point or to a generated point that does not lie on the limb/ski. However, our experiments show that the model can cope with this challenge and learns the correct points in most cases, because the number of correctly created points is by far larger than the number of false points. \subsubsection{Combined Training of Arbitrary and Standard Keypoints}\label{sec:train_strategy} The most straightforward approach is to use all available images for the training on the standard keypoints and the segmentation masks for the arbitrary keypoints. This strategy increases the performance on the standard keypoints a lot, but also deteriorates the ability to detect arbitrary points. Another technique includes the detection of projection points as presented in \cite{ludwig2022detecting}. In order to generate arbitrary keypoints on the straight line between two keypoints, which we call projection points, segmentation masks are not necessary. Hence, we can use all images for training on standard and projection points and jointly train with arbitrary points on the images with available segmentation masks. \subsubsection{Pseudo Labels}\label{sec:pl} The aforementioned approaches include the full train set during training, but the model still learns to detect arbitrary points only from a small subset. Therefore, we also experiment with pseudo labels. This means that we use a trained model in order to generate labels of arbitrary points for all images. With this strategy, it is possible to train a model on arbitrary keypoints with the whole training set. After convergence on the pseudo labels, a finetuning is executed with arbitrary points generated from the available segmentation masks, because these ground truth keypoints are more precise than the pseudo labels. Another strategy is to add the pseudo label training as a third part to the already described combined training approaches. Looking at visualizations of the generated pseudo labels reveals some wrong pseudo labels. Furthermore, we observe that the network's scores have no direct relation to the correctness of the pseudo labels. Hence, we use another technique to filter the labels. First, we obtain the model's predictions from the original image and some augmented variants. Second, we remove all keypoints with low scores, since a low score should indicate that a keypoint is not visible and augmentations like rotations might move the keypoint outside of the augmented image. We remove all keypoints from the pseudo labels with too few detections. Next, we transform the detections belonging to the augmented versions back to their location in the original image. Then, we calculate the standard deviation of these keypoints relative to the torso size. We use the standard deviation as the confidence measure instead of the network score. We select the pseudo labels with the least standard deviation per body part, that the number of pseudo labels per body part is equal in the pseudo label dataset. This approach is similar to \cite{ludwig2021self}. \section{Experiments} The backbone for all experiments is TokenPose-Base \cite{tokenpose} with three stages of an HRNet-w32 \cite{hrnet} for feature extraction. We crop the ski jumpers and resize all images to $256 \times 192$. Cropping is performed by creating the tightest bounding box containing all standard keypoints and enlarging its width and height by 20\% to all sides. Visual and keypoint tokens are of size $192$. We use 2D sine as positional encoding. After the Transformer layers, we use a two-layer MLP to obtain heatmaps of size $64 \times 48$ from the keypoint tokens. Keypoints coordinates are obtained from the heatmaps via the DARK \cite{dark} method. We pretrain our models with the COCO \cite{coco} dataset, either with TokenPose - only on the standard keypoints - or with the vectorized keypoints approach using arbitrary keypoints on the limbs. Additional to the model that is being trained, we keep an Exponential Moving Average (EMA) of the model's weights with an EMA rate of 0.99. The EMA model behaves like a temporal ensemble and achieves slightly better results than the original model. Therefore, we evaluate all experiments with the EMA model. As described in Section \ref{sec:dataset}, we evaluate on the test set with 560 images in total and 81 images with segmentation masks. We generate 200 arbitrary keypoints with a fixed seed for each image during the arbitrary keypoint evaluation, resulting in 16,200 total keypoints. \subsection{Evaluation Metrics} The first evaluation metric that we use is the Percentage of Correct Keypoints (PCK). A keypoint is considered as correct according to the PCK at a certain threshold $t$, if the euclidean distance between the ground truth keypoint and the detected keypoint is less or equal than $t$ times the torso size. We use the euclidean distance between right shoulder and left hip joint as the torso size and a threshold of 0.1. For this dataset, this threshold corresponds to approx. 6cm. \begin{table*}[htb] \begin{center} \resizebox{0.99\linewidth}{!}{ \begin{tabular}{c|c|cccc|cccc} \toprule Method & Pretraining & Std. KP & Seg. M. & Proj. KP & PL & Std. PCK & Full PCK & MTE $\downarrow$ & PCT $\uparrow$\\ \midrule TokenPose & Std. KP & \checkmark & & & & 77.2\\ TokenPose & VK & \checkmark & & & & 75.4\\ \midrule Vectorized Keypoints & VK & & \checkmark & & & 52.7 & 88.1 & 18.2 & \textbf{77.7}\\ \midrule Std. \& Seg. & VK & \checkmark & \checkmark & & & 77.1 & 90.1 & 18.3 & 76.6\\ Seg. \& Proj. & VK & & \checkmark & \checkmark & & 76.5 & \textbf{91.8} & \textbf{17.5} & \textbf{77.7} \\ Seg. \& Proj. & Std. KP & & \checkmark & \checkmark & & \textbf{77.8} & 91.5 & 18.0 & 76.4 \\ all PL & VK & & & & all &76.3 & 90.4 & 18.7 & 76.0 \\ finetune all PL & VK & & \checkmark & & all & 76.3 & 90.4 & 18.9 & 75.2\\ all PL \& Std.\& Seg. & VK & \checkmark & \checkmark & & all & 76.4 & 90.9 & 19.2 & 74.7 \\ all PL \& Proj. \& Seg. & VK & & \checkmark & \checkmark & all & 76.9 & 91.0 & 18.4 & 75.6\\ 80\% PL & VK & & & & 80\% & 76.3 & 90.8 & 18.7 & 74.8\\ finetune 80\% PL & VK & & \checkmark & & 80\% & 76.1 & 91.3 & 18.3 & 75.8 \\ 80\% PL \& Std.\& Seg. & VK & \checkmark & \checkmark & & 80\% & 77.3& 90.7 & 19.4 & 73.4\\ 80\% PL \& Proj.\& Seg. & VK & & \checkmark & \checkmark & 80\% & 76.7& 91.4 & 18.1 & 75.3 \\ \bottomrule \end{tabular} } \vspace{-0.5cm} \end{center} \caption{Recall values for the skijump test set in \% at PCK@$0.1$. The second column displays the pretraining, \emph{Std. KP} refers to the pretraining with the standard keypoints, \emph{VK} to the pretraining with the vectorized keypoints approach, both on the COCO dataset. The third table section shows the used training steps. \emph{Std. KP} means training on the standard keypoints, usable on the whole training set. \emph{Seg. M.} refers to the training on arbitrary keypoints with available segmentation masks. \emph{Proj. KP} stands for the training on the projection keypoints which is also usable on the whole training set and \emph{PL} refers to the pseudo labels, whereby either all pseudo labels are used or the 80\% with the least standard deviation during filtering. The first column of the last table section displays the average PCK of the standard keypoints, evaluated on the test set containing images with and without segmentation masks. The average PCK score including the arbitrary points is given in the second column, the third column shows the MTE and the last column the PCT at threshold $0.2$. These scores are evaluated on the test set with available segmentation masks. } \label{tab:results} \vspace{-0.4cm} \end{table*} We use the terminology \emph{thickness} for the distance between a keypoint and its projection point. As described in \cite{ludwig2022recognition}, the PCK is not sufficient to measure if the model predicts the thickness of the arbitrary points correctly. A model predicting only the projection points would achieve a high PCK score although the thickness might be wrong, because the projection points are close enough to the ground truth points. Therefore, like in \cite{ludwig2022recognition}, the Mean Thickness Error (MTE) and the Percentage of Correct Thickness (PCT) are used as additional metrics. Let $g$ be the ground truth keypoint, $d$ the detected keypoint, $p$ the projection point, $c_g$ the intersection point closer to $g$ and $c_d$ the intersection point closer to $d$. Then, for keypoints on the limbs, the desired thickness $t_{g}$ and the estimated thickness $t_d$ are calculated as \begin{equation} t_{g} = \frac{||p - g||_2}{||p - c_g||_2},\;t_d =\left\{\begin{array}{ll} \frac{||p - d||_2}{||p - c_g||_2}, &c_g = c_d \\[8pt] \frac{||p - d||_2}{||p - c_d||_2} + t_g, &c_g \neq c_d\end{array}\right . \end{equation} The thickness error $e$ is defined as $e = |t_g - t_d|$. Hence, the maximum thickness error is 2, which is set for estimated keypoints that are located outside of the corresponding segmentation mask. In this case, projection and intersection points can not be computed. For arbitrary keypoints on the skis, we adapt this metric to fit the slightly different thickness logic as described in Section \ref{sec:keypoint_token}. Let $g$ be the ground truth keypoint, $d$ the detected keypoint, $c_1$ and $c_2$ the intersection points, then the desired thickness $t_{g}$ and the estimated thickness $t_d$ are calculated as \begin{equation} t_{g} = \frac{||c_1 - g||_2}{||c_1 - c_2||_2},\quad t_{d} = \frac{||c_1 - d||_2}{||c_1 - c_2||_2} \end{equation} The MTE metric is the mean of all thickness errors and the PCT is defined analogous to the PCK. At threshold $t$, the PCT considers all estimated thicknesses as correct with a thickness error less or equal than $t$. As the maximum thickness error is 2, we use the [email protected] for our evaluations. \subsection{Results}\label{sec:results} \newcommand{3.2cm}{3.2cm} \begin{figure*}[htb] \begin{subfigure}{0.15\linewidth} \centering \includegraphics[height=3.2cm]{img/visualizations/8_28294} \end{subfigure} \hfill \begin{subfigure}{0.15\linewidth} \centering \includegraphics[height=3.2cm]{img/visualizations/1_11129} \end{subfigure} \hfill \begin{subfigure}{0.15\linewidth} \centering \includegraphics[height=3.2cm]{img/visualizations/0_09348} \end{subfigure} \hfill \begin{subfigure}{0.15\linewidth} \centering \includegraphics[height=3.2cm]{img/visualizations/1_10561} \end{subfigure} \hfill \begin{subfigure}{0.15\linewidth} \centering \includegraphics[height=3.2cm]{img/visualizations/3_14025} \end{subfigure} \hfill \begin{subfigure}{0.18\linewidth} \centering \includegraphics[height=3.2cm]{img/visualizations/3_14936} \end{subfigure} \hfill \vspace{-0.2cm} \caption{Qualitative examples for model detections. The images show four equally spaced lines regarding the thickness on each body part. For the limbs, the projection line is colored white with a color gradient from it to the edges. For the skis, the color gradient is from one side to the other. The model from experiment \emph{Seg. \& Proj.} is used to generate the images.} \label{fig:qualitative_results} \vspace{-0.5cm} \end{figure*} Table \ref{tab:results} displays the results for all experiments. For the TokenPose approach, we evaluate two pretrainings. The pretraining on the COCO dataset with the standard keypoints achieves a better standard PCK than the pretraining with the vectorized keypoint approach from \cite{ludwig2022recognition} on the COCO dataset. This is expected, as TokenPose detects also only the standard keypoints of the skijump dataset. Furthermore, we use the \emph{vectorized keypoints} approach with a generation of 5 to 50 arbitrary keypoints for each image and with the improved attention mechanism described in Section \ref{sec:attention}. It achieves good results for the full PCK, the MTE and the PCT, but the PCK on the whole test set decreases by absolute 24.5\% in comparison to TokenPose. Therefore, we use the \emph{combined strategies} like described in Section \ref{sec:train_strategy} to improve the standard PCK. In the first experiment (\emph{Std. \& Seg.} in Table \ref{tab:results}), we alternately train on the standard keypoints and the arbitrary points. This leads to nearly the same PCK on the standard keypoints, but the PCT is absolute 1.1\% lower than for the vectorized keypoint approach. Training with the projection keypoints instead of the standard keypoints (experiment \emph{Seg. \& Proj.} in Table \ref{tab:results}) leads to better results. With the vectorized keypoint pretraining, it achieves the same PCT as the vectorized keypoint approach, while the PCK decreases only slightly. The results of the \emph{Seg. \& Proj.} seem promising. Therefore, we evaluate this strategy with two pretrainings, the vectorized keypoint pretraining and the standard keypoints pretraining. With the standard keypoints pretraining, the standard PCK is even higher than the standard PCK for the TokenPose approach which is trained only on the standard keypoints. But all other metrics decrease for this experiment. Therefore, we focus on the vectorized keypoints pretraining for all other experiments. Hence, we use the \emph{Std. \& Proj.} experiment with the vectorized keypoints pretraining in order to generate \emph{pseudo labels}, because it achieves the best results regarding the thickness metrics. We generate 1000 pseudo labels for each image in advance and select 25 of them randomly in each training step. The results of the pseudo label training with all pseudo labels are slightly worse than the results of the other experiments. As we did not use the existing segmentation masks during that experiment, we execute a finetuning on the best weights of this experiment in the vectorized keypoints manner in order to improve the results, but unsuccessful. From the validation curve, we observe a decrease in the standard PCK from step to step. Therefore, we consider a combined training in the next experiment, training alternately on the arbitrary keypoints, the pseudo labels and the standard keypoints (experiment \emph{all PL \& Std. \& Seg.} in Table \ref{tab:results}) or the projection points (experiment \emph{all PL \& Proj. \& Seg.} in Table \ref{tab:results}). Training with the standard keypoints achieves lower scores for all metrics in this case, also for the standard PCK. Including pseudo labels in the training process did not lead to better results. A look at the quality of the generated pseudo labels shows that some are wrong. Therefore, we repeat the pseudo label experiments with the best 80\% of the labels. We use a filtering technique based on the standard deviation of the detected keypoints for multiple, differently augmented images like described in Section \ref{sec:pl}. The augmentations that we use are horizontal flipping, 45$^\circ$ rotation (clockwise and counterclockwise) and scaling of 65\% and 135\%. We expected better results with more correct labels, but the results are similar. These experiments show that it is most important to have more images to train on. In our case, including pseudo labels does not increase the number of images, because we can use all images already by training on standard keypoints or projection points. Using the projection points results in the best scores because they are more similar to the desired arbitrary keypoints in comparison to the standard keypoints. Figure \ref{fig:qualitative_results} shows some example predictions for different poses. \section{Conclusion} This paper proposes a method to detect arbitrary keypoints on the limbs and skis of ski jumpers. We publish a new dataset with annotated images of ski jumpers from 10 TV broadcast videos of ski jumping competitions with a total of 370 jumps in order to provide a public benchmark. We provide annotations for 17 standard keypoints for 2159 images and a test, train and validation split such that each athlete is only contained in one subset. Furthermore, we generate 242 usable segmentation masks and include them in the dataset. The segmentation masks are only partly correct, many of them contain no or only one ski segmentation mask. Therefore, we cannot finetune a segmentation network in order to generate segmentation masks for all other images. But for our method, this is not a problem, since keypoints are only generated on the available segmentation masks. Problematic are only segmentation masks with wrong borders. This paper is based on the vectorized keypoint approach presented in \cite{ludwig2022recognition}. For the keypoints on the skis, we modify the technique because the projection points do not necessarily lie in the middle of the skis. Therefore we do not include the line of projection points and only use the intersection points with the segmentation mask. All other keypoints are represented relative to the intersection points. The evaluation metrics for the thickness are adapted accordingly. Training on the images with available segmentation masks with the vectorized keypoints approach shows two drawbacks. If a lot more keypoint query tokens than during training are used in a single inference step, the detection performance deteriorates. This is an effect of the attention mechanism. In the standard attention, all tokens are correlated with all tokens. Hence, the keypoint query tokens have an influence on each other and on the visual tokens as well. We adapt the attention mechanism in a way that the keypoint query tokens do not have an influence on other keypoint query tokens and on the visual tokens. Only the visual tokens are correlated with each other and with the keypoint query tokens. This solves the problem, as evaluations show. The second drawback is the large decrease in the standard PCK, so the detection performance on the standard keypoints is a lot worse. This is caused by the small number of images that the model sees during training. Hence, we experiment with different training strategies on both the segmentation mask dataset and the full dataset. Our experiments show that training jointly on arbitrary and standard keypoints lifts the standard PCK to a large extent, but the PCT and MTE deteriorate. Training on the projection points instead of the standard keypoints leads to better results on these metrics. Hence, the model proposed in this paper is the first model capable of detecting arbitrary keypoints on the limbs and skis of ski jumpers. Moreover, it can be trained using only a few partly correct segmentation masks. {\small \bibliographystyle{ieee_fullname}
{ "attr-fineweb-edu": 2.705078, "attr-cc_en_topic": 0, "domain": "arxiv" }
BkiUf7o5qhDCzlTibrGH
\section{Introduction} In addition to identifying human action categories in videos, it is also crucial to evaluate the quality of specific actions, which means that the machine needs to understand not only what has been performed but also how well a particular action is performed. Action quality assessment (AQA) aims to evaluate how well a specific action is performed, which has become an emerging and attractive research topic in computer vision community. Assessing the quality of actions has great potential value for various real-world applications such as analysis of sports skills \cite{MIT14,PanICCV19,FisV,MUSDL,C3DAVG}, surgical maneuver training \cite{Surgical1,Surgical2,Surgical3} and many others \cite{PairRank18,ProsCons19}. In recent years, many methods \cite{PanICCV19,FisV,MUSDL,C3DAVG} directly applied the network of human action recognition (HAR) such as C3D \cite{C3D} and I3D \cite{I3D} to AQA tasks. Although these methods have achieved considerable performance, they still face many challenges, and their performances and efficiency are indeed limited. Firstly, the huge gap between HAR and AQA should be emphasized. Models in HAR require distinguishing subtle differences between different actions, while models in AQA require evaluating a specific action's advantages and disadvantages. Therefore, the performances of existing methods are inherently limited because of the undifferentiated feature extraction of video content, which leads to the pollution of body features. It is not appropriate to apply the framework in HAR directly to AQA without any modification. Secondly, existing methods cannot perform feature aggregation efficiently. The receptive field of convolution operation is limited, resulting in the loss of long-range dependencies. RNN has the inherent property of storing hidden states, which makes it challenging to be paralleled. An effective and efficient feature aggregation mechanism is desired in AQA tasks. To solve all challenges above, we propose Tube Self-Attention (TSA) module, an efficient feature aggregation strategy based on tube mechanism and self-attention mechanism, shown in Figure \ref{fig1}. The basic idea of the TSA module is straightforward and intuitive: considering that AQA models require rich temporal contextual information and do not require irrelevant spatial contextual information, we combine the tube mechanism and self-attention mechanism to aggregate action features sparsely to achieve better performance with minimum computational cost. For example, during a diving competition, the athletes' postures are supposed to raise most attentions, instead of distractors such as the audience and advertisements in the background. The merits of the TSA module are three-fold: (1)\textit{High efficiency}, the tube mechanism makes the network only focus on a subset of the feature map, reducing a large amount of computational complexity compared with Non-local module. (2)\textit{Effectiveness}, the self-attention mechanism is adopted in TSA module to aggregate the features in the spatio-temporal tube (ST-Tube), which preserves the contextual information in the time dimension and weakens the influence of redundant spatial information. (3)\textit{Flexibility}, consistent with Non-local module, TSA module can be used in a plug-and-play fashion, which can be embedded in any video network with various input sizes. Based on TSA module, we proposed Tube Self-Attention Network (TSA-Net) for AQA. Existing visual object tracking (VOT) framework is firstly adopted to generate tracking boxes. Then the ST-Tube is obtained through feature selection. The self-attention mechanism is performed in ST-Tube for efficient feature aggregation. Our method is tested on the existing AQA-7 \cite{AQA7} and MTL-AQA \cite{C3DAVG} datasets. Sufficient experimental exploration, including performance analysis and computational cost analysis, is also conducted. In addition, a dataset named Fall Recognition in Figure Skating (FR-FS) is proposed to recognize falls in figure skating. Experimental results show that our proposed TSA-Net can achieve state-of-the-art results in three datasets. Extensive comparative results verify the efficiency and effectiveness of TSA-Net. The main contributions of our work are as follows: \begin{itemize} \item We exploit a simple but efficient sparse feature aggregation strategy named Tube Self-Attention (TSA) module to generate representations with rich contextual information for action based on tracking results generated by the VOT tracker. \item We propose an effective and efficient action quality assessment framework named TSA-Net based on TSA module, with adding little computational cost compared with Non-local module. \item Our approach outperforms state-of-the-arts on the challenging MTL-AQA and AQA-7 datasets and a new proposed dataset named FR-FS. Extensive experiments show that our method has the ability to capture long-range contextual information, which may not be performed by previous methods. \end{itemize} \begin{figure*} \centering \includegraphics[width=\linewidth]{Figure/Fig2.pdf} \caption{ Overview of the proposed TSA-Net for action quality assessment. TSA-Net consists of five steps: (1) Tracking. VOT tracker is adopted to generate tracking results \textit{B}. (2) Feature extraction-s1. The input video is divided into \textit{N} clips and the feature extraction is performed by I3D-Stage1 to generate $\mathbf{X}$. (3) Feature aggregation. ST-Tube is generated given $B$ and $\mathbf{X}$, and then the TSA mechanism is used to complete the feature aggregation, results in ${\mathbf{X}}'$. (4) Feature extraction-s2. Aggregated feature ${\mathbf{X}}'$ is passed to I3D-Stage2 to generate ${\mathbf{H}}'$. (5) Network head. The final scores are generated by \textit{MLP\_block}. TSA-Net is trained with different losses according to different tasks. } \label{fig2} \end{figure*} \section{Related Works} \textbf{Action Quality Assessment}. Most of the existing AQA methods focus on two fields: sports video analysis \cite{MIT14,PanICCV19,FisV,MUSDL,C3DAVG} and surgical maneuver assessment \cite{Surgical1,Surgical2,Surgical3}. AQA works focus on sports can be roughly divided into two categories: pose-based methods and non-pose methods. Pose-based methods \cite{PanICCV19,MIT14,BMVC15} take pose estimation results as input to extract features and generate the final scores. Because of the atypical body posture in motion scene, the performance of pose-based methods are suboptimal. Non-pose methods exploit DNNs such as C3D and I3D to extract features directly from the raw video and then predict the final score. For example, Self-Attentive LSTM \cite{FisV}, MUSDL \cite{MUSDL}, C3D-AVG-MTL \cite{C3DAVG}, and C3D-LSTM \cite{C3DLSTM} share similar network structures, but their difference lies in the feature extraction and feature aggregation method. Although these methods have achieved significant results, the enormous computational cost of feature extraction and aggregation module limits AQA models' development. Different from the aforementioned AQA methods, our proposed TSA module can perform feature extraction and aggregation efficiently. \noindent\textbf{Self-Attention Mechanism}. Self-attention mechanism \cite{Transformer} was firstly applied on the machine translation task in neural language processing (NLP) as the key part of Transformer. After that, researchers put forward a series of transformer-based models including BERT \cite{BERT}, GPT \cite{GPT}, and GPT-2 \cite{GPT2}. These models tremendously impacted various NLP tasks such as machine translation, question answering system, and text generation. Owing to the excellent performance, some researchers introduce self-attention mechanism into many CV tasks including image classification \cite{Cls1,Cls2,Cls3}, semantic segmentation \cite{Seg1,Seg2,Seg3} and object detection \cite{Det1,Det2,Det3}. Specifically, inspired by Non-local\cite{Nonlocal} module, Huang \textit{et al.} \cite{CCNet} proposed criss-cross attention module and CCNet to avoid dense contextual information in semantic segmentation task. Inspired by these methods, our proposed TSA-Net adopts self-attention mechanism for feature aggregation. \noindent\textbf{Video Action Recognition} Video action recognition is a fundamental task in computer vision. With the rise of deep convolutional neural networks (CNNs) in object recognition and detection, some researchers have designed many deep neural networks for video tasks. Two-stream networks \cite{TwostreamV1, TwostreamV2, I3D} take static images and dynamic optical flow as input and fuse the information of appearance and short-term motions. 3D convolutional networks \cite{Ji3D,C3D,LFF2014} utilize 3D kernels to extract features form raw videos directly. In order to meet the needs in real applications, many works \cite{TSM,X3D,R2plus1D} focus on the efficient designing of networks recently. The proposed TSA-Net take I3D \cite{I3D} network as the backbone. \section{Approach} \subsection{Overview} The network architecture is given in Figure \ref{fig2}. Given an input video with $L$ frames $V=\left \{F_l \right \}_{l=1}^{L}$, SiamMask\cite{SiamMask} is used as the single object tracker to obtain the tracking results $B=\left \{b_l \right \}_{l=1}^{L}$, where $b_l=\{(x_p^l,y_p^l) \}_{p=1}^4$ represents the tracking box of the $l$-th frame. In feature extraction stage, $V$ is firstly divided into $N$ clips where each clip contains M consecutive frames. All clips are further sent into the first stage of Inflated 3D ConvNets (I3D) \cite{I3D}, resulting in $N$ features as $\mathbf{X}=\left \{ \mathbf{x}_n \right \}_{n=1}^N$, $\mathbf{x}_n \in \mathbb{R}^{T\times H\times W\times C}$. Since the temporal length of $\mathbf{x}_n$ is $T$, we have $\mathbf{x}_n =\{\mathbf{x}_{n,t}\}_{t=1}^T$, $\mathbf{x}_{n,t} \in \mathbb{R}^{H\times W\times C}$. In feature aggregation stage, the TSA module takes tracking boxes $B$ and video feature $\mathbf{X}$ as input to perform feature aggregation, resulting in video feature ${\mathbf{X}}'= \{ {\mathbf{x}}'_n \}_{n=1}^N$ with rich spatio-temporal contextual information. Since the TSA module does not change the size of the input feature map, $\mathbf{x}_n$ and ${\mathbf{x}}'_n$ have the same size, \textit{i.e.}, ${\mathbf{x}}'_n \in \mathbb{R}^{T\times H\times W\times C}$. This property enables TSA modules to be stacked in multiple layers to generate features with richer contextual information. The aggregated feature ${\mathbf{X}}'$ is further sent to the second stage of I3D to complete feature extraction, resulting in $\mathbf{H} = \{ \mathbf{h}_n \} _{n=1}^N$. $\mathbf{H}$ is the representation of the whole video or athlete's performance. In prediction stage (\textit{i.e.}, network head), average pooling operation is adopted to fuse $\mathbf{H}$ along clip dimension, \textit{i.e.}, $\overline{\mathbf{h}} = \frac{1}{N} \sum_{n=1}^{N} h_n $, $\overline{\mathbf{h}} \in \mathbb{R}^{T\times H\times W\times C}$. $\overline{\mathbf{h}}$ is further fed into the \textit{MLP\_Block} and finally used for the prediction of different tasks according to different datasets. \subsection{Tube Self-Attention Module} The fundamental difference between TSA module and Non-local module is that TSA module can filter the features of participating in self-attention operation in time and space according to the tracking boxes information. The TSA mechanism has the ability to ignore noisy background information which will interfere with the final result of action quality assessment. This operation makes the network pay more attention to the features containing athletes' information and eliminate irrelevant background information interference. The tube self-attention mechanism can also be called "local Non-local". The first "local" refers to the ST-Tube, while "Non-local" refers to the response between features calculated by self-attention operation. So the TSA module is able to achieve more effective feature aggregation on the premise of saving computing resources. TSA module consists of two steps: (1) spatio-temporal tube generation, and (2) tube self-attention operation. \begin{figure} \centering \includegraphics[width=\linewidth]{Figure/Fig3.pdf} \caption{The generation process of spatio-temporal tube. All boxes $\{b_{l},b_{l+1},b_{l+2},b_{l+3}\}$ are scaled to the same size as the feature map $\mathbf{x}_{c,t}$, and then the separate masks are generated. All masks are aggregated into the final mask $M_{c,t}^{l\rightarrow(l+3)}$ through \textit{Union} operation.} \label{fig3} \end{figure} \textbf{Step 1: spatio-temporal tube generation.} Intuitively, after obtaining tracking information $B$ and feature map $\mathbf{X}$ of the whole video, all features in the ST-Tube can be selected directly. Unfortunately, owing to the existence of two temporal pooling operations in \textit{I3D-stage1}, the corresponding relationship between tracking boxes and feature maps is not $1:1$ but $many:1$. Besides, all tracking boxes generated by SiamMask are skew, which complicates the generation of ST-Tube. To solve these problems, we propose an alignment method which is shown in Figure \ref{fig3}. Since \textit{I3D-Stage1} contains two temporal pooling operations, the corresponding relationship between bounding boxes and feature map is 4:1, \textit{i.e.}, $\{ b_l, b_{l+1}, b_{l+2}, b_{l+3} \}$ is correspond to $\mathbf{x}_{c,t}$. All tracking boxes should be converted into mask first, and then used to generate ST-Tube. We denote the mask of $b_l$ correspond to $\mathbf{x}_{c,t}$ as $M_{c,t}^l \in \{ 0,1 \} ^{H\times W}$. The generation process of $M_{c,t}^l$ is as follows: \begin{equation} M_{c,t}^l(i,j) = \left\{\begin{matrix} 1, S(b_l,(i,j))\geqslant \tau \\ 0, S(b_l,(i,j)) < \tau \end{matrix}\right. \end{equation} Where $S(\cdot,\cdot)$ function calculates the proportion of the feature grid at $(i,j)$ covered by $b_l$. If the proportion is higher than threshold $\tau $, the feature located at $(i,j)$ will be selected, otherwise it will be discarded. The proportion of each feature grid covered by box ranges from 0 to 1, so we directly took the intermediate value of $\tau = 0.5$ in all experiments of this paper. Four masks are further assembled into $M_{c,t}^{l\rightarrow (l+3)} \in \{ 0,1 \}^{H \times W}$ through element-wise \textit{OR} operation: \begin{equation} M_{c,t}^{l\rightarrow (l+3)}=Union(M_{c,t}^{l},M_{c,t}^{l+1},M_{c,t}^{l+2},M_{c,t}^{l+3}) \end{equation} This mask contains all location information of the features participating in self-attention operation. For the convenience of the following description, $M_{c,t}^{l\rightarrow (l+3)}$ is transformed into the position set of all selected features: \begin{equation} \mathbf{\Omega}_{c,t}=\left \{ (i,j) | M_{c,t}^{l \rightarrow (l+3)}(i,j)=1 \right \} \end{equation} Where $\mathbf{\Omega}_{c,t}$ is the basic component of ST-Tube and $\left |\mathbf{\Omega}_{c,t} \right |$ denotes the number of selected features of $\mathbf{x}_{c,t}$. \begin{figure} \centering \includegraphics[width=0.68\linewidth]{Figure/Fig4.png} \caption{ Calculation process of the TSA module. "$\oplus$" denotes matrix multiplication, and "$\otimes$" denotes element-wise sum. Owing to the existence of tube mechanism, only the features inside the ST-Tube can be selected and participate in the calculation of self-attention.} \label{fig4} \end{figure} \begin{figure*} \centering \includegraphics[width=\linewidth]{Figure/Fig5.pdf} \caption{ The tracking results and predicted scores of four cases from four datasets. Four manually annotated initial frames are coloured in yellow, and the subsequent boxes generated by SiamMask are coloured in green. The predicted scores of TSA-Net and GT scores are shown on the right. More visualization cases can be found in supplementary materials.} \label{fig5} \end{figure*} \textbf{Step 2: tube self-attention operation} After obtaining $\mathbf{X}$ and $\mathbf{\Omega}_{c,t}$, the self-attention mechanism is performed to aggregate all features located in ST-Tube, as shown in Figure \ref{fig4}. The formation of the TSA mechanism adopted in this paper is consistent with \cite{Nonlocal}: \begin{equation} \mathbf{y}_p=\frac{1}{C(\mathbf{x})}\sum_{\forall c}\sum_{\forall t}\sum_{\forall (i,j)\in \mathbf{\Omega }_{c,t}}f\left (\mathbf{x}_p,\mathbf{x}_{c,t}(i,j))g(\mathbf{x}_{c,t}(i,j) \right ) \end{equation} Where $p$ denotes the index of an output position whose response is to be computed. $(c,t,i,j)$ is the input index that enumerates all positions in ST-Tube. Output feature map $\mathbf{y}$ and input feature map $\mathbf{x}$ have the same size. $f(\cdot,\cdot)$ denotes the pairwise function, and $g(\cdot)$ denotes the unary function. The response is normalized by $C(\mathbf{x})=\sum_c \sum_t \left | \mathbf{\Omega }_{c,t}\right |$. To reduce the computational complexity, the dot product similarity function is adopted: \begin{equation} f\left ( \mathbf{x}_p, \mathbf{x}_{c,t}(i,j) \right ) = \theta (\mathbf{x}_p)^T\phi (\mathbf{x}_{c,t}(i,j)) \end{equation} Where both $\theta(\cdot)$ and $\phi(\cdot)$ are channel reduction transformations. Finally, the residual link is added to obtain the final $\mathbf{X}'$: \begin{equation} \mathbf{x}'_p = W_z\mathbf{y}_p + \mathbf{x}_p \end{equation} Where $W_z\mathbf{y}_p$ denotes an embedding of $\mathbf{y}_p$. Note that $\mathbf{x}'_p$ has the same size with $\mathbf{x}_p$, so TSA module can be inserted into any position in deep convolutional neural networks. For the trade-off between computational cost and performance, all TSA modules are placed after \textit{Mixed\_4e}. Thus, $T=4$ and $H=W=14$. Compared with the Non-local operation, the TSA module greatly reduces the computational complexity in time and space from \begin{equation} O\left ( (N\times T\times H\times W) \times (N\times T\times H\times W) \right ) \end{equation} to \begin{equation} O\left ( \left ( \sum_c \sum_t \left | \mathbf{\Omega }_{c,t}\right | \right )\times \left ( \sum_c \sum_t \left | \mathbf{\Omega }_{c,t}\right | \right ) \right ) \end{equation} Note that the computational cost of TSA can only be measured after forwarding propagation because $\mathbf{\Omega }_{c,t}$ is generated from $B$. \subsection{Network Head and Training} To verify the effectiveness of the TSA module, we extend the network head to support multiple tasks, including classification, regression, and score distribution prediction. All tasks can be achieved by changing the output size of \textit{MLP\_block} and the definition of the loss function. The implementation details of these three tasks are as follows: \textbf{Classification}. When dealing with classification tasks, the output dimension of \textit{MLP\_block} is determined by the number of categories. Binary Cross-Entropy loss (BCELoss) is adopted. \textbf{Regression}. When dealing with regression tasks, the output dimension of \textit{MLP\_block} is set to 1. Mean Square Error loss (MSELoss) is adopted. \begin{table*} \caption{Comparison with state-of-the-arts on AQA-7 Dataset.} \label{tab1} \begin{tabular}{c|cccccc|c} \toprule Method & Diving & Gym Vault & Skiing & Snowboard & Sync. 3m & Sync. 10m & Avg. Corr.\\ \midrule Pose+DCT \cite{MIT14} & 0.5300 & - & - & - & - & - & - \\ ST-GCN \cite{STGCN18} & 0.3286 & 0.577 & 0.1681 & 0.1234 & 0.6600 & 0.6483 & 0.4433 \\ C3D-LSTM \cite{C3DLSTM} & 0.6047 & 0.5636 & 0.4593 & 0.5029 & 0.7912 & 0.6927 & 0.6165 \\ C3D-SVR \cite{C3DLSTM} & 0.7902 & 0.6824 & 0.5209 & 0.4006 & 0.5937 & 0.9120 & 0.6937 \\ JRG \cite{PanICCV19} & 0.7630 & 0.7358 & 0.6006 & 0.5405 & 0.9013 & 0.9254 & 0.7849 \\ USDL \cite{MUSDL} & 0.8099 & 0.757 & 0.6538 & \textbf{0.7109} & 0.9166 & 0.8878 & 0.8102 \\ \midrule NL-Net & 0.8296 & 0.7938 & \textbf{0.6698} & 0.6856 & 0.9459 & 0.9294 & 0.8418 \\ TSA-Net (Ours) & \textbf{0.8379} & \textbf{0.8004} & 0.6657 & 0.6962 & \textbf{0.9493} & \textbf{0.9334} & \textbf{0.8476} \\ \bottomrule \end{tabular} \end{table*} \textbf{Score distribution prediction}. Tang \textit{et al.} \cite{MUSDL} proposed an uncertainty-aware score distribution learning (USDL) approach and its multi-path version MUSDL for AQA tasks. Although experiment results in \cite{MUSDL} proved the superiority of MUSDL compared with USDL, a multi-path strategy will lead to a significant increase in computational cost. However, the TSA module can generate features with rich contextual information by adopting a self-attention mechanism in ST-Tube with less computational complexity. To verify the effectiveness of the TSA module, we embed the TSA module into the USDL model. The loss function is defined as Kullback-Leibler (KL) divergence of predicted score distribution and ground-truth (GT) score distribution: \begin{equation} KL\left \{ p_c \parallel s_{pre} \right \}=\sum_{i=1}^{m}p(c_i)log\frac{p(c_i)}{s_{pre}(c_i)} \end{equation} Where $s_{pre}$ is generated by \textit{MLP\_block}, and $p_c$ is generated by GT score. Note that for dataset with difficulty degree (DD), $s=DD \times s_{pre}$ is used as the final predicted score. \begin{table*} \caption{Study on different settings of the number of TSA module.} \label{tab:AQA7-2} \begin{tabular}{c|cccccc|c} \toprule Method & Diving & Gym Vault & Skiing & Snowboard & Sync. 3m & Sync. 10m & Avg. Corr.\\ \midrule TSA-Net & 0.8379 & 0.8004 & 0.6657 & 0.6962 & \textbf{0.9493} & 0.9334 & 0.8476 \\ TSAx2-Net & 0.8380 & 0.7815 & \textbf{0.6849} & \textbf{0.7254} & 0.9483 & \textbf{0.9423} & \textbf{0.8526} \\ TSAx3-Net & \textbf{0.8520} & \textbf{0.8014} & 0.6437 & 0.6619 & 0.9331 & 0.9249 & 0.8352 \\ \bottomrule \end{tabular} \end{table*} \begin{table} \caption{Comparisons of computational complexity and performance on AQA-7. GFLOPs is adopted to measure the computational cost.} \label{tab3} \begin{tabular}{c|cc|cc} \toprule Method & NL-Net & TSA-Net & Comp. Dec. & Corr. Imp.\\ \midrule Diving & 2.2G & 0.864G & -60.72\% & $\uparrow$0.0083 \\ Gym Vault & 2.2G & 0.849G & -61.43\% & $\uparrow$0.0066 \\ Skiing & 2.2G & 0.283G & -87.13\% & $\downarrow$0.0041 \\ Snowboard & 2.2G & 0.265G & -87.97\% & $\uparrow$0.0106 \\ Sync. 3m & 2.2G & 0.952G & -56.74\% & $\uparrow$0.0034 \\ Sync. 10m & 2.2G & 0.919G & -58.24\% & $\uparrow$0.0040 \\ \midrule Average & 2.2G & 0.689G & -68.70\% & $\uparrow$0.0058\\ \bottomrule \end{tabular} \end{table} \section{Experiments} \label{Experiments} We carry out comprehensive experiments on AQA-7 \cite{AQA7}, MTL-AQA \cite{C3DAVG}, and FR-FS datasets to evaluate the proposed method. Experimental results demonstrate that TSA-Net achieves state-of-the-art performance on these datasets. In the following subsections, we first introduce two public datasets and a new dataset named Fall Detection in Figure Skating (FD-FS) proposed by us. After that, a series of experiments and computational complexity analysis are performed on AQA-7 and MTL-AQA datasets. Finally, the detection results on FD-FS are reported, and the network prediction results are analyzed visually and qualitatively. \subsection{Datasets and Evaluation Metrics} \textbf{AQA-7} \cite{AQA7}. The AQA-7 dataset comprising samples from seven actions. It contains 1189 videos, in which 803 videos are used for training and 303 videos used for testing. To ensure the comparability with other models, we delete \textit{trampoline} category because of its long time. \noindent\textbf{MTL-AQA} \cite{C3DAVG}. The MTL-AQA dataset is currently the largest dataset for AQA tasks. There are 1412 diving samples collected from 16 different events in MTL-AQA. Furthermore, MTL-AQA provides detailed scoring of each referee, diving difficulty degree, and live commentary. We followed the evaluation protocol suggested in \cite{C3DAVG}, so that there are 1059 samples used for training and 353 used for testing. \noindent\textbf{FR-FS (Fall Recognition in Figure Skating)}. Although some methods have been proposed \cite{MIT14,FisV} to evaluate figure skating skills, they are only based on long-term videos which last nearly 3 minutes. These coarse-grained methods will lead to the inundation of detailed information in a long time scale. However, these details are crucial and indispensable for AQA tasks. To address this issue, we propose a dataset named FR-FS to recognize falls in figure skating sports. We plan to start from the most basic fault recognition and gradually build a more delicate granularity figure skating AQA system. The FR-FS dataset contains 417 videos collected from FIV \cite{FisV} and \textit{Pingchang 2018 Winter Olympic Games}. FR-FS contains the critical movements of the athlete's take-off, rotation, and landing. Among them, 276 are smooth landing videos, and 141 are fall videos. To test the generalization performance of our proposed model, we randomly select 50\% of the videos from the fall and landing videos as the training set and the testing set. \noindent\textbf{Evaluation Protocols}. Spearman's rank correlation is adopted as the performance metric to measure the divergence between the GT score and the predicted score. The Spearman's rank correlation is defined as follows: \begin{equation} \rho =\frac{\sum (p_i-\bar{p})(q_i-\bar{q})}{\sqrt{\sum(p_i-\bar{p})^2\sum(q_i-\bar{q})^2}} \end{equation} Where $p$ and $q$ represent the ranking of GT and predicted score series, respectively. Fisher's z-value \cite{C3DLSTM} is used to measure the average performance across multiple actions. \begin{table} \caption{Comparison with state-of-the-arts on MTL-AQA.} \label{tab4} \begin{tabular}{cc} \toprule Method & Avg. Corr. \\ \midrule Pose+DCT \cite{MIT14} & 0.2682\\ C3D-SVR \cite{C3DLSTM} & 0.7716\\ C3D-LSTM \cite{C3DLSTM} & 0.8489\\ C3D-AVG-STL \cite{C3DAVG} & 0.8960\\ C3D-AVG-MTL \cite{C3DAVG} & 0.9044\\ MUSDL \cite{MUSDL} & 0.9273\\ \midrule NL-Net & \textbf{0.9422}\\ TSA-Net & 0.9393\\ \bottomrule \end{tabular} \end{table} \subsection{Implementation Details} Our proposed methods were built on the Pytorch toolbox \cite{Pytorch} and implemented on a system with the Intel (R) Xeon (R) CPU E5-2698 V4 @ 2.20GHz. All models are trained on a single NVIDIA Tesla V100 GPU. Faster-RCNN\cite{FasterRCNN} pretrained on MS-COCO\cite{MSCOCO} is adopted to detect the athletes in all initial frames. All videos are normalized to $L=103$ frames. For all experiments, the I3D\cite{I3D} pretrained on Kinetics \cite{Kinetics} is utilized as the feature extractor. All videos are select from high-quality sports broadcast videos, and the athletes' movements are apparent. Therefore, we argue that the performance of TSA-Net is not sensitive to the choice of the tracker. SiamMask\cite{SiamMask} was chosen only for high-speed and tight boxes. Each training mini-batch contains 4 samples. Adam \cite{Adam} optimizer was adopted for network optimization with initial learning rate 1e-4, momentum 0.9, and weight decay 1e-5. \begin{figure} \centering \includegraphics[width=\linewidth]{Figure/Fig6.pdf} \caption{Alphapose \cite{Alphapose} is selected as the pose estimator. The estimation results of two sports videos are visualized.} \label{Pose-fail} \end{figure} Considering the complexity differences between datasets, we adopt different experimental settings. In AQA-7 and MTL-AQA datasets, all videos are divided into 10 clips consistent with \cite{MUSDL}. Random horizontal flipping and timing offset are performed on videos in training phase. Training epoch is set to 100. All video score normalization are consistent with USDL \cite{MUSDL}. In FR-FS dataset, all videos are divided into 7 segments to prevent overfitting. Training epoch is set to 20. \subsection{Results on AQA-7 Dataset} The TSA module and the Non-local module are embedded after \textit{Mixed\_4e} of I3D to create TSA-Net and NL-Net. Experimental results in Table \ref{tab1} show that TSA-Net achieves 0.8476 on Avg. Corr. , which is higher than 0.8102 of USDL. TSA-Net outperforms USDL in all categories except the \textit{snowboard}. This is mainly caused by the size issue: the small size of the target leads to the small size of the ST-Tube, resulting in invalid feature enhancement (AQA-7 \textit{snow. \#056} in Figure \ref{fig5}). Note that the TSA module is used in a plug-and-play fashion, comparative experiments in Table \ref{tab1} can also be regarded as ablation studies. Therefore, we didn't set up a separate part of ablation in this paper. \noindent\textbf{The effect of different number of TSA module}. Inspired by the multi-layer attention mechanism in Transformer \cite{Transformer}, we stack multiple TSA modules and test these variants on AQA-7. Experimental results in Table \ref{tab:AQA7-2} show that TSA-Net achieves the best performance when $N_{stack} = 2$. Benefit from the feature aggregation operations conducted by two subsequent TSA modules, the network can capture richer contextual features compared with USDL. When $N_{stack} = 3$, the performance of the model becomes worse, which may be caused by overfitting. \noindent\textbf{Computational cost analysis}. Computational cost comparison results are shown in Table \ref{tab3}. Note that only the calculation of TSA module or Non-local module is counted, not the whole network. Compared with NL-Net, TSA-Net can reduce the computation by 68.7\% on average and bring 0.0058 AVG. Corr. improvement. This is attributed to the tube mechanism adopted in TSA module, which can avoid dense attention calculation and improve performance simultaneously. Among all categories in AQA-7, the TSA module saves up to 87\% of the computational complexity on \textit{skiing} and \textit{snowboard}. Such a large reduction is caused by the small size of the ST-Tube. However, the small ST-Tube will hinder the network from completing effective feature aggregation and ultimately affect the final performance. This conclusion is consistent with the analysis of the results in Table \ref{tab1}. \begin{figure*} \centering \includegraphics[width=\linewidth]{Figure/Fig7.pdf} \caption{Case study with qualitative results on FR-FS. The failure case \#308-1 is above the timeline, while the successful case \#241-3 is below the timeline.} \label{fig7} \end{figure*} \subsection{Results on MTL-AQA Dataset} As shown in Table \ref{tab4}, the TSA-Net and NL-Net is compared with existing methods. Regression network head and MSELoss are adopted in two networks. Experimental results show that both TSA-Net and NL-Net can achieve state-of-the-art performance and the NL-Net is better. The performance fluctuation of TSA-Net is mainly caused by different data distribution between two datasets. Videos in MTL-AQA have higher resolution (640x360 to 320x240) and broader field of view, which leads to smaller ST-Tubes in TSA-Net and affects the performance. It should be emphasized that this impact is feeble. TSA-Net saves half of the computational cost and achieves almost the same performance as NL-Net. This proves the effectiveness and efficiency of the TSA-Net, which is not contradictory to the final conclusion. \noindent\textbf{Studys on the the stack number of TSA modules and computational cost}. As shown in Tabel \ref{tab5}, three parallel experiments are conducted with only the number of TSA module changed just as the experiments on AQA-7. If NL-Net is excluded, the best \textit{Sp. Corr.} is achieved when $N_{clip}=2$ (\textit{i.e.}, TSAx2-Net), while TSA-Net with only one TSA module achieves minimum MSE and computational cost simultaneously. This phenomenon is mainly caused by low computational complexity of TSA-Net. The sparse feature interaction characteristics of TSA module achieve more efficient feature enhancement and have the ability to avoid overfitting. Although the performance of TSA-Net can be improved by increasing the number of TSA modules, it will increase computational cost. To achieve the balance between computational cost and performance, we only take $N_{stack} = 1$ in all subsequent experiments. \begin{table} \caption{Comparisons of computational complexity and performance between NL-Net and the variants of TSA-Net on MTL-AQA.} \label{tab5} \begin{tabular}{cccc} \toprule Method & Sp. Corr.$\uparrow$ & MSE$\downarrow$ & FLOPs$\downarrow$\\ \midrule NL-Net & \textbf{0.9422} & 47.83 & 2.2G \\ TSA-Net & 0.9393 & \textbf{37.90} & \textbf{1.012}G\\ TSAx2-Net & 0.9412 & 46.51 & 2.025G\\ TSAx3-Net & 0.9403 & 47.77 & 3.037G\\ \bottomrule \end{tabular} \end{table} \subsection{Results on FR-FS Dataset} \begin{table} \caption{Recognition accuracy on FR-FS.} \label{tab6} \begin{tabular}{cc} \toprule Method & Acc.\\ \midrule Plain-Net & 94.23 \\ TSA-Net & \textbf{98.56} \\ \bottomrule \end{tabular} \end{table} In FR-FS dataset, we focus on the performance improvement that the TSA module can achieve. Therefore, Plain-Net and TSA-Net are implemented, respectively. The former does not adopt any feature enhancement mechanism, while the latter is equipped with a TSA module. As shown in Table \ref{tab6}, TSA-Net outperforms Plain-Net by 4.33\%, which proves the effectiveness of TSA module. \noindent\textbf{Visualization of Temporal Evolution.} A case study is also conducted to further explore the performance of TSA-Net. Two representative videos are selected and the prediction results of each video clip are visualized in Figure \ref{fig7}, . All clip scores are obtained by deleting the temporal pooling operation in Plain-Net and TSA-Net. In the failure case \#308-1, both Plain-Net and TSA-Net can detect that the athlete falls in the fourth chip which highlighted in red, but only TSA-Net gets the correct result in the end (0.9673 for Plain-Net and 0.2523 for TSA-Net). The TSA mechanism forces the features in ST-Tube interact with each other in the way of self-attention, which makes TSA-Net regard the standing up and adjusting actions after falling as errors in clip 5 to 7. It seems that TSA-Net is too strict in fall recognition, but the analysis in the successful case \#241-3 has overturned this view. Two models get similar results, except for the second clip (colored in blue), which contains the take-off and rotation phase. Plain-Net has great uncertainty for the stationarity of take-off phase, while TSA-Net can get high confidence results. Based on visual analysis and quantitative analysis, it can be concluded that the TSA module is able to perform feature aggregation effectively and obtain more reasonable and stable prediction results. \balance \subsection{Analysis and Visualization} \textbf{Reasons for choosing tracking boxes over pose estimation.} In sports scenes, high-speed movements of the human body will lead to a series of challenges such as motion blur and self-occlusion, which eventually result in failure cases in pose estimation. Results in Figure \ref{Pose-fail} show that Alphapose \cite{Alphapose} cannot handle these situations properly. The missing of human posture and background audience posture interference will seriously affect the evaluation results. Previous studies in FineGym \cite{FineGym} have come to the same results as ours. Based on these observations, we conclude that methods based on pose estimation are not suitable for AQA in sports scenes. Missing boxes and wrong poses will significantly limit the performance of the AQA model. Therefore, we naturally introduce the VOT tracker into AQA tasks. The proposed TSA-Net achieves significant improvement in AQA-7 and MTL-AQA compared to pose-based methods such as Pose+DCT \cite{MIT14} and ST-GCN \cite{STGCN18} as shown in Table \ref{tab1} and \ref{tab4}. These comparisons show that the TSA mechanism is superior to the posture-based mechanism in capturing key dynamic characteristics of human motion. \noindent\textbf{Visualization on MTL-AQA and AQA-7}. Four cases are visualized in Figure \ref{fig5}. The tracking results generated by SiamMask are very stable and accurate. The final predicted scores are very close to the GT score since the TSA module is adopted. Interestingly, as shown in Figure \ref{fig5}, the VOT tracker can handle various complex situations, such as the disappearance of athletes (\#02-32), drastic changes in scale (\#056) and synchronous diving (\#082). These results show that the tracking strategy perfectly meets the requirements of AQA tasks and verify the effectiveness of the TSA module. \section{Conclusion} In this paper, we present a Tube Self-Attention Network (TSA-Net) for action quality assessment, which is able to capture rich spatio-temporal contextual information in human motion. Specifically, a tube self-attention module is introduced to efficiently aggregate contextual information located in ST-Tube generated by SiamMask. Sufficient experiments are performed on three datasets: AQA-7, MTL-AQA, and a dataset proposed by us named FR-FS. Experimental results demonstrate that TSA-Net can capture long-range contextual information and achieve high performance with less computational cost. In the future, an adaptive mechanism of ST-Tube will be explored to avoid the sensitivity of the TSA-Net to the size issue. \begin{acks} This work was supported by Shanghai Municipal Science and Technology Major Project 2021SHZDZX0103 and National Natural Science Foundation of China under Grant 82090052. \end{acks} \clearpage \bibliographystyle{ACM-Reference-Format}
{ "attr-fineweb-edu": 2.451172, "attr-cc_en_topic": 0, "domain": "arxiv" }
BkiUcsY5qhDACrbibRZ8
\section{Introduction}\label{Sec:Introduction} The rating of the players\footnote{We will talk about \emph{players} but the team sports are of course treated in the same way. In fact, the examples we provide come from team-sports but the advantage of talking about players is that it allows us to discuss the issue of gathering players in groups which face each other, as done \eg in eSports \cite{Herbrich06}.} is one of the fundamental problem in sport analytics and consists in assigning each player a real value called a \emph{skill}. In this work we are interested in the rating algorithms that can be systematically derived from the probabilistic models which describe i)~how the the skills affect the outcomes of the games, as well as ii)~how the skills evolve in time, \ie characterize the skills \emph{dynamics}. Using the probabilistic models, the forecasting of the game outcomes is naturally derived from the rating. Once the models are chosen, the rating boils down to inferring the unknown skills from the observed games outcomes. This is essentially a parameter estimation problem which has been largely addressed in the literature. In particular, using a static model for the skills, that is, assuming that the skills do not vary in time, the problem consists in solving a non-linear regression problem and the main issue then is to define a suitable skills-outcome model. The most popular skills-outcome models are obtained from the pairwise comparison framework which is well known in the psychometrics literature \cite{David63_Book}, \cite{Cattelan12}. For binary games (win/loss), the Thurston model \cite{Thurston27} and Bradley-Terry model \cite{Bradley52} are the most popular. Their extensions to the ternary games (win/loss/draw) were proposed in \cite{Rao67} and \cite{Davidson70}; these are particular cases of ordinal variable models \cite{Agresti92} that may may be applied in multi-level games as done \eg in \cite{Fahrmeir94} and \cite{Goddard05}. Alternative approach focuses on modelling directly the game points (\eg goals) using predefined distributions; the Poisson distribution is the most popular in this case \cite{Maher82}; similarly, the points difference can be modelled using, \eg the Skellam, \cite{Karlis08} or the Weibull \cite{Boshnakov17} distributions. The very meaning of the skills may be also redefined and instead of a scalar, the player may be assigned two values corresponding to offensive and defensive skills \cite{Maher82}, \cite{Manderson18}, \cite{Wheatcroft20a}; further, considering the \gls{hfa}, three or four distinct parameters per player may be defined \cite{Maher82}, \cite{Kuk95}, although recent results indicate that this may lead to over-fitting \cite{Ley19}, \cite{Lasek20}. The various skills-outcome model we mention above, invariably assume that the outcome of the game depends (via a non-linear function) on a linear combination of the skills of the participating players, this assumption is also used in the case when the multiple players are gathered in two groups facing each other in a game \cite{Herbrich06}. This general approach will be also used as a basis for our work. Each of these skills-outcome models can be combined with the models describing how the skills evolve in time. With that regard, the most popular is modelling of the skills via first-order Markov Gaussian processes, \eg \cite{Fahrmeir92}, \cite{Glickman93_thesis}, \cite{Fahrmeir94}, \cite{Glickman99}, \cite{Knorr00}, \cite{Held05}, \cite{Herbrich06}, \cite{Koopman12}, \cite{Manderson18}, \cite{Koopman19}. This formulation is then exploited to derive the on-line rating algorithms in two recursive steps: first, at a time $t$, the posterior distribution of the skills is found using all observations up to time $t$; more precisely, to simplify the problem, the Gaussian approximation of the latter is obtained. Next, the posterior distribution from the time $t$, is used as a prior in the time $t+1$. This approach should be seen as a generalization of the Kalman filtering to the non-Gaussian models characteristic of the rating problems \cite{Fahrmeir92}. Most of the works we cited above and which consider the skills' dynamics, focused on the estimation of the skills with a moderate number of players, the case which is typical in sport leagues (\eg in football, hockey, American football, etc). In such a case, the approximate (Gaussian) posterior distribution of the skills can be fully defined by the mean vector and by the covariance matrix. On the other hand, for large number of players (\eg thousands of chess players or millions of eSports players), it is considered unfeasible and further approximations are introduced by considering only a diagonal covariance matrix, equivalent to assuming that the skills are, a posteriori, Gaussian and independent. This approach was proposed to rate the chess players (the Glicko algorithm \cite{Glickman99}) as well as for the rating in eSports (the TrueSkill algorithm, \cite{Herbrich06}). However, the Glicko and the TrueSkill algorithms are derived i) from different skills-outcome models,\footnote{Glicko uses the Bradley-Terry model \cite{Bradley52}, while TrueSkill uses the Thurston model \cite{Thurston27}.} and ii) using different approximation principles. Thus, not only it may be difficult to see them as instances of a more general approach but, more importantly, they cannot be straightforwardly reused to obtain new online ratings if we would like to change the skills-outcome model. This latter fact stays very much in contrast with the approach of \cite{Fahrmeir92} which is general in its formulation so, under mild conditions, it can be applied to any skills-outcomes model. However, since the focus of \cite{Fahrmeir92} and other works which followed its path, was not on the large problems, the derivations did not leverage the simplifying assumptions on which rely the TrueSkill and Glicko algorithms. In our work we thus want to take advantage of both worlds: we will exploit the independence assumption on which \cite{Glickman99} and \cite{Herbrich06} are built, and the estimation principle underlying the Kalman filter which was used in \cite{Fahrmeir92}. Furthermore, we will also consider new simplifying assumptions about the posterior distributions of the skills; this will lead to different simplified versions of the Kalman filter. The goal of this work is thus threefold: \begin{itemize} \item We will show how the online rating algorithms can be derived for \emph{any} skills-outcome model and may be applied equally well to estimate the skills of the players in individual sports (as in the Glicko algorithms) or the skills of the players withing a group (as in the True-Skill algorithm). We will also consider different level of simplification when dealing with the skills' dynamics. \item Using this generic algorithmic framework, we will be able not only to derive new algorithms, but also to compare and understand the similarities/differences between the known online rating such as the Elo \cite{Elo08_Book}, the Glicko \cite{Glickman99}, or the TrueSkill \cite{Herbrich06} algorithms. \item By mean of numerical examples, we will provide an insight into the advantages of the simplified versions of the rating algorithms, and indicate under which conditions the simple rating algorithm may perform equally well as the more complex ones. \end{itemize} The paper is organized as follows: the model underlying the rating is shown in \secref{Sec:Model}. The on-line rating algorithms are derived in a general context in \secref{Sec:Tracking} using different approximations of the posterior distribution. In \secref{Sec:New.Ratings}, the popular scalar skills-outcome model are used to derive new rating algorithms which are then compared to the algorithms from the literature. Numerical examples are shown in \secref{Sec:Num.results} and conclusions are draws in \secref{Sec:Conclusions}. \section{Model}\label{Sec:Model} We consider the case when the players indexed with $m\in\set{1,\ldots, M}$ participate in the games indexed with $t\in\set{1,\ldots, T$}, where the number of games, $T$, is finite (as in sport seasons) or infinite (as in non-stop competitions, such as eSports). We consider one-on-one games: in the simplest case it means that the ``home'' player $i_t$ plays against the ``away'' player $j_t$, where $i_t, j_t\in\set{1,\ldots,M}$. The outcomes of the game $t$, denoted by $y_t$, belong to an ordinal set $\mc{Y}$. For example, if $y_t$ is the difference between the game-points (such as goals), we have $\mc{Y}=\set{\ldots-3,-2,-1,0,1,2,\ldots}$ so the ordinal variables are naturally encoded into integers. On the other hand, in ternary-outcome games, we may assign $y_t=0$ if the player $i_t$ loses the game, $y_t=2$ if she wins the game, and $y_t=1$ if the game ends in a draw, \ie $\mc{Y}=\set{0,1,2}$. The very notion of the home/away players is useful when dealing with the \acrfull{hfa} typically encountered in sports but it also helps us to ground the meaning of the game outcome: even in the absence of the \gls{hfa}, for the outcome $y_t=0$ to be meaningful, we must decide which player lost (the home player in our notation for ternary-outcome games). In a more general setup, the game may implicate two \emph{groups} of players whose indices are defined by the sets $\mc{I}_t=\set{i_{t,1}, i_{t,2},\ldots, i_{t,F}}$ (these are indices of the ``home'' players) and $\mc{J}_t=\set{j_{t,1}, j_{t,2},\ldots, j_{t,F}}$ (indices of the ``away'' players). While the number of players in each group, $F$, is assumed to be constant, this is done merely to simplify the notation and other cases are possible. For example, the groups in eSports are formed on the fly and they do not always have the same number of players, nor even the same number of players in both groups playing against each other \cite{Herbrich06}. The process of defining which players take part in the game $t$, \ie how the indices $i_t$, $j_t$ (or the sets $\mc{I}_t$ and $\mc{J}_t$) are defined, is called a \emph{scheduling}. The above notation applies directly if we replace the notion of ``player'' with ``team''; but then, of course, the general case of groups defined by $\mc{I}_t$ and $\mc{J}_t$ is not necessary. For the rest of the work we only refer to players which is a more general case to deal with. In its simplest form, the rating consists in assigning the player $m$ the value $\theta_{t,m}$, called a \emph{skill}; this is done after observing the outcomes of the games up to time $t$, which we denote as $\un{y}_t=\set{\un{y}_{t-1},y_t}$. We index the skills with $t$ because we assume that they may vary in time. The probabilistic perspective we will adopt relies on the model relating the skills $\boldsymbol{\theta}_t=[\theta_{t,1},\ldots, \theta_{t,M}]\T$ to $\un{y}_t$; it comprises i) the skills-outcome model, which makes explicit the relationship between the skills $\boldsymbol{\theta}_t$ and the outcome $y_t$ at time $t$, as well as ii) the model describing the evolution of the skills in time, \ie the skills' dynamics. The problem of finding the skills-outcome model has been treated extensively in the literature often exploitng the link with the problem of pairwise comparison well studied in psychometry \cite{Thurston27}, \cite{Bradley52}. This is also where most of the efforts are concentrated in the rating literature and many models have been alrady proposed and studied. On the other hand, the modelling of the dynamics of the skills is less diversified and mainly focuses on applying the particular skills-outcome model in the dynamic context. ~ \textbf{Skills-outcome model} The skills-outcome model defines the probability of the outcome conditioned on the skills, $\PR{y_t|\boldsymbol{\theta}_t}$, and most often is defined by combing the non-linear scalar function and the linear function of the skills, \ie \begin{align}\label{pdf.y.theta} \PR{y_t|\boldsymbol{\theta}_t}&=L(z_t/s; y_t),\\ \label{z.t} z_t&=\boldsymbol{x}_{t,\tr{h}}\T\boldsymbol{\theta}_t -\boldsymbol{x}_{t,\tr{a}}\T\boldsymbol{\theta}_t = \boldsymbol{x}\T_t\boldsymbol{\theta}_t, \end{align} where the role of $s>0$ is to scale the values the skills; $\boldsymbol{x}_{t,\tr{h}}=[x_{t,\tr{h},1},\ldots, x_{t,\tr{h},M}]\T$ is the home scheduling vector, defined as follows: $x_{t,\tr{h}, m}=1$ if the player $m$ is a home player, \ie $m\in\mc{I}_t$, and $x_{t,\tr{h},m}=0$, otherwise. The away scheduling vector is defined correspondingly for the away players. For example, if $M=10$, and $\boldsymbol{x}_{t,\tr{h}} = [ 0, 0, 0, 1, 0, 0, 1, 0, 0, 0]\T$, and $\boldsymbol{x}_{t,\tr{a}} = [ 0, 0, 1, 0, 0, 0, 0, 0, 0, 1]\T$, it means that, at time $t$, the game involves the home players $\mc{I}_t=\set{4,7}$ and away players $\mc{J}_t=\set{3,10}$. Then, the combined scheduling vector $\boldsymbol{x}_t=\boldsymbol{x}_{t,\tr{h}}-\boldsymbol{x}_{t,\tr{a}}$ is given by $\boldsymbol{x}_{t} = [ 0, 0, -1, 1, 0, 0, 1, 0, 0, -1]\T$. Of course, we do not suggest that the vector product(s) in \eqref{z.t} should be actually implemented; it is just a convenient notation expressing the fact that $z_t$ is the difference between the sum of the skills of the home players and the sum of the skills of the away players. As for the function $L(z;y)$ it should be defined taking into account the structure of the space of outcomes $\mc{Y}$. For example, in the binary games, $\mc{Y}=\set{0,1}$, we often use \begin{align}\label{L.example} L(z;y)= \begin{cases} F(z) & \tr{if}\quad y=1 \quad \tr{(home win)}\\ F(-z) & \tr{if}\quad y=0 \quad \tr{(away win)} \end{cases}, \end{align} where $0\le F(z)\leq 1$ is a non-decreasing function. This corresponds to the assumption that increasing the difference between the skills, $z_t$, corresponds to the increased probability of the home win and, of course, decreased probability of the away win. More on that in \secref{Sec:New.Ratings}. ~ \textbf{Skills' dynamics} The temporal evolution of the skills is often modelled as a damped random walk \begin{align}\label{btheta.t.t1} \boldsymbol{\theta}_{t} =\beta_{t} \boldsymbol{\theta}_{t-1} + \boldsymbol{u}_{t}\epsilon_{t}, \end{align} where $\boldsymbol{u}_t$ is the vector comprised of independent, zero-mean, unitary-variance, random Gaussian variables, so $\epsilon_{t}$ has the meaning of the variance of the random increment in skills from time $t-1$ to $t$: it is assumed to be the same for all the player. For example, \cite{Glickman99} uses \begin{align}\label{epsilon.t} \epsilon_{t}= \big(\tau(t)-\tau(t-1)\big)\epsilon, \end{align} where $\tau(t)$ is the time (\eg measured in days) at which the game indexed with $t$ is played, and $\epsilon$ is the per-time unit increase of the variance. The autoregression parameter $0<\beta_t\le 1$ models the decrease of the skills in time (in absence of game outcomes). While $\beta_{t}=1$ is used in most of the previous works we cite, $\beta_{t}<1$ was also used, \eg in \cite{Koopman12}, \cite{Manderson18}, \cite{Koopman19}, and to take into account the time we may define it as \begin{align}\label{beta.t} \beta_{t}= \beta^{(\tau(t)-\tau(t-1))}, \end{align} where, again $\beta$ is the per-time decrease of the skills. The relationship \eqref{btheta.t.t1} may be presented as \begin{align}\label{dumped.Markov} \pdf( \boldsymbol{\theta}_{t} | \boldsymbol{\theta}_{t-1} ) = \mc{N}(\boldsymbol{\theta}_t; \beta_t \boldsymbol{\theta}_{t-1}, \boldsymbol{I} \epsilon_{t}), \end{align} where $\boldsymbol{I}$ is the identity matrix and \begin{align}\label{pdf.Normal} \mc{N}( \boldsymbol{\theta}; \boldsymbol{\mu} , \boldsymbol{V} )=\frac{1}{\sqrt{\tr{det}(2\pi\boldsymbol{V})}}\exp\left( -\frac{1}{2} (\boldsymbol{\theta}-\boldsymbol{\mu})\T\boldsymbol{V}^{-1}(\boldsymbol{\theta}-\boldsymbol{\mu}) \right) \end{align} is the multivariate Gaussian \gls{pdf} with the mean vector $\boldsymbol{\mu}$ and the covariance matrix $\boldsymbol{V}$. As an alternative to \eqref{dumped.Markov} we may also assume that the skills in $\boldsymbol{\theta}_{t}$ (conditioned on $\boldsymbol{\theta}_{t-1}$) are correlated, \ie the covariance matrix has non-zero off-diagonal elements. This is done, \eg in \cite{Knorr00}, \cite{Manderson18} in order to ensure that $\boldsymbol{1}\T\boldsymbol{\theta}_{t}=\tr{Const.}$, \ie that the skills at a given time $t$, sum up to the same constant. However, a direct consequence of the correlation between the skills is that, the outcome $y_t$ will affect not only the skills of the players involved in the game at time $t$ but also of all other players. This may result in rating which is slightly counter-intuitive but also, in the case of eSports, when the pool of the players is not predefined, this model may be difficult to justify. We will thus use the model \eqref{dumped.Markov} but we emphasize that our goal is not to justify particular assumptions underlying the skills-outcome model or the models for the skills' dynamics. We rather want to present a common framework which will i) show the relationship between the algorithms already known from the literature, and ii) allow us to create new online rating algorithms in a simple/transparent manner. \section{Estimation of the skills}\label{Sec:Tracking} If we suppose momentarily that the skills do not vary in time, \ie $\boldsymbol{\theta}_{t}=\boldsymbol{\theta}$ (or $\epsilon=0$), the problem of finding the skills may be formulated under the \gls{ml} principle \begin{align}\label{ML.estimation} \hat{\boldsymbol{\theta}}&=\mathop{\mr{argmax}}_{\boldsymbol{\theta}} \PR{ \un{y}_T | \boldsymbol{\theta} } =\mathop{\mr{argmax}}_{\boldsymbol{\theta}} \prod_{t=1}^{T} L(z_t/s; y_t)\\ &=\mathop{\mr{argmax}}_{\boldsymbol{\theta}} \sum_{t=1}^{T} \ell(z_t/s;y_t), \end{align} where $\ell(z_t/s; y_t)=\log L(z_t/s; y_t)$ is the log-likelihood, and we assumed that the observations $y_t$, when conditioned on the skills' difference, $z_t$, are independent. The solution \eqref{ML.estimation} can be found uniquely if the log-likelihood $\ell(\boldsymbol{x}_{t}\T\boldsymbol{\theta}/s; y_t)$ is a concave function of $\boldsymbol{\theta}$ which holds if $\ell(z; y_t)$ is concave in $z$. This is the ``mild'' condition we referred to in \secref{Sec:Introduction} and, in the rest of the work, we assume that this condition is satisfied.\footnote{For that, it is necessary that $\forall z, L''(z;y)L(z;y)\le [L'(z;y)]^2$.} Unlike the \gls{ml} approach which finds a \emph{point} estimate of the skills $\hat\boldsymbol{\theta}$, the Bayesian approach consists in finding the posterior \emph{distribution} of the skills $\pdf( \boldsymbol{\theta} | \un{y}_T )$. But, because finding the distributions is usually intractable, they are often assumed to belong to a particular parametrically defined family and the Gaussian approximation is often adopted \begin{align}\label{gauss.static.posterior} \pdf( \boldsymbol{\theta} | \un{y}_T )\approx \mc{N}(\boldsymbol{\theta}; \hat{\boldsymbol{\mu}}, \hat{\boldsymbol{V}}), \end{align} where, to find $\hat{\boldsymbol{\mu}}$ we should calculate the mean from the posterior distribution $\pdf( \boldsymbol{\theta} | \un{y}_T )$. This also may be difficult, so we may prefer to set $\hat{\boldsymbol{\mu}}=\hat{\boldsymbol{\theta}}$, where $\hat\boldsymbol{\theta}$ is the mode of the distribution $\pdf( \boldsymbol{\theta} | \un{y}_T )\propto \Pr\{ \un{y}_T |\boldsymbol{\theta} \}\pdf(\boldsymbol{\theta})$, and where $\pdf(\boldsymbol{\theta})$ reflect a priori knowledge about $\boldsymbol{\theta}$. For the non-informative prior $\pdf(\boldsymbol{\theta})$ the mode $\hat\boldsymbol{\theta}$ coincides with \eqref{ML.estimation}. While the problem of finding the posterior distribution of the skills is more general that finding the point estimate of the skills, with the model \eqref{gauss.static.posterior}, the mean $\hat\boldsymbol{\mu}$ may be treated as the \gls{map} point estimate, and the covariance $\hat\boldsymbol{V}$ expresses the uncertainity of the \gls{map} estimation. \subsection{Online rating} The Bayesian approach to the online rating consists in finding the distribution of the skills $\boldsymbol{\theta}_t$ conditioned on the games' outcomes $\un{y}_t$, \ie \begin{align}\label{theta.aposteriori} \pdf(\boldsymbol{\theta}_t| \un{y}_{t} )&= \pdf(\boldsymbol{\theta}_t| \un{y}_{t-1}, y_t ) \propto \PR{y_t|\boldsymbol{\theta}_t} \int \pdf(\boldsymbol{\theta}_t , \boldsymbol{\theta}_{t-1}| \un{y}_{t-1}, ) \dd \boldsymbol{\theta}_{t-1}\\ \label{theta.aposteriori.2} &=\PR{y_t|\boldsymbol{\theta}_t} \int \pdf(\boldsymbol{\theta}_t|\boldsymbol{\theta}_{t-1})\pdf(\boldsymbol{\theta}_{t-1}| \un{y}_{t-1}) \dd \boldsymbol{\theta}_{t-1}, \end{align} where we exploited the Markovian property of \eqref{dumped.Markov}, \ie the knowledge of $\boldsymbol{\theta}_{t-1}$ is sufficient to characterize the distribution of $\boldsymbol{\theta}_{t}$. The relationship \eqref{theta.aposteriori.2} allows us to calculate the distribution $\pdf(\boldsymbol{\theta}_{t}| \un{y}_{t})$ recursively, \ie from $\pdf(\boldsymbol{\theta}_{t-1}| \un{y}_{t-1})$. This is what the online rating is actually about: as soon as the game outcomes become available, we estimate the (distribution of the) skills by exploiting the previously obtained estimation results. Such recursive calculation of the posterior distribution from \eqref{theta.aposteriori.2} has been already dealt with, \eg in \cite{Fahrmeir92}, \cite{Fahrmeir94} which also recognized that the formulation \eqref{theta.aposteriori.2} underlies the well known Kalman filtering \cite[Ch.~12-13]{Moon00_Book}. In order to make \eqref{theta.aposteriori.2} tractable, \cite{Fahrmeir92} (and many works that followed) rely on a Gaussian parametric representation of $\pdf(\boldsymbol{\theta}_{t}| \un{y}_{t})$, akin to \eqref{gauss.static.posterior}, \ie $\tilde\pdf(\boldsymbol{\theta}_{t}| \un{y}_{t})=\mc{N}(\boldsymbol{\theta}_{t}; \boldsymbol{\mu}_t, \boldsymbol{V}_t)$ which allows us to implement the approximate version of \eqref{theta.aposteriori.2} as \begin{align}\label{theta.aposteriori.tilde} \hat \pdf(\boldsymbol{\theta}_{t}| \un{y}_{t}) &= \PR{y_t|\boldsymbol{\theta}_t} \int \pdf(\boldsymbol{\theta}_{t}|\boldsymbol{\theta}_{t-1})\tilde{\pdf}(\boldsymbol{\theta}_{t-1}| \un{y}_{t-1}) \dd \boldsymbol{\theta}_{t-1},\\ \label{project.aposteriori} \tilde \pdf(\boldsymbol{\theta}_{t}| \un{y}_{t} ) &\propto \mc{P}\left[\hat \pdf(\boldsymbol{\theta}_{t}| \un{y}_{t}) \right], \end{align} where $\mc{P}[\pdf(\boldsymbol{\theta})]$ is the operator projecting $\pdf(\boldsymbol{\theta})$ on the space of Gaussian distribution, of which we consider the following possible forms with varying degree of simplification \begin{align}\label{covariance.cases} \tilde\pdf(\boldsymbol{\theta}_t|\un{y}_t)= \begin{cases} \mc{N}(\boldsymbol{\theta}_t, \boldsymbol{\mu}_t, \boldsymbol{V}_t) & \text{matrix-covariance model}\\ \mc{N}(\boldsymbol{\theta}_t, \boldsymbol{\mu}_t, \tr{diag}(\boldsymbol{v}_t)) & \text{vector-covariance model}\\ \mc{N}(\boldsymbol{\theta}_t, \boldsymbol{\mu}_t, v_t\boldsymbol{I}) & \text{scalar-covariance model} \end{cases}, \end{align} where $\tr{diag}(\boldsymbol{v})$ is the diagonal matrix with diagonal elements gathered in the vector $\boldsymbol{v}$. The vector- and scalar-covariance models are particularly suited for online rating when the number of players, $M$, is large, because we only need to estimate a vector/scalar instead of $M\times M$ covariance matrix. The vector-covariance model is the basis for the derivation of the TrueSkill and Glicko algorithms. The scalar-covariance model may be found in \cite{Ingram21} (with an additional assumption of the variance being constant) and is justified in the sports where players are uniformly scheduled to play throughout the season. Then, at any point of time, the number of games played is similar for all players so we can expect that the uncertainty (expressed by the variance) be similar for all the posterior estimates. On the other hand, in eSports, there may be significant differences between the number of games played by different players; in particular, the players who are new to the game should be characterised by a larger uncertainty then those who have been playing for a long time, and thus the scalar-covariance model may be inadequate. The projection in \eqref{project.aposteriori} is done by finding $\tilde\pdf(\boldsymbol{\theta}_t|\un{y}_t)$ which minimizes the \gls{kl} distance to the projection argument $\hat\pdf(\boldsymbol{\theta}_t|\un{y}_t)$; this is done using the following. \begin{proposition}\label{Prop:DKL} The parameters of the distribution \eqref{covariance.cases}, $\tilde\pdf(\boldsymbol{\theta}_t|\un{y}_t)$, closest to $\hat\pdf(\boldsymbol{\theta}_t|\un{y}_t)$ (in the sense of the \gls{kl} distance), should be set as follows: \begin{align} \label{bmu.yt} \boldsymbol{\mu}_t&=\Ex[ \boldsymbol{\theta}_t |\un{y}_t]\\ \label{bV.yt} \boldsymbol{V}_t&=\Ex[(\boldsymbol{\theta}_t-\boldsymbol{\mu}_t)(\boldsymbol{\theta}_t-\boldsymbol{\mu}_t)\T |\un{y}_t]\\ \label{bv.t.DKL} \boldsymbol{v}_t&= \tnr{di}(\boldsymbol{V}_t)\\ \label{v.t.DKL} v_t & =\frac{1}{M}\boldsymbol{1}\T\boldsymbol{v}_t, \end{align} where $\tnr{di}(\boldsymbol{V})$ extracts the diagonal from the matrix $\boldsymbol{V}$, and $v_t$ in \eqref{v.t.DKL} calculates the arithmetic average of the elements in $\boldsymbol{v}_t$. \begin{proof} \appref{Proof:DKL} \end{proof} \end{proposition} So, to implement the projection $\mc{P}[\cdot]$, and irrespectively which covariance model in \eqref{covariance.cases} we decide to use, we must first calculate (exactly or approximately) the mean \eqref{bmu.yt} and the covariance \eqref{bV.yt} from the distribution $\pdf(\boldsymbol{\theta}_t|\un{y}_t)$. In the case of the vector (respectively, the scalar)-covariance model, we obtain the vector $\boldsymbol{v}_t$ (respectively, the scalar $v_t$) from the covariance matrix, via \eqref{bv.t.DKL} (respectively, via \eqref{v.t.DKL}). The algorithm based on the matrix-covariance model will be called a \gls{kf} rating and we show it in \secref{Sec:KF}. We will use it in \secref{Sec:SKF} to show how the \acrfull{skf} rating, based on the vector/scalar-covariance models may be obtained. \subsection{Kalman filter}\label{Sec:KF} The integral in \eqref{theta.aposteriori.tilde} is calculated from \eqref{pdf.Normal} and \eqref{dumped.Markov} as\footnote{We use the relationship $\mc{N}(\boldsymbol{\theta};\boldsymbol{\mu}_1,\boldsymbol{V}_1)\mc{N}(\boldsymbol{\theta}; \boldsymbol{\mu}_2,\boldsymbol{V}_2)=\mc{N}(\boldsymbol{\theta};\boldsymbol{\mu}_3,\boldsymbol{V}_3)\mc{N}(\boldsymbol{\mu}_1;\boldsymbol{\mu}_2;\boldsymbol{V}_1+\boldsymbol{V}_2)$ \cite[Ch.~8.4]{Barber12_Book}.} \begin{align}\label{integral.prod.matrix} \int \pdf(\boldsymbol{\theta}_{t}|\boldsymbol{\theta}_{t-1})\tilde{\pdf}(\boldsymbol{\theta}_{t-1}| \un{y}_{t-1}) \dd \boldsymbol{\theta}_{t-1} &= \mc{N}( \boldsymbol{\theta}_{t} ; \beta_{t}\boldsymbol{\mu}_{t-1} , \ov\boldsymbol{V}_{t} ), \end{align} where \begin{align}\label{ov.V.t} \ov\boldsymbol{V}_t &= \beta_{t}^2 \boldsymbol{V}_{t-1} +\epsilon_t \boldsymbol{I} \end{align} is the covariances matrix of the skills at time $t$ estimated from the observation $\un{y}_{t-1}$. Using \eqref{integral.prod.matrix} and \eqref{pdf.y.theta} in \eqref{theta.aposteriori.tilde} yields \begin{align}\label{pdf.n} \hat\pdf(\boldsymbol{\theta}_{t}| \un{y}_{t} ) &\propto \exp\big(Q(\boldsymbol{\theta}_{t})\big)\\ \label{Q.theta} Q(\boldsymbol{\theta})&=\ell(\boldsymbol{x}_{t}\T\boldsymbol{\theta}/s;y_t) -\frac{1}{2}(\boldsymbol{\theta}-\beta_{t}\boldsymbol{\mu}_{t-1})\T \ov\boldsymbol{V}_t^{-1}(\boldsymbol{\theta}-\beta_{t}\boldsymbol{\mu}_{t-1}) \end{align} and thus, by finding its mode \begin{align}\label{mean.mode.Q} \boldsymbol{\mu}_t = \mathop{\mr{argmax}}_{\boldsymbol{\theta}} Q(\boldsymbol{\theta}) \end{align} and the inverse of the negated Hessian, $\boldsymbol{V}_t=[-\nabla^2_{\boldsymbol{\theta}}Q(\boldsymbol{\theta})]^{-1}|_{\boldsymbol{\theta}=\boldsymbol{\mu}_t}$, we obtain the approximate solution to the projection \begin{align} \mc{P}[\hat\pdf(\boldsymbol{\theta}_{t}| \un{y}_{t} )]=\mc{N}(\boldsymbol{\theta}_t; \boldsymbol{\mu}_{t}, \boldsymbol{V}_{t}). \end{align} We have to solve \eqref{mean.mode.Q} and this may be done by replacing $\ell(\boldsymbol{x}_{t}\T\boldsymbol{\theta}/s;y_t)$ with a quadratic approximation \begin{align}\label{Taylor.expansion} \tilde\ell(\boldsymbol{x}_{t}\T\boldsymbol{\theta}/s;y_t)\approx \ell(\boldsymbol{x}_{t}\T\boldsymbol{\theta}_{\tr{o}}/s;y_{t})+ [\nabla_{\boldsymbol{\theta}} \ell(\boldsymbol{x}_{t}\T\boldsymbol{\theta}_{\tr{o}}/s;y_{t})]\T\big(\boldsymbol{\theta}-\boldsymbol{\theta}_{\tr{o}}\big) +\frac{1}{2}(\boldsymbol{\theta}-\boldsymbol{\theta}_{\tr{o}})\T\nabla_{\boldsymbol{\theta}}^2 \ell(\boldsymbol{x}_{t}\T\boldsymbol{\theta}_{\tr{o}}/s;y_{t}) \big(\boldsymbol{\theta}-\boldsymbol{\theta}_{\tr{o}}\big), \end{align} obtained by developing $\ell(\boldsymbol{x}_{t}\T\boldsymbol{\theta};y_t)$ via Taylor series around $\boldsymbol{\theta}_{\tr{o}}$, where the gradient and the Hessian of $\ell(\boldsymbol{x}_{t}\T\boldsymbol{\theta};y_t)$ are calculated as \begin{align}\label{grad.ell} \nabla_{\boldsymbol{\theta}} \ell(\boldsymbol{x}_{t}\T\boldsymbol{\theta};y_t) &= \frac{1}{s}g(\boldsymbol{x}_{t}\T\boldsymbol{\theta}/s;y_{t})\boldsymbol{x}_{t}\\ \nabla_{\boldsymbol{\theta}}^2 \ell(\boldsymbol{x}_{t}\T\boldsymbol{\theta};y_t) &= -\frac{1}{s^2}h(\boldsymbol{x}_{t}\T\boldsymbol{\theta}/s;y_{t})\boldsymbol{x}_{t}\boldsymbol{x}_{t}\T, \end{align} with the first- and the second derivatives of the scalar function $\ell(z;y_{t})$ denoted as \begin{align} g(z;y_{t})&=\frac{\dd}{\dd z}\ell(z;y_{t}),\\ h(z;y_{t})&=- \frac{\dd^2}{\dd z^2}\ell(z;y_{t}); \end{align} we note that $h(z;y_t)>0$ because $\ell(z;y_{t})$ is concave in $z$. Replacing $\ell(\cdot;y_{t})$ with $\tilde\ell(\cdot;y_{t})$ in \eqref{Q.theta}, the mode \eqref{mean.mode.Q} is obtained when the gradient of $Q(\boldsymbol{\theta})$ goes to zero, \ie \begin{align} \nabla_{\boldsymbol{\theta}} Q(\boldsymbol{\theta})|_{\boldsymbol{\theta}=\boldsymbol{\mu}_{t}}\approx \frac{1}{s}g(\boldsymbol{x}_{t}\T\boldsymbol{\theta}_{\tr{o}}/s;y_{t})\boldsymbol{x}_{t} -\frac{1}{s^2}h(\boldsymbol{x}_{t}\T\boldsymbol{\theta}_{\tr{o}}/s;y_{t})\boldsymbol{x}_{t}\boldsymbol{x}_{t}\T\big(\boldsymbol{\mu}_{t}-\boldsymbol{\theta}_{\tr{o}}\big) - \ov\boldsymbol{V}_t^{-1}\big(\boldsymbol{\mu}_{t}-\beta_{t}\boldsymbol{\mu}_{t-1}\big) =\boldsymbol{0}, \end{align} which is solved by \begin{align}\label{theta0.prime} \boldsymbol{\mu}_{t} &= \boldsymbol{V}_t\Big[ \boldsymbol{x}_{t} \big(\frac{1}{s}g(\boldsymbol{x}_{t}\T\boldsymbol{\theta}_{\tr{o}}/s; y_{t}) + \frac{1}{s^2}h(\boldsymbol{x}_{t}\T\boldsymbol{\theta}_{\tr{o}}/s;y_{t})\boldsymbol{x}_{t}\T\boldsymbol{\theta}_{\tr{o}}\big) +\ov\boldsymbol{V}_{t}^{-1}\beta_{t}\boldsymbol{\mu}_{t-1} \Big], \end{align} where \begin{align} \boldsymbol{V}_t & = \big[\frac{1}{s^2}h(\boldsymbol{x}_{t}\T\boldsymbol{\theta}_\tr{o}/s;y_{t})\boldsymbol{x}_{t}\boldsymbol{x}_{t}\T + \ov\boldsymbol{V}_t^{-1}\big]^{-1}\\ \label{eq:bV.full} &= \ov\boldsymbol{V}_t -\ov\boldsymbol{V}_t\boldsymbol{x}_{t}\boldsymbol{x}\T_{t}\ov\boldsymbol{V}_t \frac { h(\boldsymbol{x}_{t}\T\boldsymbol{\theta}_\tr{o}/s;y_{t})} {s^2+h(\boldsymbol{x}_{t}\T\boldsymbol{\theta}_\tr{o}/s;y_{t})\omega_t}, \end{align} $\omega_t=\boldsymbol{x}\T_{t}\ov\boldsymbol{V}_t\boldsymbol{x}_{t}$, and \eqref{eq:bV.full} is obtained via matrix inversion lemma \cite[Sec.~4.11]{Moon00_Book}. Combining \eqref{eq:bV.full} with \eqref{theta0.prime} yields \begin{align}\label{btheta.full} \boldsymbol{\mu}_{t}&=\beta_{t}\boldsymbol{\mu}_{t-1} + \ov\boldsymbol{V}_t\boldsymbol{x}_t\frac{sg(\boldsymbol{x}_{t}\T\boldsymbol{\theta}_{\tr{o}}/s;y_{t})+h(\boldsymbol{x}_{t}\T\boldsymbol{\theta}_{\tr{o}}/s;y_{t})\boldsymbol{x}_{t}\T(\boldsymbol{\theta}_{\tr{o}}-\beta_{t}\boldsymbol{\mu}_{t-1})}{s^2+h(\boldsymbol{x}_{t}\T\boldsymbol{\theta}_{\tr{o}}/s;y_{t})\omega_t}. \end{align} After this first update, a further refinement may be obtained by alternating between \eqref{btheta.full} and the reassignment $\boldsymbol{\theta}_{\tr{o}}\leftarrow\boldsymbol{\mu}_{t}$ but, of course, it is much easier to use just one iteration with $\boldsymbol{\theta}_{\tr{o}}=\beta_{t}\boldsymbol{\mu}_{t-1}$, which yields a simple update of the skills' mean and covariance matrix, and that defines the \gls{kf} rating: \begin{empheq}[box=\fbox]{align} \label{ov.bV.update.KF} \ov{\boldsymbol{V}}_t&\leftarrow \beta_{t}^2 \boldsymbol{V}_{t-1}+\epsilon_t \boldsymbol{I}\\ \omega_t &\leftarrow \boldsymbol{x}_t\T\ov{\boldsymbol{V}}_t \boldsymbol{x}_t\\ g_t &\leftarrow g(\beta_{t}\boldsymbol{x}_{t}\T\boldsymbol{\mu}_{t-1}/s;y_{t})\\ h_t &\leftarrow h(\beta_{t}\boldsymbol{x}_{t}\T\boldsymbol{\mu}_{t-1}/s;y_{t})\\ \label{oneshot.mean.KF} \boldsymbol{\mu}_{t}&\leftarrow\beta_{t}\boldsymbol{\mu}_{t-1} + \ov\boldsymbol{V}_t\boldsymbol{x}_t\frac{sg_t}{s^2+h_t\omega_t}\\ \label{ht.update.KF} h_t &\leftarrow h(\beta_{t}\boldsymbol{x}_{t}\T\boldsymbol{\mu}_{t}/s;y_{t})\\ \label{Vt.update.KF} \boldsymbol{V}_{t} &\leftarrow \ov\boldsymbol{V}_t -\ov\boldsymbol{V}_t\boldsymbol{x}_{t}\boldsymbol{x}\T_{t}\ov\boldsymbol{V}_t \frac { h_t} {s^2+h_t\omega_t}, \end{empheq} where \eqref{Vt.update.KF} is obtained from \eqref{eq:bV.full} by setting $\boldsymbol{\theta}_\tr{o}\leftarrow \beta_{t} \boldsymbol{\mu}_t$. On the other hand, since we are in the realm of approximations, we might also use $\boldsymbol{\theta}_{\tr{o}}=\beta_{t} \boldsymbol{\mu}_{t-1}$ in \eqref{eq:bV.full}, which amounts to ignoring/removing \eqref{ht.update.KF}; this is what we do in the rest of this work. The initialization is done by $\boldsymbol{V}_0 \leftarrow v_0 \boldsymbol{I}$, where $v_0$ is the prior variances of the skills. \subsection{Simplified Kalman Filters and Stochastic Gradient}\label{Sec:SKF} We can now translate \eqref{ov.bV.update.KF}-\eqref{Vt.update.KF} taking into account the fact that the matrices are diagonal, \ie by replacing $\boldsymbol{V}_t$ with $\tr{diag}(\boldsymbol{v}_t)$; this yields the following equations of the \gls{vskf} rating: \begin{empheq}[box=\fbox]{align} \label{ov.bV.update.vSKF} \ov{\boldsymbol{v}}_t&\leftarrow \beta_{t}^2 \boldsymbol{v}_{t-1}+\epsilon_t \boldsymbol{1}\\ \omega_t&\leftarrow\sum_{m\in\set{\mc{I}_{t},\mc{J}_{t}}} \ov{v}_{t,m}\\ g_t &\leftarrow g(\beta_{t}\boldsymbol{x}_{t}\T\boldsymbol{\mu}_{t-1}/s;y_{t})\\ h_t &\leftarrow h(\beta_{t}\boldsymbol{x}_{t}\T\boldsymbol{\mu}_{t-1}/s;y_{t})\\ \label{oneshot.mean.vSKF} \boldsymbol{\mu}_{t}&\leftarrow\beta_{t}\boldsymbol{\mu}_{t-1} + \ov\boldsymbol{v}_t\odot\boldsymbol{x}_t\frac{s g_t}{s^2+h_t\omega_t}\\ \label{Vt.update.vSKF} \boldsymbol{v}_{t} &\leftarrow \ov\boldsymbol{v}_t \odot\Big( \boldsymbol{1} - \ov\boldsymbol{v}_t \odot |\boldsymbol{x}_t|\frac { h_t} {s^2+h_t\omega_t}\Big), \end{empheq} where $\odot$ denotes the element-by-element multiplication, and the initialization is done as $\boldsymbol{v}_0\leftarrow v_0 \boldsymbol{1}$. In particular, exploiting the form of the scheduling vector $\boldsymbol{x}_t$ (with elements $x_{t,m}\in\set{-1,0,1}$), we see that the players who are not involved in the game ($m\notin\set{\mc{I}_t,\mc{J}_t}$ and thus $x_{t,m}=0$), are updated as \begin{align} \mu_{t,m}&=\beta_{t}\mu_{t-1,m},\\ v_{t,m}&=\beta_{t}^2 v_{t-1,m} +\epsilon_t. \end{align} Most often $\beta_{t}=1$ will be used and then the means of the skills do not change, but the variance grows with $t$. This is compatible with the intuition we have about the rating procedure: the players not involved in the game should not change their mean (remember, the mean is approximated by the mode, thus it should be interpreted as the \gls{ml} estimate of the skill), while the growing variance corresponds to increased uncertainty about the skills' values due to passed time. It is worthwhile to note that a similar algorithm was proposed in \cite{Paleologu13} for $\ell(z;y)$ being a quadratic function, \ie in the context in which the Kalman algorithm is conventionally used. As for the scalar-covariance model, we have to replace $\boldsymbol{v}_t$ with $v_t\boldsymbol{1}$ in the \gls{vskf} rating, which will yield the following equations of the \gls{sskf} rating: \begin{empheq}[box=\fbox]{align} \ov{v}_t&\leftarrow \beta_{t}^2 v_{t-1}+\epsilon_t\\ \omega_t&\leftarrow 2F \ov{v}_{t}\\ g_t &\leftarrow g(\beta_{t}\boldsymbol{x}_{t}\T\boldsymbol{\mu}_{t-1}/s;y_{t})\\ h_t &\leftarrow h(\beta_{t}\boldsymbol{x}_{t}\T\boldsymbol{\mu}_{t-1}/s;y_{t})\\ \label{oneshot.mean.sSKF} \boldsymbol{\mu}_{t}&\leftarrow\beta_{t}\boldsymbol{\mu}_{t-1} + \ov{v}_t\boldsymbol{x}_t\frac{s g_t}{s^2+h_t\omega_t} \\ \label{Vt.update.sSKF} v_{t} &\leftarrow \ov{v}_t \Big( 1 - \frac{\omega_t}{M}\frac {h_t} {s^2+h_t\omega_t}\Big), \end{empheq} where $F=|\mc{I}_{t}|=|\mc{J}_{t}|$ and the initialization requires setting $v_0$. Another simplification is obtained if we assume that the variance $\ov{v}_t$ is constant across time $t$, \ie $\ov{v}_t=\ov{v}$, as done also in \cite{Ingram21}. We obtain then the \gls{fskf} \begin{empheq}[box=\fbox]{align} g_t &\leftarrow g(\beta_{t}\boldsymbol{x}_{t}\T\boldsymbol{\mu}_{t-1}/s;y_{t})\\ h_t &\leftarrow h(\beta_{t}\boldsymbol{x}_{t}\T\boldsymbol{\mu}_{t-1}/s;y_{t})\\ \label{oneshot.mean.fSKF} \boldsymbol{\mu}_{t}&\leftarrow\beta_{t}\boldsymbol{\mu}_{t-1} + \ov{v}\boldsymbol{x}_t\frac{sg_t }{s^2+h_t 2F\ov{v}}, \end{empheq} where the initialization requires setting $\ov{v}$. All the \gls{skf} rating algorithm adjust the mean in the direction of the gradient, $g_t$, of the log-likelihood $\ell(z_t/s;y_t)$; they differ in the way the adjustment step is calculated from the previous results which mostly depends on the the second-order derivative, $h_t$. And finally, ignoring $h_t$, \eqref{oneshot.mean.fSKF} may be written as \begin{empheq}[box=\fbox]{align}\label{SG.update} \boldsymbol{\mu}_{t}\leftarrow \boldsymbol{\mu}_{t-1} + \ov{v}/s \boldsymbol{x}_t g_t, \end{empheq} which is the same as the \gls{sg} algorithm with the adaptation step being proportional to $\ov{v}$, the latter has the meaning of the posterior variance of the skills (which we suppose to be known). As we will see, depending on the model, ignoring $h_t$ may make sense. In particular, using the Bradley-Terry or the Davidson models, we obtain $\lim_{|z|\rightarrow{\infty}}h(z;y_t) =0$, see \eqref{h.Logistic} and \eqref{Davidson.h}. That is, for large differences between the skills, the second derivative of $\ell(z;y_t)$, may indeed be close to zero. At this point it is useful to comment on the use of the scale. While in practice, $s=400$, \eg \cite{fide_calculator}, \cite{eloratings.net}, \cite{Silver20} or $s=600$ \cite{fifa_rating} were applied, the value of $s$ is entirely arbitrary and, actually, irrelevant from the algorithmic point of view, as stated in the following: \begin{proposition}\label{Prop:SKF.scale} We denote by $\boldsymbol{\mu}_t(s,v_0,\epsilon)$ and by $\boldsymbol{V}_t(s,v_0,\epsilon)$ the mean and the covariance matrix of the skills obtained using the \gls{kf} algorithm with the scale $s$, and initialization parameters $v_0$ and $\epsilon$. Then \begin{align} \boldsymbol{\mu}_t(s,s^2v_0,s^2\epsilon)&=s\boldsymbol{\mu}_t(1,v_0,\epsilon)\\ \boldsymbol{V}_t(s,s^2v_0,s^2\epsilon)&=s^2\boldsymbol{V}_t(1,v_0,\epsilon). \end{align} For the \gls{vskf} algorithm we will obtain $\boldsymbol{v}_t(s,s^2v_0,s^2\epsilon)=s^2\boldsymbol{v}_t(1,v_0,\epsilon)$ while for the \gls{sskf} algorithm, $v_t(s,s^2v_0,s^2\epsilon)=s^2v_t(1,v_0,\epsilon)$, where $\boldsymbol{v}_t$ and $v_t$ are shown to depend on $v_0$ and $\epsilon$. On the other hand, for the \gls{fskf} and the \gls{sg}, the mean depends solely on $\ov{v}$ and thus we obtain $\boldsymbol{\mu}_t(s,s^2 \ov{v})=s\boldsymbol{\mu}_t(1,\ov{v})$. \end{proposition} \begin{proof} See \appref{Proof:SKF.scale}. \end{proof} \propref{Prop:SKF.scale} simply says that the scale, $s$, is not identifiable from the data so we can ignore it, \eg use $s=1$ (which simplifies the notation) and adjust only the parameters $\beta$, $\epsilon$, and $v_0$. The scale may be then included in the final results by multiplying the means $\boldsymbol{\mu}_t$ (by $s$) and the co-/variances $\boldsymbol{V}_t$, $\boldsymbol{v}_t$, or $v_t$ (by $s^2$). Nevertheless, by introducing the scale we are able to compare our rating algorithms with those that can be found in the literature. In particular, we can rewrite \eqref{SG.update} \begin{align}\label{SG.update.s1} \boldsymbol{\mu}_{t}\leftarrow \boldsymbol{\mu}_{t-1} + K s \boldsymbol{x}_t g_t, \end{align} where we use $\ov{v}=Ks^2$ with $K$ being the adaptation step defined for the scale $s=1$. Since $K$ should be seen as the variance $\ov{v}$, it clarifies the well-known variable-step strategy in the \gls{sg} adaptation, where the step $K$ is decreased after many games are played: this is when the posterior variance decreases. \begin{comment} \subsection{Probabilities of the game outcomes}\label{Sec:outcome.proba} One of the appealing applications of the rating algorithm is their forecasting capability, \ie the application of the rating results to calculate the probability of the outcome, $y_t$, \emph{before} the game takes place.\footnote{Here, with a slight abuse of notation we use $y_t$ to denote the random variable modelling the outcome of the game, while in the rest of the text, $y_t$ is used synonymously with the outcome itself.} In fact, the forecasting quality metrics (defined in \secref{Sec:Num.results}) will allow us to compare the rating algorithms. We will only consider a one-step prediction which consists in finding the probability \begin{align} \PR{y_{t}=y| \un{y}_{t-1}} &=\int \PR{y_{t}=y|z_{t}} \pdf(z_t|\un{y}_{t-1})\dd z_t \\ \label{Pr.y.z.theta} &=\int L(z/s;y) \mc{N}(z; \beta_{t}\boldsymbol{x}_t\T\boldsymbol{\mu}_{t-1}, \omega_{t})\dd z, \end{align} and since the integral \eqref{Pr.y.z.theta} is rarely analytically tractable,\footnote{In the so-called Thurston model, see \secref{Sec:New.Ratings}, where $L(z;1)=\Phi(z)$ is the Gaussian \gls{cdf}, the probability \eqref{Pr.y.z.theta} may be found \emph{exactly} as \begin{align} \PR{y_t=1|\un{y}_{t-1}} &=\int \Phi\big(z/s\big) \mc{N}(z;\beta_{t}\boldsymbol{x}_{t}\T\boldsymbol{\mu}_{t-1},\omega_t) \dd z \label{proba.outcome.Thurstone} =\Phi\left(\frac{\beta_{t}\boldsymbol{x}_{t}\T\boldsymbol{\mu}_{t-1}}{\sqrt{s^2+\omega_t}}\right). \end{align} } we will rather seek its approximation $\hat{P}_t(y)\approx\PR{y_{t}=y| \un{y}_{t-1}}$. For example, when $L(z;y)$ is a logistic function, $\hat{P}_t(y)$ may be expressed using the same logistic function, see \cite[Eq.~(16)]{Glickman99}. Here, in the spirit of keeping the expressions applicable to any skills-outcome model, and as suggested by \cite{Ingram21}, we will use a numerical quadrature \begin{align}\label{P.hat.t} \hat{P}_t(y)= \frac{1}{\sqrt{\pi}}\sum_{k=0}^{K} \eta_k L\big((\sqrt{2\omega_t} z_k+\beta_{t}\boldsymbol{x}\T_{t}\boldsymbol{\mu}_{t-1})/s;y\big), \end{align} where $z_k$ and $\eta_k$ are, respectively the knots and the weights of the $K$-points Gauss-Hermite quadrature, and $K\ge 15$ proved to be sufficient in our work. \end{comment} \begin{comment} Here we rather use the approach, from which the recursive algorithm to find the posterior distributions $\pdf(\boldsymbol{\theta}_t|\un{\boldsymbol{y}}_t)$ was derived and, instead of numerical quadrature, we will rely on the formulas for $g(z;y)$ and $h(z;y)$. Namely, the Taylor expansion similar to the one used in \eqref{Taylor.expansion} \begin{align}\label{Taylor.proba} \log L(z;y)\approx \ell(z_0;y) + g(z_0; y)(z-z_0) - \frac{1}{2}h(z_0; y)(z-z_0)^2, \end{align} applied in \eqref{Pr.y.z.theta} with $z_0=\beta_{t}\boldsymbol{x}_t\T\boldsymbol{\mu}_{t-1}$ yields $\PR{y_t=y| \un{y}_{t-1}}\approx \hat{P}_t(y)$ \begin{align}\label{P.t.1} \hat{P}_t(y) &= \frac{L(z_0;y)}{\sqrt{2\pi\omega_t}}\int \exp\left( g(z_0;y)(z-z_0) - \frac{1}{2}[h(z_0;y)+1/\omega_t](z-z_0)^2 \right) \dd z\\ \label{P.t.2} &=\frac{L(z_0;y)}{\sqrt{h(z_0;y)\omega_t+1}} \exp\left( \frac{g^2(z_0;y)\omega_t}{2\big( h(z_0;y)\omega_t+1\big)} \right), \end{align} where to pass from \eqref{P.t.1} to \eqref{P.t.2} we rely on the relationship $\int \exp(bx-\frac{1}{2}ax^2)=\sqrt{2\pi /a}\exp(b^2 /(2a))$ \cite[Sec.~8.4.1]{Barber12_Book}; this is the so-called Laplace approximation formula \cite[Sec.~18.2.2]{Barber12_Book}. Since \eqref{P.t.2} only approximates the probability, the results should be normalized \begin{align}\label{outcome.proba.normalized} \hat{P}_t(y) \leftarrow \frac{\hat{P}_t(y)}{\sum_{q\in\mc{Y}} \hat{P}_t(q)}. \end{align} \end{comment} \section{From skills-outcome models to new online ratings}\label{Sec:New.Ratings} We will turn to the popular skills-outcome models that has been often used and find the functions $g(z;y_t)$ and $h(z;y_t)$ which must be used in the \gls{kf} and the \gls{skf} algorithms \begin{itemize} \item Thurston model \cite{Thurston27} (binary games) uses $y_t=0$ for the away win and $y_t=1$ for the home win: \begin{align} \label{Thurston.L} L(z;y_{t}) &= \Phi\left(z\right)\IND{y_{t}=1}+\Phi\left(-z\right)\IND{y_{t}=0},\\ \label{Thurston.g} g(z;y_{t}) &= V(z)\IND{y_t=1} -V(-z)\IND{y_t=0},\\ \label{Thurston.h} h(z;y_{t})& =W(z)\IND{y_t=1} +W(-z)\IND{y_t=0}, \end{align} where $\Phi(z)=\int_{-\infty}^z\ov\mc{N}(t)\dd t$, $\ov\mc{N}(t)=\mc{N}(t;0,1)$, and \begin{align} \label{Thurston.V} V(z)&=\frac{\ov{\mc{N}}(z)}{ \Phi\big( z\big)}, \\ \label{Thurston.W} W(z)&=-V'(z)=V(z)\big(z+V(z)\big). \end{align} \item Bradley-Terry model \cite{Bradley52} (binary games), with $y_t=0$ (away win) and $y_t=1$ (home win) \begin{align}\label{L.Logistic} L(z;y_{t})&=F_\tr{L}\big( z\big)\IND{y_{t}=1}+F_\tr{L}\big( -z\big)\IND{y_{t}=0},\\ \label{g.Logistic} g(z;y_{t}) &= \ln 10 \big( y_t-F_\tr{L}(z) \big),\\ \label{h.Logistic} h(z;y_{t}) &= \left(\ln 10\right)^2 F_\tr{L}(z)F_\tr{L}(-z), \end{align} where we use the logistic function \begin{align}\label{F.Logistic} F_\tr{L}(z)=\frac{1}{1+10^{-z}}. \end{align} \item Davidson draw model \cite{Davidson70}, \cite{Szczecinski20} with $y_t=0$ (away win), and $y_t=1$ (draw), and $y_t=2$ (home win) \begin{align} \label{Davidson.L} L(z;y_{t})&=F_\tr{D}(-z)\IND{y_{t}=0}+\kappa\sqrt{F_\tr{D}(-z)F_\tr{D}(z)}\IND{y_{t}=1}+F_\tr{D}(z)\IND{y_{t}=2},\\ \label{Davidson.g} g(z;y_{t}) &= 2\ln 10 \big( \hat{y}_t-G_\tr{D}(z) \big)\\ \label{Davidson.h} h(z;y_{t})& =\left(\ln 10\right)^2\frac{\kappa 10^{z}+4 +\kappa 10^{-z}}{(10^{z}+\kappa+ 10^{-z})^2}, \end{align} where $\hat{y}_t=\frac{1}{2}y$ may be treated as the ``score'' of the game, and \begin{align} F_\tr{D}(z)&=\frac{10^z}{10^{-z}+\kappa+10^{z}},\\ \label{G.D.z} G_\tr{D}(z)&=\frac{10^{z}+\kappa/2}{10^{-z}+\kappa+10^{z}}. \end{align} Note that setting $\kappa=0$, \ie removing the possibility of draws, we obtain $F_\tr{L}(z) = G_\tr{D}(z/2)$, \ie we recover the equations of the Bradley-Terry model with halved scale. A simple, but less obvious observation is that, setting $\kappa=2$, we obtain $G_\tr{D}(z)=F_\tr{L}(z)$ and thus $g(z;y_t)$ in \eqref{g.Logistic} is half of \eqref{Davidson.g}. A direct consequence, observed in \cite{Szczecinski20}, is that, even if the Bradley-Terry and the Davidson models are different, their \gls{sg} updates \eqref{SG.update} may be identical. \end{itemize} \subsection{Comparison with TrueSkill algorithm}\label{Sec:TrueSkill} The TrueSkill algorithms is derived assuming that, for given skills $\boldsymbol{\theta}_t$, the outcome, $y_t$, is obtained by discretization of a variable $d_t=z_t +u_t=\boldsymbol{x}\T_t\boldsymbol{\theta}_t+u_t$, \ie \begin{align}\label{y_t.TrueSkill} y_t=\IND{d_t \ge 0}, \end{align} where $u_t$ is a zero-mean Gaussian variance with variance $\sigma^2$. Thus, $\PR{y_t=y|z_t}=L(z_t/\sigma;y)$, where $L(\cdot,\cdot)$ is given by the Thurston equation \eqref{Thurston.L}. So while $\sigma^2$ is the variance of the variable $u_t$, we may also treat $\sigma$ as the scale in the Thurston model.\footnote{To be more precise, the variance $\sigma^2$ in the TrueSkill algorithm is proportional to the number of players in the team, $F$.} Considering the binary games, the TrueSkill algorithm, described in \cite{trueskill} and \cite{Herbrich06} for two players, may be summarized as follows (for $m\in\set{i_t,j_t}$): \begin{align} \ov{\boldsymbol{v}}_t &\leftarrow\boldsymbol{v}_{t-1}+\epsilon \boldsymbol{1},\\ \omega_t &\leftarrow \ov{v}_{t,i_t}+\ov{v}_{t,j_t},\\ \tilde{\sigma}_t&\leftarrow \sigma \sqrt{1+\omega_t/\sigma^2},\\ \tilde{g}_t &\leftarrow g(\boldsymbol{x}_{t}\T\boldsymbol{\mu}_{t-1}/\tilde{\sigma}_t; y_t),\\ \tilde{h}_t &\leftarrow h(\boldsymbol{x}_{t}\T\boldsymbol{\mu}_{t-1}/\tilde{\sigma}_t; y_t),\\ \label{update.theta.t.TrueSkill} \mu_{t,m} &\leftarrow\mu_{t-1,m} + x_{t,m}\ov{v}_{t,m} \frac{ \tilde{g}_t\sigma}{\sigma^2\sqrt{1+\omega_t/\sigma^2}}, \\ \label{update.var.t.TrueSkill} v_{t,m} & \leftarrow\ov{v}_{t ,m} \Big(1 - \frac{\ov{v}_{t,m}\tilde{h}_t}{\sigma^2+\omega_t}\Big), \end{align} where $h(\cdot;\cdot)$ and $g(\cdot;\cdot)$ are derived in \eqref{Thurston.L}-\eqref{Thurston.W} for the Thurston model. The differences with the \gls{vskf} algorithm are the following: i) the scale used to calculate the fist and the second derivatives is increased by the factor $\sqrt{1+\omega_t/\sigma^2}$, and ii) the denominator of the update terms in \eqref{update.theta.t.TrueSkill} and \eqref{update.var.t.TrueSkill} is not affected by $h_t$ as it is the case in the corresponding equations \eqref{oneshot.mean.vSKF} and \eqref{Vt.update.vSKF} of the \gls{vskf} algorithm. In particular, knowing that $\omega_t h_t\leq \omega_t$, we see that the posterior variance $v_{t,m}$ decreases faster in the \gls{vskf} algorithm than it does in the TrueSkill algorithm. Numerical examples shown in \secref{Sec:Num.results} will allow us to asses the impact of these differences between the algorithms. \subsection{Comparison with Glicko algorithm}\label{Sec:Glicko} The Glicko algorithm, defined for two players in \cite[Eqs.~(9)-(10)]{Glickman99} may be formulated using our notation as follows (for $m\in\set{i_t,j_t}$): \begin{align} \ov{\boldsymbol{v}}_t&\leftarrow \boldsymbol{v}_{t-1} +\epsilon_t\boldsymbol{1},\\ \omega_t&\leftarrow \ov{v}_{t,i_t}+\ov{v}_{t,j_t},\\ \label{tilde.sigma.Glicko} \tilde{\sigma}_{t,m}&\leftarrow \sigma r( \omega_t-\ov{v}_{t,m}),\\ \tilde{g}_{t,m}&\leftarrow g\big(\boldsymbol{x}_{t}\T\boldsymbol{\mu}_{t-1}/\tilde{\sigma}_{t,m}; y_t )\\ \tilde{h}_{t,m}&\leftarrow h\big(\boldsymbol{x}_{t}\T\boldsymbol{\mu}_{t-1}/\tilde{\sigma}_{t,m}; y_t\big)\\ \label{update.mu.t.Glicko} \mu_{t,m}&\leftarrow\mu_{t-1,m} + \ov{v}_{t,m} x_{t,m} \frac{\tilde{\sigma}_{t,m} \tilde{g}_{t,m}}{\tilde{\sigma}_{t,m}^2+\ov{v}_{t,m} \tilde{h}_{t,m}},\\ \label{update.v.t.Glicko} v_{t,m} & \leftarrow \ov{v}_{t,m}\frac{\tilde{\sigma}_{t,m}^2}{\tilde{\sigma}_{t,m}^2+\ov{v}_{t,m} \tilde{h}_{t,m}}, \end{align} where \begin{align} \label{rtjt} r(v)&=\sqrt{1+ \frac{v a}{\sigma^2}}, \end{align} $a=3\ln^2 10/\pi^2$ is the factor which allows us to approximate the logistic distribution with the Gaussian distributions (see discussion in \secref{Sec:Synthetic}), and $g(z;y_t)$ and $h(z;y_t)$ are defined for the Bradley-Terry model, respectively, in \eqref{g.Logistic} and \eqref{h.Logistic}. The difference between $g_{t}$, $h_t$ in the \gls{vskf} rating (based on the Bradley-Terry model) and $\tilde{g}_{t,m}$, $\tilde{h}_{t,m}$ in the Glicko algorithm, is due to the presence of the factor $r(\omega_t-\ov{v}_{t,m})$ which multiplies the scale in \eqref{tilde.sigma.Glicko}. However, this factor tends to unity when the variance of the opposing players decreases, as it is the case after convergence. Then, we may use $\tilde{g}_{t,m}\approx g_t$ and $\tilde{h}_{t,m}\approx h_t$. Further, if we use $\ov{v}_{t,m}$ instead of $\omega_t$ in the denominator of the mean \eqref{oneshot.mean.vSKF} and of the variance \eqref{Vt.update.vSKF} updates in the \gls{vskf} algorithm, we will obtain, respectively, the Glicko updates of the mean, \eqref{update.mu.t.Glicko} and of the variance, \eqref{update.v.t.Glicko}. But, because $\ov{v}_{t,m}<\omega_t$, the update step size is always larger in the Glicko algorithm comparing to the \gls{vskf} algorithm. To asses the impact of the above differences on the performance we will evaluate the Glicko algorithm using numerical examples in \secref{Sec:Num.results}. \subsection{Comparison with Elo algorithm}\label{Sec:Elo} Considering again the binary games and using the Bradley-Terry model, the \gls{sg} update \eqref{SG.update.s1} may be written as \begin{align}\label{SG.BT} \boldsymbol{\mu}_t \leftarrow \boldsymbol{\mu}_{t-1} + \tilde{K} s \boldsymbol{x}_t \big(y_t-F_\tr{L}(z_t)\big), \end{align} where $\tilde{K}$ absorbs the term $\ln 10$ from \eqref{g.Logistic}, and we recognize \eqref{SG.BT} as the well-known Elo rating algorithm. The fact that the Elo algorithm may be seen as the \gls{sg} update in the Bradly-Terry model has been already noted before, \eg in \cite{Kiraly17}, \cite{Szczecinski20}, \cite{Lasek20}. On the other hand, using the Thurston model and after simple algebraic transformations of \eqref{Thurston.g} we obtain the following \gls{sg} update: \begin{align}\label{SG.Thurston} \boldsymbol{\mu}_t \leftarrow \boldsymbol{\mu}_{t-1} + K s \boldsymbol{x}_t \big(y_t-\Phi(z_t)\big)\xi(z_t), \end{align} where $\xi(z)=\ov{\mc{N}}(z)/\big[\Phi(z)\Phi(-z)]\big]$. Since $\xi(z)$ is not constant, \ie it depends on $z$, \eqref{SG.Thurston} is \emph{not the same} as the Elo rating algorithm proposed initially by \cite{Elo08_Book} under the following form: \begin{align}\label{Elo.original} \boldsymbol{\mu}_t \leftarrow \boldsymbol{\mu}_{t-1} + K s \boldsymbol{x}_t \big(y_t-\Phi(z_t)\big). \end{align} In other words, the original version of the Elo algorithm \eqref{Elo.original} does not implement the \gls{sg} update in the Thurston model. We indicate it merely for completeness of the analysis because, nowadays, the Elo algorithm is practically always used with the Bradley-Terry model as defined in \eqref{SG.BT}. \begin{comment} \subsection{Multidimensional outcomes and skills}\label{Sec:Generalizations} We will adapt now the rating algorithms to a more complex skills-outcome model: first, instead of the scalar game outcome $y_t$, we assume that we observe the game-points (such as goals), denoted as $y_{\tr{h},t}$ and $y_{\tr{a},t}$ which are scored, respectively, by the home and the away players; the outcome is thus two-dimensional. Further, players will be assigned the offensive and the defensive skills, denoted, respectively, as $\boldsymbol{\theta}_{\tr{off},t}$ and $\boldsymbol{\theta}_{\tr{def},t}$, and the relationships between the outcomes and the skills is then defined akin to \eqref{pdf.y.theta} \begin{align}\label{L.h} \PR{y_{\tr{h},t}|\boldsymbol{\theta}_{\tr{off},t},\boldsymbol{\theta}_{\tr{def},t}}&= L\big(z_{\tr{h},t}/s; y_{\tr{h},t}), \\ \label{L.a} \PR{y_{\tr{a},t}|\boldsymbol{\theta}_{\tr{off},t},\boldsymbol{\theta}_{\tr{def},t}}&= L\big(z_{\tr{a},t}/s; y_{\tr{a},t}) \end{align} with \begin{align} z_{\tr{h},t}&=\boldsymbol{x}_{\tr{h},t}\T\boldsymbol{\theta}_{\tr{off},t} -\boldsymbol{x}_{\tr{a},t}\T\boldsymbol{\theta}_{\tr{def},t},\\ z_{\tr{a},t}&=\boldsymbol{x}_{\tr{a},t}\T\boldsymbol{\theta}_{\tr{off},t} -\boldsymbol{x}_{\tr{h},t}\T\boldsymbol{\theta}_{\tr{def},t}. \end{align} Since \eqref{L.h}-\eqref{L.a} assume that both outcomes $y_{\tr{h},t}$ and $y_{\tr{a},t}$ are conditionally independent, we deal in fact with two ``virtual'' simultaneous games with independent outcomes: in the first one, the offensive home players are playing against the defensive away players (the outcome is given by $y_{\tr{h},t}$), and the roles are reversed in the second game, \ie the offensive away players are playing against the defensive home players (the outcome is then given by $y_{\tr{a},t}$).\footnote{In most cases, and for sure in individual sports, this is merely a conceptual separation. On the other hand, in American football, there are, indeed, independent defensive and offensive sub-teams so the distinction between them via independent skills is rooted in the game principles. Nevertheless, the points can still be scored both by the offensive and the defensive sub-teams and this fact is not taken into account by our model.} Then, if the goals scored, $y_{\tr{h},t}$ and $y_{\tr{a},t}$, are modelled using the Poisson distribution, \cite{Maher82} \begin{align} \PR{y|z}=\frac{1}{y!}\big[\lambda(z)\big]^y\mr{e}^{-\lambda(z)}, \end{align} whose mean is modelled as $\lambda(z) = \mr{e}^{z+c}$, where $c$ is a constant, we obtain \begin{align} L(z;y)&=\frac{1}{y!}\mr{e}^{y (z+c)}\exp\big(-\mr{e}^{z+c} \big),\\ g(z;y)&= y - \lambda(z),\\ h(z;y)&= \lambda(z), \end{align} which can be directly used in the algorithms from \secref{Sec:Tracking}. For example, opting for the \gls{fskf}, the updates are defined as follows: \begin{align} \label{lambda.h.fskf} z_{\tr{h},t}&\leftarrow \beta_{t} \boldsymbol{x}_{\tr{h},t}\T\boldsymbol{\mu}_{\tr{off},t-1} -\beta_{t}\boldsymbol{x}_{\tr{a},t}\T\boldsymbol{\mu}_{\tr{def},t-1},\\ z_{\tr{a},t}&\leftarrow \beta_{t} \boldsymbol{x}_{\tr{a},t}\T\boldsymbol{\mu}_{\tr{off},t-1} -\beta_{t}\boldsymbol{x}_{\tr{h},t}\T\boldsymbol{\mu}_{\tr{def},t-1},\\ \lambda_{\tr{h},t}&\leftarrow \exp(c+z_{\tr{h},t}/s),\\ \label{lambda.a.fskf} \lambda_{\tr{a},t}&\leftarrow \exp(c+z_{\tr{a},t}/s),\\ \label{mu.off.fskf} \boldsymbol{\mu}_{\tr{off},t}&\leftarrow \beta_{t}\boldsymbol{\mu}_{\tr{off},t-1} + s \Big(\boldsymbol{x}_{\tr{h},t}\frac{y_{\tr{h},t}-\lambda_{\tr{h},t}}{s^2/\ov{v}+2\lambda_{\tr{h},t}} + \boldsymbol{x}_{\tr{a},t}\frac{y_{\tr{a},t}-\lambda_{\tr{a},t}}{s^2/\ov{v}+2\lambda_{\tr{a},t}} \Big),\\ \label{mu.def.fskf} \boldsymbol{\mu}_{\tr{def},t}&\leftarrow \beta_{t}\boldsymbol{\mu}_{\tr{def},t-1} - s \Big(\boldsymbol{x}_{\tr{a},t}\frac{y_{\tr{h},t}-\lambda_{\tr{h},t}}{s^2/\ov{v}+2\lambda_{\tr{h},t}} + \boldsymbol{x}_{\tr{h},t}\frac{y_{\tr{a},t}-\lambda_{\tr{a},t}}{s^2/\ov{v}+2\lambda_{\tr{a},t}} \Big), \end{align} where we assume $F=|\mc{I}_t|=|\mc{J}_t|=2$. We note that the updates \eqref{mu.off.fskf}-\eqref{mu.def.fskf} are obtained by applying the algorithm equations \emph{simultaneously} to both outcomes defined in \eqref{L.h} and \eqref{L.a}: the offensive mean $\boldsymbol{\mu}_{\tr{off},t}$ is affected simultaneously by the outcome $y_{\tr{h},t}$ (and the scheduling vector $\boldsymbol{x}_{\tr{h},t}$ in \eqref{L.h}) and by the outcome $y_{\tr{a},t}$ (and the scheduling vector $\boldsymbol{x}_{\tr{a},t}$ in \eqref{L.a}); this explains why there are two update terms in \eqref{mu.off.fskf}. The same reasoning applies to \eqref{mu.def.fskf}. In the case we decide to abandon the assumption of multi-dimensional skills, we have to set $\boldsymbol{\mu}_t=\boldsymbol{\mu}_{\tr{off},t}=\boldsymbol{\mu}_{\tr{def},t}$, so that \eqref{lambda.h.fskf}-\eqref{mu.def.fskf} become \begin{align} z_t &\leftarrow \beta_{t} \boldsymbol{x}_{t}\T\boldsymbol{\mu}_{t-1},\\ \lambda_{\tr{h},t}&\leftarrow \exp(c+ z_t/s ),\\ \lambda_{\tr{a},t}&\leftarrow \exp(c - z_t/s),\\ \label{mu.t.Poisson.scalar.skills.fskf} \boldsymbol{\mu}_{t}&\leftarrow \beta_{t}\boldsymbol{\mu}_{t-1} +s \boldsymbol{x}_{t} \Big(\frac{y_{\tr{h},t}-\lambda_{\tr{h},t}}{s^2/v_0+2\lambda_{\tr{h},t}} - \frac{y_{\tr{a},t}-\lambda_{\tr{a},t}}{s^2/v_0+2\lambda_{\tr{a},t}} \Big), \end{align} where $\boldsymbol{x}_t=\boldsymbol{x}_{\tr{h},t}-\boldsymbol{x}_{\tr{a},t}$ is the conventional scheduling vector, see \eqref{z.t}. We note that \eqref{mu.t.Poisson.scalar.skills.fskf} is different from the \gls{sg} update which would follow \eqref{SG.update} and which was also shown in \cite[Eq. (18)]{Lasek20} \begin{align} \boldsymbol{\mu}_{t}&\leftarrow \boldsymbol{\mu}_{t-1} + K s \boldsymbol{x}_{t} \big((y_{\tr{h},t}-y_{\tr{a},t})-(\lambda_{\tr{h},t}-\lambda_{\tr{a},t}) \big). \end{align} \end{comment} \section{Numerical examples}\label{Sec:Num.results} We will proceed in two steps. First, in order to assess the effect of approximations, we will use the synthetic data generated using the predefined skills-outcome models and the Gaussian random walk for skills dynamics defined in \secref{Sec:Model}. In this way, knowing exactly the model underlying the data and using it for the derivation of the algorithm, the eventual differences between the algorithms will be due to the approximations. Further the effect of the model mismatch may be also assessed using the algorithms based on the model different from the one used to generate the data. The insight obtained from the synthetic examples will allow us to interpret the results obtained from empirical data. \subsection{Synthetic data}\label{Sec:Synthetic} We suppose there are $M$ players in the pool and every ``day" (or any other time unit) there are $J=M/2$ games with random scheduling; the season lasts $D$ days. The time dependence required by \eqref{epsilon.t} is defined as $\tau(1)=\tau(2)=\ldots=\tau(J)=0$, $\tau(J+1)=\tau(J+2)=\ldots=\tau(2J)=1$ etc. The number of games in the season is equal to $T=DJ$. We use $M=20$ ($J=10$) and $D=100$, thus $T=1000$.\footnote{This bears a resemblance to a ``typical'' football season where, on average each team plays once per week. In practice, of course, the number of weeks, $D$, cannot be too large, \eg $D<40$.} To generate the sequence of skills $\boldsymbol{\theta}_t, t=1, \ldots, T$ we draw $\theta_{0,m}$ from a zero-mean, unit-variance Gaussian distribution; the remaining skills are obtaied using \eqref{dumped.Markov} with $\hat{\beta}=0.998$ and $\hat{\epsilon}=1-\hat{\beta}^2$. In this way we guarantee that $\Ex[\theta_{t,m}]=0$ and $\Ex[\theta^2_{t,m}]=1$. To evaluate how the perturbation of the skills affects the algorithms, after the day $d_{\tr{switch}}=40$, we remove the first $m_{\tr{switch}}=5$ players, which are already in the game, and replace them with new players whose skills $\theta_{d_{\tr{switch}},m}$ are generated from zero-mean, unit-variance Gaussian distribution; for $d>d_{\tr{switch}}$ we use again the random walk \eqref{dumped.Markov}. Such a ``switch'' scenario loosely reflects the case of new players joining the online games\footnote{The analogy is admittedly of limited scope because the online games do not care about the total number of players, $M$, to be constant. On the other hand, we do care, because we want to be able apply the \gls{kf} rating as defined in \secref{Sec:KF}} andallows us to evaluate how the algorithms deal with abrupt changes of the skills. The algorithms adjust to this ``switch'' by zeroing the means and adjusting the variances of the newly arrived players, that is, setting $\mu_{t-1,m}\leftarrow0, v_{t-1,m}\leftarrow v_0, m=1,\ldots,m_{\tr{switch}}$, where $t=M d_{\tr{switch}}/2$. For the \gls{kf} rating we also have to zero the covariances, $V_{t-1,m,l}\leftarrow 0, m=1,\ldots,m_{\tr{switch}}, \forall l$, while in the \gls{sskf} rating we recalculate the average variance as $v_{t-1}\leftarrow v_{t-1}+ (v_0 - v_{t-1})m_{\tr{switch}}/M$. The results of binary games, $y_t$, are generated with the probability defined by the Thurston model, \ie the probability of the home win is defined by \begin{align}\label{p.t} p_t=\PR{y_t=1} = \Phi(\boldsymbol{x}_{t}\T\boldsymbol{\theta}_t/\sigma), \end{align} where $\sigma^2$ may be interpreted as the variance of the Gaussian noise added to difference between the skills before the discretization defined in \eqref{y_t.TrueSkill}. Thus, increasing $\sigma$ the observations becomex more ``noisy''. Most of the results are shown for $\sigma=1$ and later we will asses the impact of larger $\sigma$. The performance of the algorithm is measured by the \gls{kl} divergence between the actual distribution of the games outcomes (defined by $p_t$) and the estimated distribution (defined by $L(\beta_t\boldsymbol{x}_{t}\T\boldsymbol{\mu}_{t-1}/s; 1)$) \begin{align}\label{DKL.metric} \tr{D}_{t} = p_t\log\frac{p_t}{L(\beta_t\boldsymbol{x}_{t}\T\boldsymbol{\mu}_{t-1}/s; 1)} + (1-p_t)\log\frac{1-p_t}{L(\beta_t\boldsymbol{x}_{t}\T\boldsymbol{\mu}_{t-1}/s; 0)} \end{align} that can be evaluated here because we know how data is generated. \begin{figure}[bt] \psfrag{beta0.98}{$\beta=0.98$} \psfrag{xlabel}{$d$} \begin{tabular}{cc} \scalebox{0.8}{\includegraphics[width=\sizfs\linewidth]{./figures/Thurston_convergence_KF_5000.eps}} & \scalebox{0.8}{\includegraphics[width=\sizfs\linewidth]{./figures/Bradley-Terry_convergence_KF_5000.eps}}\\ a) & b) \end{tabular} \caption{Average \gls{kl} divergence for different values of $\beta$ and $\epsilon$ used in the \gls{kf} rating algorithm based on a) the Thurston model and b) the Bradley-Terry model. The loosely dashed lines indicate the median and the third quartile (only for $\beta=1$ and a) $\epsilon=0.004$, b) $\epsilon=0.002$).}\label{Fig:start} \end{figure} Of course, $\tr{D}_t$ obtained from randomly generated data is also random, so we show in \figref{Fig:start} its mean obtained from 5000 simulation runs of the \gls{kf} rating algorithm based on the Thurston as well as on the Bradley-Terry models, for different values of $\beta$ and $\epsilon$, with $s=\sigma$, and $v_0=1$. Note that, for the Thurston model, using $v_0=1$ and $s=\sigma$, the same model is used for the data generation and for the rating. To smooth the results, we show the average of all the results obtained in the same day $d$. We observe that the performance of the algorithm depends on $\beta$ being close to the actual value in the data-generation model: once $\beta$ is suitably chosen, the effect of $\epsilon$ is of lesser importance. Nonetheless, the best mean performance is obtained with $\epsilon=\hat{\epsilon}=0.004$ (for the Thurston model) and $\epsilon=0.002$ (for the Bradley-Terry model). These parameters will be also used in other algorithms. To put this (rather limited) importance of $\epsilon$ into perspective, we also show in \figref{Fig:start} the median and the third quartile (loosely dashed lines, for $\beta=1$ and the optimal value of $\epsilon$): it indicates that there is more variability due to the randomness of the metric $\tr{D}_{d}$ than due to the change in $\epsilon$. To answer the question why, using two different models (Thurston and Bradley-Terry) practically the same results are obtained in \figref{Fig:start}a and \figref{Fig:start}b, we first note that the logistic distribution (underlying the Bradley-Terry model with the scale $s_{\tr{L}}$) has the variance equal to $s^2_{\tr{L}}/a$ (where $a\approx 1.6$, is defined after \eqref{rtjt}), while the Gaussian distribution (underpinning the Thurston model with the scale $s_{\tr{G}}$) has the variance $s^2_{\tr{G}}$. By equalizing the second moments of both distributions we obtain the relationship $s_{\tr{L}}= s_{\tr{G}}\sqrt{a} \approx 1.3 s_{\tr{G}}$. In other words, the Thurston model may be approximated with the Bradley-Terry model if we increase the scale by $\sqrt{a}$. On the other hand, from \propref{Prop:SKF.scale} we know that we may turn the table: we might keep the scale $s_{\tr{L}}=1$ and then multiply the parameters $v_0$ and $\epsilon$ by the factor $1/\sqrt{a} \approx 0.78$; since it is close to one, the effect of using the same scale and parameters in different models is merely visible. Nonetheless, using a value of $\epsilon\approx 0.5\hat{\epsilon}$ improves (slightly) the performance. The only remaining element which should be adjusted is the initial uncertainty defined by the variance $v_0$; in the rest of this work, for the Bradley-Terry model we will use $v_0=0.5$. So, not very surprisingly, the similarity of the results for two different models is explained by the similarity of the models which happens due to our decision to use base-$10$ logarithm in the logistic function \eqref{F.Logistic}. \begin{figure}[bt] \begin{tabular}{cc} \scalebox{0.8}{\includegraphics[width=\sizfs\linewidth]{./figures/Thurston.sc=switch-compare-DKL_5000.eps}} & \scalebox{0.8}{\includegraphics[width=\sizfs\linewidth]{./figures/Bradley-Terry.sc=switch-compare-DKL_5000.eps}}\\ a) & b) \end{tabular} \caption{The average \gls{kl} divergence, when using the \gls{kf}, the \gls{vskf}, the \gls{sskf}, the \gls{fskf} and the \gls{sg} rating algorithms; $\beta=1$ and a) the Thurston model with $\epsilon=0.004$ and $v_0=1$; b) the Bradley-Terry model with $\epsilon=0.002$ and $v_0=0.5$.}\label{Fig:Thurston} \end{figure} We implement all the rating algorithms we proposed and comparing them in \figref{Fig:Thurston} we observe that: \begin{itemize} \item Without surprise, the \gls{kf} ensures the best performance for both, the initialization phase (after $d=1$), and the post-switch phase (after $d=40$). \item Rather surprisingly, the \gls{vskf} and the \gls{kf} ratings performs quasi-identically which suggests that the posterior correlation between the skills is not relevant even if the number of players in our example is moderate. \item The \gls{sskf} rating performs very well in the initialization phase because all the players have roughly the same variance and this is the assumption underpinning the algorithm. On the other hand, in the post-switch phase the variance of the players is disparate and then the convergence speed decreases. \item In the \gls{fskf} algorithm we may appreciate the trade-off: to increase the convergence speed we need larger $\ov{v}$, while the performance after convergence is improved with smaller $\ov{v}$. The value $\ov{v}$ which ensures the best performance after convergence may be deduced from the \gls{sskf} algorithm, where we have obtained $v_{T}\approx 0.1$ for the Thurston model and $v_T\approx 0.8$ for the Bradley-Terry model. Note again, that this stays in line with the argument of matching the Gaussian and the logistic distributions: the relation between the posterior variances after convergence is close to $1/\sqrt{a}$. \item The \gls{sg} shares the drawbacks of the \gls{fskf} rating: larger $K$ improves the convergence speed at the expense of poorer performance after convergence. Note again the halving of the step size, $K$, for the Bradley-Terry model: remember, $K$ has the meaning of the variance and thus the same principle of matching the logistic and the Gaussian distributions we mentioned above applies. \end{itemize} Overall, the important conclusions are: \begin{itemize} \item The \gls{vskf} algorithm is the best candidate for simple rating: it exploits the temporal model in the data and does not suffer loss comparing to the \gls{kf} rating, \item Opting for further simplifications leads to some loss where, despite its simplicity, the \gls{sg} rating offers the performance comparable to other \gls{skf} ratings, and \item The model mismatch (applying the algorithms based on the Bradley-Terry model to the data generated using the Thurston model, see \figref{Fig:start}b) does not affect the performance in any significant manner. And while at first it may appear counter-intuitive, we should note that the performance of the algorithms is not evaluated by their ability to estimate the skills, $\boldsymbol{\theta}_t$, but rather by their predictive capability. Thus, using the Bradley-Terry model, the estimate $\boldsymbol{\mu}_t$ may be, indeed, far from the actual value of $\boldsymbol{\theta}_t$, because a different model if fitted to the data, but the prediction is little affected. \end{itemize} \begin{figure}[bt] \begin{center} \scalebox{0.8}{\includegraphics[width=\sizfs\linewidth]{./figures/Thurston.sc=switch-Glicko-TrueSkill-DKL_5000.eps}} \end{center} \caption{The average (solid line) and median (loosely dashed line) \gls{kl} divergence, when using the \gls{vskf} ($\epsilon=0.004$ and $v_0=1$), the TrueSkill ($\epsilon=0.004$ and $v_0=1$) and the Glicko ($\epsilon=0.002$ and $v_0=0.5$) rating algorithms; $\beta=1$.}\label{Fig:TrueSkill.Glicko} \end{figure} We apply now the TrueSkill and the Glicko algorithms to the same data set and show the results in \figref{Fig:TrueSkill.Glicko}, where we observe that the Glicko and the \gls{vskf} algorithms yield practically indistinguishable results. This observations stays in line with the similarity of both algorithms which we observed in \secref{Sec:Glicko}. On the other hand, the TrueSkill algorithms, despite of being based on the same (Thurston) model which was used for data generation, suffers a small loss after convergence. This can be attributed to the adaptation step being increased comparing to the \gls{vskf} algorithm as we already noted in \secref{Sec:TrueSkill}. To put this difference in performance into perspective, we show also the median curve of the metrics; since the latter is much further from the mean than the differences among the algorithms, the ``loss'' of the TrueSkill may have no practical importance. Before moving to the empirical data, we show in \figref{Fig:Thurston.noisy} the results of the \gls{vskf} and the \gls{sg} ratings obtained for different values of $\sigma=s$. Instead of the metric \eqref{DKL.metric}, which we will not be able to calculate in the empirical data, we show here the log-score \begin{align}\label{log.score.definition} \tr{LS}_t = - \sum_{y\in\mc{Y} }\IND{y_t=y} \ell(\boldsymbol{x}_{t}\T\boldsymbol{\mu}_{t-1}/s; y), \end{align} where $\mc{Y}$ is the set of possible game outcomes. We see that increasing $\sigma$, \ie making the results more ``noisy'', the advantage of exploiting the temporal relationship between the skills is lost and the results obtained using the \gls{vskf} rating are very similar to those yield by the \gls{sg} rating. This leads to a cautionary note: if the uncertainty in the observations (that is, the ``noise") is large, the simple algorithms (such a the \gls{sg} rating) may provide satisfactory results. \begin{figure}[bt] \begin{center} \scalebox{0.8}{\includegraphics[width=\sizfs\linewidth]{./figures/Thurston_convergence_KF_vs_SG_5000.eps}} \end{center} \caption{The average log-score obtained using for the Thurston model and the \gls{vskf} ($\epsilon=0.004$ and $v_0=1$) and the \gls{sg} ($K=0.15\sigma$) ratings for different observation noise levels $\sigma$. The black horizontal line indicates the value $H=- \log 0.5\approx 0.69$ which is the entropy of uniformly distributed binary variable, see \eqref{H.definition}.}\label{Fig:Thurston.noisy} \end{figure} \subsection{Empirical data}\label{Sec:NHL} We consider now the empirical results from \begin{itemize} \item The ice-hockey games in the \gls{nhl} in the seasons 2005/06 -- 2014/15 except for the short season 2012/13. In this pre-expansion period, there were $M=30$ teams and the rules leading to the draws were kept the same.\footnote{Starting with the 2005/06 season, draws are not allowed and are resolved through shootouts if the game was tied in the overtime. Starting with the season 2015/16 the number of skaters in the overtime was changed from four to three.} We can thus treat the games as binary if we use the final result, or as ternary (\ie with draws) if we use the regulation-time results (before overtime/shootouts). Each team plays $82$ games so there are $T=1230=41M$ games in each season. \item The football games of the \gls{epl} seasons 2009/10 -- 2018/19. There are $M=20$ teams, each playing $38$ games, thus $T=380$. \item The Americal footbal games of the \gls{nfl} in the seasons 2009/10 -- 2018/19. There are $M=32$ teams, each playing $16$ games, so $T=256$. \end{itemize} In the team games, the \gls{hfa} is present and, in the rating methods it is customary to take it into account by artificially ``boosting'' the skill of the home player (here, team): in all the functions taking $z_t/s$ as the argument we will rather use $z_t/s+\eta$, \eg in \eqref{pdf.y.theta} we use $L(z_t/s+\eta; y_t)$ instead of $L(z_t/s; y_t)$. The \gls{hfa} boost, $\eta$, must be found from data as we show in the following; note also that $\eta$ does not depend on the scale $s$. To consider the ``initialization" period we average \eqref{log.score.definition} over the first $t_\tr{init}$ games \begin{align}\label{LS.init} \ov{\tr{LS}}_\text{init} = \frac{1}{t_\text{init}}\sum_{t=1}^{t_\text{init}} \tr{LS}_t, \end{align} where, $t_\tr{init}=4 M$, which means that, in the first $t_\tr{init}$ games, each team played, on average, 8 times. The performance after ``convergence" is evaluated by averaging \eqref{log.score.definition} over the second half of the season \begin{align}\label{LS.conv} \ov{\tr{LS}}_\tr{final} = \frac{2}{T}\sum_{t=T/2+1}^T \tr{LS}_t. \end{align} Further, we take the mean of \eqref{LS.init} and \eqref{LS.conv} over all seasons considered. We will use the Bradley-Terry model in the binary games (in the \gls{nhl}) and the \gls{hfa}-boost parameter $\eta$ is evaluated as \cite{Szczecinski20} \begin{align} \eta=\log_{10}\frac{f_1}{f_0}, \end{align} where $f_y$ is the estimated frequency of the game outcome $y\in\mc{Y}$. Here, from the nine \gls{nhl} seasons under study we obtain $f_0\approx 0.45$ and $f_1\approx 0.55$, and thus $\eta= 0.08$. For the ternary games we use the Davidson model \eqref{Davidson.L}-\eqref{Davidson.h}, and we estimate the home- and the draw parameters, $\eta$ and $\kappa$, using the strategy shown in \cite{Szczecinski20}, \cite{Szczecinski20c} \begin{align} \eta &= \frac{1}{2}\log_{10} \frac{f_2}{f_0},\\ \kappa &= \frac{f_1}{\sqrt{f_0 f_2}}, \end{align} where, as before, $f_y$ are the frequencies of the events $y\in\set{0,1,2}$ estimated from the games in all seasons considered. These are i) for the \gls{nhl}: $f_0\approx 0.33$, $f_1\approx 0.24$, and $f_2\approx 0.43$, ii) for the \gls{epl}: $f_0\approx 0.29$, $f_1\approx 0.25$, and $f_2\approx 0.46$, and iii) for the \gls{nfl}: $f_0\approx 0.43$, $f_1\approx 0.003$, and $f_2\approx 0.57$. The corresponding values of $\eta$ and $\kappa$ are shown in \tabref{tab:log-score}. We consistently use $\beta=1$ and $s=1$; the parameters $v_0$, $\epsilon$ (for the \gls{vskf} algorithm), and the update step $K$ (for the \gls{sg} algorithm) which yield the best results are shown them in \tabref{tab:log-score}; they were found by scanning the space of admissible values. The log-score results shown in \tabref{tab:log-score} may be compared to the log-score of the prediction based on the frequencies of the events $y_t$, \ie \begin{align}\label{H.definition} H = - \sum_{y\in\mc{Y}} f_y \log f_y, \end{align} which is the entropy calculated from the estimated frequencies. \begin{table}[tb] \centering \begin{tabular}{c | c | c| c |c| c} \multicolumn{2}{c|}{} & NHL & NHL & EPL & NFL\\ \multicolumn{2}{c|}{} & Bradley-Terry & Davidson & Davidson & Davidson\\ \multicolumn{2}{c|}{} & $\eta=0.08$ & $\eta=0.05$, $\kappa=0.63$ & $\eta=0.10$, $\kappa=0.67$ & $\eta=0.06$, $\kappa=5.5\cdot 10^{-3}$\\ \hline \multirow{3}{*}{v-SKF} & ($v_0$, $\epsilon$) & ($0.01$, $3\cdot 10^{-5}$) & ($0.003$, $3\cdot 10^{-5}$) & ($0.04$, $10^{-7}$) & ($0.02$, $10^{-4}$)\\ & $\ov{\tr{LS}}_\tr{init}$ & $0.688$ & $1.063$ & $1.055$ & $0.679$\\ & $\ov{\tr{LS}}_\tr{final}$ & $0.678$ & $1.064$ & $0.974$ & $0.640$\\ \hline \multirow{3}{*}{SG} & ($K$) & ($0.01$) & ($0.003$) & ($0.015$) & ($0.015$)\\ & $\ov{\tr{LS}}_\tr{init}$ & $0.688$ & $1.063$ & $1.052$ & $0.678$\\ & $\ov{\tr{LS}}_\tr{final}$ & $0.678$ & $1.064$ & $0.976$ & $0.641$\\ \hline \multicolumn{2}{c|}{$H$} & $0.688$ & $1.071$ & $1.061$ & $0.700$ \end{tabular} \caption{Log-score obtained in the \gls{nhl}, the \gls{nfl}, and the \gls{epl} games using the the \gls{vskf} and the \gls{sg} algorithms ( the \gls{kf} and the \gls{kf} ratings yield the same results). The entropy, $H$, calculated from \eqref{H.definition} is shown as a reference. The Bradley-Terry model corresponds to the binary games (in \gls{nhl}), while the Davidson model takes into account the ternary outcomes. Due to a very small frequency of draws in the \gls{nfl}, the results are practically binary but the presence of the draws affects the entropy which exceeds the limit for the binary variable. } \label{tab:log-score} \end{table} The conclusions drawn from the synthetic data also hold here: the performance of the \gls{vskf} and the \gls{kf} algorithms is virtually the same. The \gls{sg} rating is taken as the representative of other simplified algorithms. We observe in \tabref{tab:log-score} that the predictions in the \gls{nhl} are merely better than the entropy and, referring to \figref{Fig:Thurston.noisy} we might attribute it to the ``noisy'' game outcomes, which would also explain why the results produced by the \gls{vskf} and the \gls{sg} algorithms are virtually the same, and why we cannot see any differences even in the initialization phase of the algorithm. By the same token, we can say that the noise decreases in \gls{nfl} results, and more so in the \gls{epl} ones: so we can distinguish between the performance of the initialization and after the convergences. Yet, the improvement due to the use of the \gls{vskf} and the \gls{kf} algorithms is still negligible. This also can be intuitively understood from the parameters we found to minimize the average log-score. Note that, for the \gls{epl} we use the variance $v_0=0.04$ and $s=1$, but, applying \propref{Prop:SKF.scale} we might equally well use $v_0=1$ and $s=5$; the latter scenario may be related to a model with large outcome noise $\sigma$ as shown in \figref{Fig:Thurston.noisy}. The difference between the \gls{vskf} and the \gls{sg} algorithms may be also appreciated by inspecting the temporal evolution of $\boldsymbol{\mu}_t$, shown in \figref{Fig:Trajectories} and obtained for the 2009/10 \gls{epl} season. While the differences in the log-score results shown in \tabref{tab:log-score} are rather small, we can appreciate that the skills estimated using the \gls{vskf} converge very fast to the final values (after 50 days, approx.), to which the \gls{sg} rating also converges but the time required is longer (200 days, approx.); this effect is particularly notable for the teams with extreme values of the means, that is, for very strong, as well as very weak teams. \begin{figure} \centering \scalebox{0.8}{\includegraphics[width=\sizfs\linewidth]{figures/DavidsonEPL2009_trajectories.eps}} \caption{Evolution of the (selected) means $\mu_\tau$ indexed with the time-stamp $\tau(t)$ in the 2009-10 \gls{epl} season obtained using the \gls{vskf} (solid lines) and the \gls{sg} (dashed lines) rating algorithms.} \label{Fig:Trajectories} \end{figure} \section{Conclusions}\label{Sec:Conclusions} In this work we propose a class of online Bayesian rating algorithms for one-on-one games which can be used with any skills-outcome model and which encompasses the case of the group sports, a case typically encountered in eSports. By using various simplifications to represent the posterior covariance matrix in the Gaussian distributions, we obtain different algorithm in the same class. Deriving such a generic algorithms should not only streamline the passage from the skills-outcome model to the actual online algorithm but also provides a fresh insight into the relationship between the existing rating methods such as the Elo, the Glicko and the TrueSkill algorithms. Their differences and similarities are discussed and we demonstrate that, the Glicko and the TrueSkill algorithms may be seen as instances of our generic algorithms. This is an interesting observation in its own right as it unifies the view on these two popular algorithms, which, even if derived from different principles, are now shown in a common framework. We also provide a new insight into the interpretation of the Elo algorithm. We show numerical examples applied to the synthetic-- and the empirical data, which provide guidelines about the conditions under which the algorithms should be used. In particular, our results indicate that the differences between the \gls{kf} rating (with a full representation of the covariance matrix) and the \gls{vskf} rating (only diagonal of the covariance is preserved) are negligible. The \gls{vskf} is, in fact, very similar to the Glicko and the TrueSkill algorithms. We show that further simplification of the covariance matrix may be counterproductive and the simple, \acrfull{sg} rating may be then a competitive solution, even though it cannot be treated as a Bayesian algorithm as it only provides the point estimate of the skills. The simple \gls{sg}-based rating is indeed appealing and particularly useful in very noisy data, \ie when the prediction of the game outcomes cannot be reliably inferred from the estimated skills. These observations, made in the synthetic setup are then confirmed in empirical data, where we analyse the game results from the professional hockey, the American football, and the association football games. Indeed, the differences between the \gls{vskf} and the \gls{sg} results are, at best, small but notable (in football, where the data is relatively not noisy) and, at worst, negligible (in hockey, where the game outcomes are very noisy). In fact, the very concept of observational noise in the sport outcomes received very little attention in the literature and we believe studying it in more depth is an interesting research venue. Overall conclusion regarding the applicability of the algorithms is that, in reliable (not noisy) data, the online Bayesian rating algorithm may provide improved convergence, and potential skill-tracking capability. On the other hand, the reality of sport competition outcomes may not conform to these requirements and, when dealing with the noisy observations, the simple algorithms, such as the Elo rating (which is a an instantiation of the \gls{sg} rating) may be equally useful. \begin{appendices} \section{Proof of \propref{Prop:DKL}}\label{Proof:DKL} Our goal is to find the Gaussion distribution $\tilde{f}(\boldsymbol{\theta})=\mc{N}(\boldsymbol{\theta};\boldsymbol{\mu},\boldsymbol{V})$ under the form \eqref{covariance.cases} minimizing the \gls{kl} divergence with a given distribution $f(\boldsymbol{\theta})$ \begin{align} D_\tr{KL}\big(f|| \tilde{f}\big) &=\int f(\boldsymbol{\theta})\log\frac{f(\boldsymbol{\theta})}{\tilde{f}(\boldsymbol{\theta})} \dd \boldsymbol{\theta}\\ \label{DKL.mu.V} &\propto\frac{1}{2}\log\tr{det}(2\pi\boldsymbol{V}) +\frac{1}{2}\int f(\boldsymbol{\theta})(\boldsymbol{\theta}-\boldsymbol{\mu})\T\boldsymbol{V}^{-1}(\boldsymbol{\theta}-\boldsymbol{\mu})\dd \boldsymbol{\theta}. \end{align} Gradient of \eqref{DKL.mu.V} with respect to $\boldsymbol{\mu}$ is zeroed for $\boldsymbol{\mu}=\Ex[\boldsymbol{\theta}]$, and this, irrespectively of the form of $\boldsymbol{V}$, this proves \eqref{bmu.yt}. This is a well-known result, as well, as the one which says that, to minimize \eqref{DKL.mu.V} we also have to use $\boldsymbol{V}=\tr{Cov}[\boldsymbol{\theta}]=\Ex[(\boldsymbol{\theta}-\boldsymbol{\mu})\T(\boldsymbol{\theta}-\boldsymbol{\mu})]$, this is the claim in \eqref{bV.yt}. Now assume that we use the vector-covariance model, \ie we have to find $\tilde{f}(\boldsymbol{\theta})=\mc{N}(\boldsymbol{\theta};\boldsymbol{\mu},\tr{diag}(\boldsymbol{v}))$. Then, \eqref{DKL.mu.V} becomes \begin{align}\label{DKL.vector} D_\tr{KL}\big(f|| \tilde{f}\big) \propto \frac{1}{2}\sum_{m=1}^M \log v_m + \sum_{m=1}^M \frac{\tr{Var}[\theta_m]}{2v_m}, \end{align} where $\tr{Var}[\theta_m]$ is the variance of $\theta_m$. Zeroing the derivative of \eqref{DKL.vector} with respect to $v_m$ yields $v_m=\tr{Var}[\theta_m]$, that is, $\boldsymbol{v}=\tr{di}(\tr{Cov}[\boldsymbol{\theta}])$ which proves \eqref{bv.t.DKL}. Finally, if we adopt scalar-covariance model $\tilde{f}(\boldsymbol{\theta})=\mc{N}(\boldsymbol{\theta};\boldsymbol{\mu},v\boldsymbol{I})$, \eqref{DKL.vector} becomes \begin{align}\label{DKL.scalar} D_\tr{KL}\big(f|| \tilde{f}\big) \propto \frac{M}{2} \log v + \frac{1}{2v} \sum_{m=1}^M \tr{Var}[\theta_m], \end{align} whose derivative with respect to $v$ is zeroed if $v=\frac{1}{M}\sum_{m=1}^M\tr{Var}[\theta_m]$, and this proves \eqref{v.t.DKL}. \section{Proof of \propref{Prop:SKF.scale}}\label{Proof:SKF.scale} For brevity, let us use the symbol $\check{(\cdot)}$ to denote only the scaled variables, \eg $\check{\boldsymbol{V}}\equiv\boldsymbol{V}(s,s^2v_0,s^2\epsilon)$; the unscaled ones are used without the symbols, \eg $\boldsymbol{V}\equiv \boldsymbol{V}(1,v_0,\epsilon)$. The proof is done by induction: by construction, the initialization satisfied the Proposition, \ie $\check{\boldsymbol{V}}_0=s^2\boldsymbol{I}=s^2\boldsymbol{V}_0$, and we suppose that $\check{\boldsymbol{V}}_{t-1}=s^2 \boldsymbol{V}_{t-1}$ and $\check{\boldsymbol{\mu}}_{t-1}=s \boldsymbol{\mu}_{t-1}$ hold. Then we must have $\beta_{t}\boldsymbol{x}_t\T\check{\boldsymbol{\mu}}_{t-1}=s\beta_{t}\boldsymbol{x}_t\T\boldsymbol{\mu}_{t-1}$, and $g_t$ and $h_t$ are not affected by scaling. Then we also have $\check{\ov{\boldsymbol{V}}}_t=\beta^2_t\check{\boldsymbol{V}}_{t-1}+s^2\epsilon_t\boldsymbol{I} =s^2 \ov{\boldsymbol{V}}_t$ and $\check{\omega}_t=s^2\omega_t$, so \eqref{oneshot.mean.KF} may be written as \begin{align} \check{\boldsymbol{\mu}}_t &=\beta_{t} \check\boldsymbol{\mu}_{t-1} + \check{\ov{\boldsymbol{V}}}_t\boldsymbol{x}_t\frac{sg_t}{s^2+h_t\check\omega_t}\\ &= \beta_{t} s \boldsymbol{\mu}_{t-1} + s^2\ov{\boldsymbol{V}}_t\boldsymbol{x}_t\frac{sg_t}{s^2+s^2h_t\omega_t} =s\boldsymbol{\mu}_{t}, \end{align} and \eqref{Vt.update.KF}, as \begin{align} \check{\boldsymbol{V}}_t &=\check{\ov{\boldsymbol{V}}}_{t-1} + \check{\ov{\boldsymbol{V}}}_{t-1}\boldsymbol{x}_t\boldsymbol{x}_t\T\check{\ov{\boldsymbol{V}}}_{t-1}\frac{h_t}{s^2+h_t\check\omega_t}\\ &= s^2\ov{\boldsymbol{V}}_{t-1} + s^4\ov{\boldsymbol{V}}_{t-1}\boldsymbol{x}_t\boldsymbol{x}_t\T\ov{\boldsymbol{V}}_{t-1}\frac{h_t}{s^2+s^2h_t\omega_t}=s^2 \boldsymbol{V}_t. \end{align} This ends the proof for the \gls{kf} algorithm. By extension all other algorithms derived from the \gls{kf} algorithm must satisfy the claims of \propref{Prop:SKF.scale}, which may be also proven with the steps shown above applied to the \gls{vskf}, \gls{sskf}, and \gls{fskf} algorithms. \end{appendices} \input{./main.arxiv.bbl} \end{document}
{ "attr-fineweb-edu": 2.007812, "attr-cc_en_topic": 0, "domain": "arxiv" }
BkiUdYk4uzlhXkpzXkM0
\section{Introduction} In recent years physicists have started to investigate time series, resulting from successive matches in sports leagues. In this context several basic questions can be asked. Is the champion always the best team? \cite{ben1,ben2,buch} How many matches have to be played in a league so that (nearly) always the best team becomes the champion? \cite{ben1,ben2} Does the distribution of goals follow a Poisson distribution and what are possible interpretations of the observed deviations? \cite{tolan,janke}. In those studies it has been attempted to have a simplified view on complex processes such as soccer matches in order to extract some basic features like, e.g., scaling laws. Some empirical observations such as fat tails in the goal distributions can be related to other fields such as finance markets \cite{Stanley} and have been described, e.g., by the Zipf-Mandelbrot law \cite{Malacarne}. Actually, also in more general context the analysis of sports events, e.g. under the aspect of extreme value statistics, has successfully entered the domain of physicists activities \cite{Suter}. A more specific view has been attempted in detailed studies of the course of a soccer season. In one type of models; see e.g. Refs. \cite{Lee97,Dixon97,Dixon98,Rue00}, one introduces different parameters to characterize a team (e.g. offensive fitness) which can be obtained via Monte-Carlo techniques. These parameters are then estimated based on a Poisson assumption about the number of goals of both teams. Within these models, which were mainly applied to the English Premier league, some temporal weighting factors were included to take into account possible time variations of the different team parameters. These models are aimed to make predictions for the goals in individual matches. In \cite{Rue00} it is reported that based on a complex fitting procedure the time scale of memory loss with respect to the different variables is as short as 100 days. A second type of model assumes just one fitness parameter for each team and the outcome (home win, draw, away win) is then predicted after comparing the difference of the team fitness parameters with some fixed parameters \cite{Koning00}. The model parameters are then estimated based on the results of the whole season. Here, no temporal evolution of the team parameter is involved. This very simple model has been used in \cite{Dobson03} to check whether the outcome of one match influences the outcome of the successive match. Of course, this type of results is only relevant if the used model indeed reflects the key ingredients of the real soccer events in a correct way. It has been also attempted to analyse individual soccer matches on a very detailed level, e.g., to estimate the effect of tactical changes \cite{Hirotsu} The approach, taken in this work, is somewhat different. Before devising appropriate models, which will be done in subsequent work, we first attempt to use a model-free approach to learn about some of the underlying statistical features of German soccer (1. Bundesliga). However, the methods are general enough so that they can be easily adapted to different soccer leagues or even different types of sports. The analysis is exclusively based on the knowledge of the final results of the individual matches. Since much of the earlier work in this field originates from groups with a statistics or economy background, there is some room for the application of complimentary concepts, more common in the physics community. Examples are finite-size scaling, the analysis of 2-time correlation functions or the use of more complex correlation functions to unravel the properties of subensembles, as used, e.g., in previous 4D NMR experiments \cite{Klaus,Wilhelm,epl}. Four key goals are followed in this work. First, we ask about appropriate observables to characterize the overall fitness of a team. Second, using this observable we analyze the temporal evolution of the fitness on different time scales. Third, we quantify statistical and systematic features for the interpretation of a league table and derive some general properties of prediction procedures. Forth, we clarify the validity of some soccer myths which are often used in the typical soccer language, including serious newspapers, but never have been fully checked about their objective validity. Does something like a winning or losing streak exist? Do some teams have a specific home fitness during one season? The paper is organized as follows. In Sect.II we briefly outline our data basis. The discussion of the different possible measures of the overall fitness is found in Sect.III. In the next step the temporal evolution of the fitness is analyzed (Sect.IV). In Sect.V it is shown how the systematic differences in the team fitness can be separated from the statistical effects of soccer matches and how a general statistical characterization can be performed. In Sect.VI we present a detailed discussion of some soccer myths. Finally, in Sect.VII we end with a discussion and a summary. In two appendices more detailed results about a few aspects of our analysis are presented. \section{Data basis} We have taken the results of the German Bundesliga from http://www.bundesliga-statistik.de. For technical reasons we have excluded the seasons 1963/64, 1964/65 and 1991/92 because these were the seasons where the league contained more or less than 18 teams. Every team plays against any other team twice the season, once at home and once away. If not mentioned otherwise we have used the results starting from the season 1987/88. The reason is that in earlier years the number of goals per season was somewhat larger, resulting in slightly different statistical properties. \section{Using goals or points to measure the team fitness?} \subsection{General problem} Naturally, a strict characterization of the team fitness is not possible because human behavior is involved in a complex manner. A soccer team tries to win as many matches as possible during a season. Of course, teams with a better fitness will be more successful in this endeavor. As a consequence the number of points $P$ or the goal difference $\Delta G$ can be regarded as a measure for the fitness. In what follows all observables are defined as the average value per match. In Sect.IV it is shown that apart from fluctuations the team fitness remains constant during a season. Thus, in a hypothetical season where teams play infinitely often against each other and thus statistical effects are averaged out the values of $P$ indeed allow a strict sorting of the quality of the teams. Thus, $P$ is a well-defined fitness measure for the team fitness during a season. Naturally, the same holds for $\Delta G$ if the final ranking would be related to the goal difference. Since in reality the champion is determined from the number of points one might tend to favor $P$ to characterize the team fitness. In any event, one would expect that the rankings with respect to $\Delta G$ or $P$ are identical in this hypothetical limit. Evidently, in a match the number of goals scored or conceded by a team is governed by many unforeseen effects. This is one of the reasons why soccer is so popular. As a consequence, the empirical values of $P$ or $\Delta G$ obtained, e.g., after a full season will deviate from the limiting values due to the residual fluctuations. This suggests a relevant criterion to distinguish between different observables. Which observable displays a minimum sensitivity on statistical effects? As will be shown below, this criterion favors the use of $\Delta G$. \subsection{Distribution of $\Delta G$} In Fig.\ref{deltag_dist} we display the distribution of $\Delta G$ after one quarter of a season (thereby averaging over all quarters) and at the end of the season. The first case corresponds to $N=9$ (first and third quarter) or $8$ (second and fourth quarter), the second case to $N=34$. Here $N$ denotes the number of subsequent matches, included in the determination of $\Delta G$. Both distributions can be described as a Gaussian plus an additional wing at large $\Delta G$. Fitting each curve by a sum of two Gaussians, the amplitude ratio for the full-season distribution implies that there are on average 2-3 teams with an exceptional good fitness. Note that the distribution of $\Delta G$ is significantly narrower for larger $N$ and also for $N=34$ one expects some finite statistical contribution to the width of the distribution. Qualitatively, this reflects the statistical nature of individual soccer matches. Naturally, the statistical contribution becomes less relevant when averaging over more matches. This averaging effect will be quantified in Sect.V. \begin{figure} \includegraphics[width=7cm]{deltag_dist.eps} \caption{\label{deltag_dist} The distribution of $\Delta G$ after one quarter of the season and after a full season. Included is a fit with two Gaussian functions for both distributions. For the full-season distribution the intensity ratio of both Gaussian curves is approx. 1:6. The correlation coefficient for the latter is 0.985. } \end{figure} \subsection{Correlation analysis} A natural question to ask is whether the distribution for $N=34$ can be explained under the assumption that all teams have an identical fitness. If this is the case the outcome of each match would be purely statistical and no correlation between the goal differences of a team in successive matches could be found. To check this possibility in a simple manner we correlate the value of $\Delta G$, obtained in the first half of the season ($\Delta G_1)$, with the value of the second half of the same team ($\Delta G_2)$. The results, collected for all years and all teams (per year) are shown in Fig.\ref{deltag_basic}. One observes a significant correlation. Thus, not surprisingly, there is indeed a variance of the fitness of different teams. \begin{figure} \includegraphics[width=7cm]{deltag_basic.eps} \caption{\label{deltag_basic} The correlation of $\Delta G$ for the first and the second half of the season. Included are the respective averages together with the standard deviation which on average is 0.51. Furthermore an overall regression line is included which has a slope of 0.53. } \end{figure} For a quantification of the correlation one can use the Pearson correlation coefficient \begin{equation} c_P(M_1,M_2) = \frac{<(M_1 - <M_1>)(M_2 - <M_2>)>}{\sigma_{M,1} \sigma_{M,2}} \end{equation} to correlate two distributions $M_1$ and $M_2$. For the present problem it yields $0.55 \pm 0.03$. The error bar has been determined by calculating $c_P(M_1,M_2)$ individually for every year and then averaging over all years. This procedure is also applied in most of the subsequent analysis and allows a straightforward estimation of the statistical uncertainty. The average value $\langle \Delta G_2 \rangle$ can be interpreted as the best estimation of the fitness, based on knowledge of $\Delta G_1$. Note that the variance of the distribution of $\Delta G_2$ for every $\Delta G_1$ is basically independent of $\Delta G_1$ and is given by 0.51. There is a simple but on first view astonishing observation. It turns out that a team with a positive $\Delta G$ in the first half will on average also acquire a positive $\Delta G$ in the second half, but with a smaller average value. This is reflected by the slope of the regression line smaller than unity. This observation is a manifestation of the regression toward the mean \cite{Stigler}, which, however, is not always taken into account \cite{buch}. Qualitatively, this effect can be rationalized by the observation that a team with a better-than-average value of $\Delta G$ very likely has a higher fitness but, at the same time, on average also had some good luck. This statistical bias is, of course, not repeated during the second half of the season. For a stationary process $\Delta G$ has the same statistical properties in the first and the second half. Then the slope of the regression line is identical to the correlation coefficient (here: 0.53 vs. 0.55). In a next step we have taken the observable $p(\Delta G = 2)$ which describes the probability that a team wins a match with a goal difference of exactly two. Of course, this is also a measure of the fitness of the team but intuitively one would expect a major intrinsic statistical variance which should render this observable unsuited to reflect the team fitness for the real situation of a finite season. One obtains a correlation coefficient of 0.19. In agreement with intuition one indeed sees that observables which are strongly hampered by statistical effects display a lower correlation coefficient. Stated differently, the value of $c_p(M_1,M_2)$ can be taken as a criterion how well the observable $M$ reflects the fitness of a team. This statement is further corroborated in Appendix I on the basis of a simple model calculation. In particular it is shown that this statement holds whether or not the team fitness changes during a season. We have repeated the analysis for the value of $P$, applying the present rule (3 points for a win, 1 point for a draw and 0 for a loss) to all years. The results, however, are basically identical if using the 2-point rule. Here we obtain $0.49 \pm 0.03$ which is smaller than the value obtained for $\Delta G$. One might argue that both values can still agree within statistical errors. However, since the variation from season to season is very similar for both correlation factors the difference is indeed significant. A detailed statistical analysis yields $c_P(\Delta G_1,\Delta G_2) - c_P(P_1,P_2) = 0.06 \pm 0.015$. \begin{table} \centering \begin{tabular}[t]{|c|c|}\hline & $c_p$ \\ \hline $\Delta G$ & $0.55\pm 0.035$ \\ \hline $P$ & $0.49 \pm 0.035$ \\\hline $p(\Delta G)=2$ & $0.19 \pm 0.06$ \\ \hline \end{tabular} \caption{ Pearson correlation coefficients for different observables.} \label{tab1} \end{table} How to rationalize this difference? A team playing 1:0 gets the same number of points than a team winning 6:0. Whereas in the first case this may have been a fortunate win, in the second case it is very likely that the winning team has been very superior. As a consequence the goal difference may identify very good teams whereas the fitness variation among teams with a given number of points is somewhat larger. Actually, using $\Delta G_1$ to predict $P_2$ is also more efficient than using $P_1$ ($c_P(\Delta G_1,P_2) > c_P (P_1,P_2)$). One might wonder whether the most informative quantity is a linear combination of $\Delta G$ and $P$. Indeed the optimized observable $\Delta G + 0.3 P$ displays a larger value of $c_P$ than $\Delta G$ alone. The difference, however, is so small ($\Delta c_P \approx 0.001$) that the additional information content of the points can be totally neglected. As a conclusion a final ranking in terms of goals rather than points is preferable if one really wants to identify the strongest or weakest teams. \section{Temporal evolution of the fitness} Having identified $\Delta G$ as an appropriate measure for the team fitness one may ask to which degree the team fitness changes with time. This will be analyzed on three different time scales, now using all data starting from 1965/66. First we start with variations within a season. One may envisage two extreme scenarios for the time evolution of the fitness during a season: First a random walk in fitness-space, second fluctuations around fixed values. These scenarios are sketched in Fig.\ref{sketch_time}. \begin{figure} \includegraphics[width=7cm]{sketch_time.eps} \caption{\label{sketch_time} Two extreme scenarios for the time evolution of the fitness during a season. (a) The fitness performs a random-walk dynamics under the only constraint that the fitness distribution of all teams is (roughly) stationary. (b) The fitness of each team fluctuates around a predefined value which is constant for the whole season. } \end{figure} To quantify this effect we divide the season in four nearly equal parts (9 matches, 8 matches, 9 matches, 8 matches), denoted quarters. The quarters are enumerated by an index from 1 to 4. In the random-walk picture one would naturally expect that the correlation of quarters $1$ and $m$ ($m=2,3,4$) is the stronger the smaller the value of $m$ is. For the subsequent analysis we introduce the variable $n=m-1$, indicating the time lag between both quarters. In contrast, in the constant-fitness scenario no dependence on $n$ is expected. The correlation factors, denoted $c_q(n)$, are displayed in the central part of Fig.\ref{viertel_corr}. To decrease the statistical error we have averaged over the forward direction (first quarter with $m=n+1$-th quarter) and the time-reversed direction (last quarter with $m=4-n$-th quarter). Interestingly, no significant dependence on $n$ is observed. The correlation between the first and the fourth quarter is even slightly larger than between the first and the second quarter, albeit within the error bars. Thus, the hypothesis that the fitness remains constant during a season (apart from short-ranged fluctuations) is fully consistent with the data. Of course, because of the residual statistical uncertainties of the correlations, one cannot exclude a minor systematic variation of the fitness. \begin{figure} \includegraphics[width=7cm]{viertel_corr.eps} \caption{\label{viertel_corr} The correlations between quarters, involving the comparison between subsequent seasons. $n$ denotes the difference between the quarter indices. For a closer description see text.} \end{figure} This analysis can be extended to learn about a possible fitness variation when comparing one season with the next or the previous season. More specifically, we correlate the fitness in the first quarter of a given season with the quarters $m=5,6,7,8$ in the next season and with the quarters $m=-3,-2,-1,0$ and the previous season and plot it again as function of $n=m-1$. The results are also included in Fig.\ref{viertel_corr}. Interestingly, there is a significant drop of correlation which, consistent with the previous results, does not change during the course of the next or the previous season. Thus it is by far the summer break rather than the time during a season where most changes happen to the fitness of a team. The very fact that the correlation to last year's result is weaker than present year's result has been already discussed in \cite{goddard}, based on a specific model analysis. Finally, we have analysed the loss of correlation between seasons $i$ and $i+n$. In order to include the case $n=0$ in this analysis we compared $\Delta G$, determined for the first and the second halves of the season. Thus, for the correlation within the same season one obtains one data point, for the correlation of different seasons one obtains four data points which are subsequently averaged. $c_y(n)$ denotes the corresponding Pearson correlation coefficient, averaged over all initial years $i$. We checked that for $n > 0$ we get the same shape of $c_y(n)$ (just with larger values) when full-year correlations are considered. Of course, when calculating the correlation coefficient between seasons $i$ and $i+n$ one only takes into account teams which are in the Bundesliga in both years. However, even for large time differences, i.e. large $n$, this number is significant (e.g. the number of teams playing in the first season, analyzed in this study, and the season 2007/08 is as large as 11). This already indicates that, given the large number of soccer teams in Germany which might potentially play in the Bundesliga, a significant persistence of the fitness is expected although many of these teams in between may have been briefly relegated to a lower league. The results are shown in Fig.\ref{jahr_corr2}. $c_y(n)$ displays a fast decorrelation for short times which slows down for longer times. To capture these two time-regimes we have fitted the data by a bi-exponential function (numbers are given in the figure caption). This choice is motivated by the fact that this is maybe the simplest function which may quantify the $n$-dependence of $c_y(n)$. The short-time loss has a time scale of around 2 years. This effect, however, only has an amplitude of around 2/5 as compared to the total. The remaining loss of correlation occurs on a much longer scale (around 20-30 years). Obviously, there exist fundamental properties of a team such as the general economic situation which only change on extremely long time scales given the short-range fluctuations of a team composition. As mentioned above, this long-time correlation is also reflected by the small number of teams which during the last decades have played a significant time in the Bundesliga. \begin{figure} \includegraphics[width=7cm]{jahr_corr3.eps} \caption{\label{jahr_corr2} The fitness correlation when comparing $\Delta G$ for two seasons which are $n$ years apart. The analysis is based on the comparison of half-seasons (see text for more details). The data are fitted by $c_y(n) = 0.22\exp(-n/1.7) + 0.34 \exp(-n/27)$. } \end{figure} \section{Statistical description of a soccer league} \subsection{General} Here we explicitly make use of the observation that the fitness does not change during the season. Actually, in this Section we will report another supporting piece of evidence for this important fact. Hypothetically, this fitness could be obtained "experimentally" if a season would contain an infinite number of matches between the 18 teams. Then, the fitness could be identified as the observable $\Delta G (N \rightarrow \infty)$ (abbreviated $\Delta G(\infty)$). The specific value for team $i$ is denoted $\Delta G_i (\infty)$. We already know from the discussion of Fig.\ref{deltag_basic} that the values $\Delta G_i (\infty)$ are distributed. As a consequence the variance of $\Delta G(\infty)$, denoted $\sigma^2_{\Delta G}$, is non-zero. Although it cannot be directly obtained from the soccer table (because of the finite length of a season) it can be estimated via appropriate statistical means, as discussed below. Because the number of goals and the width of the distribution of $\Delta G$ somewhat decreased if comparing the years starting from the season 1987/88 with the earlier years, we restrict the analysis in this section to the latter time regime. \subsection{Estimation of the statistical contribution} Formally, the omnipresence of statistical effects can be written as \begin{equation} \label{gn_def} \Delta G_i(N) = \Delta G_i(\infty) + \Delta G_{i,stat}(N). \end{equation} In physical terms this corresponds to the case of a biased random walk, i.e. a set of particles, each with a distinct velocity (corresponding to $(\Delta G_i(\infty))$) and some diffusion contribution (corresponding to $\Delta G_{i,stat}(N)$). We note in passing that to a good approximation the amplitude of the statistical contribution does not depend on the value of the fitness, i.e. the index $i$ in the last term of Eq.\ref{gn_def} can be omitted. Otherwise, the variance in Fig.\ref{deltag_basic} would depend on the value of $\Delta G_1$. Squaring Eq.\ref{gn_def} and averaging over all teams one can write \begin{equation} \label{statsum} \sigma^2_{\Delta G (N)} = \sigma^2_{\Delta G} + \sigma^2_{\Delta G (N),stat} \end{equation} where the variances of the respective terms have been introduced. $\sigma^2_{\Delta G (N),stat}$ is expected to scale like $1/N$ and will disappear in the limit $N \rightarrow \infty$. Thus, $\sigma^2_{\Delta G}$ can be extracted by linear extrapolation of $\sigma^2_{\Delta G (N)}$ in a $1/N$-representation. We have restricted ourselves to even values of $N$ in order to avoid fluctuations for small $N$ due to the differences between home and away matches. To improve the statistics we have not only used the first $N$ matches of a season but used all sets of $N$ successive matches of a team for the averaging. This just reflects the fact that any $N$ successive matches have the same information content about the quality of a team. One can clearly see in Fig.\ref{fitness_nall} that one obtains a straight line in the $1/N$-representation for all values of $N$. We obtain \begin{equation} \label{eqs2} \sigma^2_{\Delta G(N)} = 0.215 + \frac{3.03}{N}, \end{equation} i.e. $\sigma^2_{\Delta G} = 0.215$ and $\sigma^2_{\Delta G (N),stat}= 3.03/N$. Generally speaking, the excellent linear fit in the $1/N$-representation shows again that the team fitness remains stable during the season. Otherwise one would expect a bending because also the first term in Eq.\ref{statsum} would depend on $N$; see again Appendix I for a more quantitative discussion of this effect. Of course, for this statement it was important to include only {\it successive} matches of a team for the statistical analysis. \begin{figure} \includegraphics[width=7cm]{fitness_nall2.eps} \caption{\label{fitness_nall} The variance of the distribution of $\Delta G(N)$, averaged over all years. The straight line is a linear fit. } \end{figure} In Fig.\ref{fitness_stat} the relative contribution of the statistical effects in terms of the variance, i.e. $\sigma^2_{\Delta G (N),stat}/( \sigma^2_{\Delta G (N),stat} + \sigma^2_{\Delta G })$ is shown as a function of $N$. The result implies that, e.g., after the first match of the season ($N=1$) approx. 95\% of the overall variance is determined by the statistical effect. Not surprisingly, the table after one match may be stimulating for the leading team but has basically no relevance for the rest of the season. For $N \approx 14$ the systematic and the statistical effects are the same. Interestingly, even at the end of the season the statistical contribution in terms of its contribution to the total variance is still as large as 30\%. \begin{figure} \includegraphics[width=7cm]{fitness_stat.eps} \caption{\label{fitness_stat} Statistical contribution to the overall variance after $N$ matches. Included is the analysis for the goal differences as well as for the points.} \end{figure} Repeating the same analysis for the number of points $P$ yields \begin{equation} \label{point} \sigma^2_{P(N)}\approx 0.08 + \frac{1.7}{N}. \end{equation} The resulting plot of $\sigma^2_{P(N),stat}/( \sigma^2_{P(N),stat} + \sigma^2_{P})$ is again displayed in Fig.\ref{fitness_stat}. Now it takes even $N=22$ matches until the systematic effects start to be dominant. At the end of the season the statistical contribution is as large as 36\%. This shows again that $\Delta G$ is a better measure for the fitness because then the random component in the final ranking is somewhat smaller. \subsection{Prediction of team fitness: General framework} The previous analysis has shown that even for $N=34$ there still exists a significant random contribution. The next goal is to estimate in a statistically consistent way from knowledge of $\Delta G(N)$ (e.g. the final scores at the end of the season) the team fitness. Formally, one wants to determine the conditional probability function $p(\Delta G(\infty) | \Delta G(N))$. This can be determined by using the Bayes theorem \begin{equation} \label{Bayes} p(\Delta G(\infty) | \Delta G(N)) \propto p(\Delta G(N) | \Delta G(\infty))) q(\Delta G(\infty)) \end{equation} Here $p(\Delta G(N) | \Delta G(\infty))$ is fully determined via Eq.\ref{gn_def} and corresponds to a Gaussian with variance $\sigma^2_{\Delta G (N),stat}$. The function $q(\Delta G(\infty))$ describes the a priori probability for the team fitness. This distribution has been already discussed in Fig.\ref{deltag_dist}. To first approximation we saw a Gaussian behavior with small but significant deviations. One can show that a strict linear correlation between the estimated fitness (or the behavior in the second half of the season) and $\Delta G(N)$ is fulfilled for a Gaussian distribution $q(\Delta G(\infty))$. Since to a good approximation a linear correlation was indeed observed in Fig.\ref{deltag_basic}, for the subsequent analysis we neglect any deviations from a Gaussian by choosing $q(\Delta G(\infty)) \propto \exp(-\Delta G(\infty)^2/2\sigma^2_{\Delta G})$. Of course, for a more refined analysis the non-Gaussian nature, displayed in Fig.\ref{deltag_dist}, could be taken into account. After reordering of the Gaussians in Eq.\ref{Bayes} one obtains after a straightforward calculation \begin{equation} \label{cond} p(\Delta G(\infty) | \Delta G(N)) \propto \exp[-(\Delta G(\infty) - a_N \Delta G(N))^2/2\sigma^2_{e,N}). \end{equation} with \begin{equation} \label{pred_a} a_N = \frac{\sigma^2_{\Delta G} }{\sigma^2_{\Delta G} + \sigma^2_{\Delta G(N),stat}} \end{equation} and \begin{equation} \sigma_{e,N}^2 = \frac{\sigma^2_{\Delta G (N),stat}}{1+ \sigma^2_{\Delta G (N),stat}/\sigma^2_{\Delta G }}. \end{equation} As discussed in the context of Fig.\ref{deltag_basic} $a_N$ is identical to the Pearson correlation coefficient when correlating two subsequent values of $\Delta G$, each based on $N$ matches. From Eq.\ref{eqs2} one obtains $a_{N=17} = 0.55$ and $\sigma_{e,N=17}^2 = 0.097$. As expected $a_N$ is identical to $c_P(\Delta G_1,\Delta G_2)$ and within statistical uncertainties identical to the slope of 0.53 in Fig.\ref{deltag_basic}. Finally, we apply these results to the interpretation of the Bundesliga table at the end of the season, i.e. for $N=34$. Using Eq.\ref{cond} the estimator for $\Delta G(\infty)$ can be written as \begin{equation} \Delta G(\infty) = a_{N=34} \Delta G(N=34) \pm \sigma_{e,N=34}. \end{equation} For the present data this can be explicitly written as \begin{equation} \label{Gest} \Delta G(\infty) = 0.71 [\Delta G(N=34) \pm 0.36] . \end{equation} Using standard statistical analysis one can, e.g., determine the probability that a team with a better goal difference $\Delta G$ (i.e. $\Delta G_1> \Delta G_2)$ is indeed the better team. For the present data it turns out that for $\Delta G_1 - \Delta G_2 = 0.36$ (corresponding to an absolute value of 12 goals after 34 matches) the probability is approx. 24\% that the team with the worse goal difference is nevertheless the better team. In analogy, one can estimate from Eq.\ref{point} that two teams which after the season are 10 points apart have an incorrect order in the league table, based on their true fitness, with a probability of 24\%. Maybe this figure more dramatically reflects the strong random component in soccer. \subsection{Prediction of team fitness: Application} These results can be taken to quantify the uncertainty when predicting $\Delta G_i(M)$ of team $i$. More specifically, we assume that this prediction is based on the knowledge of the results of the $N$ previous matches of team $i$. The variance of the estimate of $\Delta G(M)$ is denoted $\sigma^2_{est}(M,N)$. This notation reflects the fact that it depends on both the prediction time scale $M$ as well as the information time scale $N$. To estimate $\Delta G_i(M)$, based on $\Delta G_i(N)$, two uncertainties have to be taken into account. First, the uncertainty of estimating $\Delta G_i(\infty)$ is characterized by $\sigma^2_{e,N}$. Second, even if $\Delta G( \infty)$ were known exactly, the statistical uncertainty of estimating $\Delta G(M)$ due to the finite $M$ is still governed by the variance $\sigma^2_{\Delta G (M),stat}$. Thus, one obtains \begin{equation} \label{pred} \sigma^2_{est}(M,N) = \sigma^2_{e,N} + \sigma^2_{\Delta G (M),stat} \end{equation} For the specific choice $M=17$, i.e. for the prediction of the second half of the season, the standard deviation $17 \cdot \sigma_{est}(M=17,N)$ of the estimator (expressed in absolute number of goals) is displayed in Fig.\ref{prediction}. First, we discuss the extreme cases. In the practically impossible case that the fitness is exactly known (formally corresponding to $N \rightarrow \infty$) one obtains a standard deviation of approx. 7 goals. In the other extreme limit where no information is available, i.e. $N = 0$) one obtains a value of approx. 10.5 goals. Thus the difference between complete information and no information for the prediction of the second half of the season is only 3.5 goals. Finally, for the interpretation of the results in Fig.\ref{deltag_basic} one has to choose $N=17$. As shown in Fig.\ref{prediction} the observed standard deviation of $17\cdot 0.51 \approx 8.7$ agrees well with the theoretical value based on Eq.\ref{pred}. The remaining deviations (8.7 vs. 8.9) might reflect the non-Gaussian contributions to $q(\Delta G(\infty))$. From Eq.\ref{point} one can estimate in analogy to above that, based on the knowledge of the points for the first half, the number of points for the second half can be estimated with a standard deviation of approx. 6 points. Of course, according to our previous discussion the estimation would be slightly better if the value of $\Delta G$ rather than the number of points of the first half were taken as input. \begin{figure} \includegraphics[width=7cm]{prediction.eps} \caption{\label{prediction} The function $17 \sigma_{est}(M=17,N)$, describing the uncertainty for the prediction of the goal difference during the second half of the season based on the knowledge of $N$ matches. Included is the data point, observed numerically in Fig.\ref{deltag_basic}.} \end{figure} \subsection{Going beyond the team fitness $\Delta G$} So far we have characterized the fitness of a team $\Delta G$. From a conceptual point of view the most elementary quantities are the number of goals $G_+$, scored by a team, as well as the number of goals $G_-$ conceded by this team ( $\Delta G = G_+ - G_-$). Correspondingly, $\langle G_\pm \rangle $ denotes the average number of goals per team and match. The brackets denote the corresponding average. Since the subsequent analysis can be also used for prediction purposes we restrict ourselves to all years since the season 1995/96 when the 3-point rule had been introduced. The above analysis, performed for $\Delta G$, can be repeated for $G_\pm$. The general notation reads ($M \in \{G_+,G_-$\}) \begin{equation} \sigma^2_{M(N)} = \sigma^2_{M} + \frac{b_M}{N}. \end{equation} The fitting parameters are listed in Tab.II. We note in passing that all statistical features, described in this Section, are observed in the English Premier League, too. For reasons of comparison the resulting parameters are also included in Tab.II. \begin{table} \label{tab2} \centering \begin{tabular}[t]{|l|c|c|c|c|c|c|}\hline & $\langle G_\pm \rangle$ & $\sigma^2_{G_+}$& $b_{G_+}$ & $\sigma^2_{G_-}$& $b_{G_-}$ & $c_{+,-}$\\ \hline Bundesliga & 1.43 & 0.075 & 1.45 & 0.055 & 1.50 & 0.71 \\ \hline Premier League & 1.29 & 0.075 & 1.40 & 0.060 & 1.40 & 0.85 \\ \hline \end{tabular} \caption{ Statistical parameters, characterizing the Bundesliga (1995/96-2007/08) and the English Premier League (1996/97-2006/07). } \end{table} For a complete understanding of the goal statistics one has to include possible correlations between $G_+$ and $G_-$, i.e. \begin{equation} c_{+,-}(N) = \frac{\langle (G_+ - \langle G \rangle) ( \langle G \rangle - G_-)}{\sigma_{G_+}\sigma_{G_-}} . \end{equation} This value reflects the correlation of a team's strength of attack and defence. Complete correlation means $c_{+,-}(N) = 1$. The statistical effects during a soccer match, related to $G_+$ and $G_-$, are likely to be statistically uncorrelated. As a consequence one would not expect a significant $N$-dependence. Indeed, we have verified this expectation by explicit calculation of $c_{+,-}(N)$ which within statistical uncertainty is $N$-independent. We obtain $c_{+,-} = 0.71$. This information is sufficient to calculate $\sigma^2_{M(N)}$ for $M\in \{\Delta G \equiv G_+ - G_-,\Sigma G \equiv G_+ + G_-\}$ via $\sigma^2_{(G_+ \pm G_-)(N)} = \sigma_{G_+(N)}^2 + \sigma_{G_-(N)}^2 \mp 2c_{+,-} \sigma_{G_+}\sigma_{G_-}$. One obtains $\sigma^2_{\Delta G}(N) = 0.22 + 2.95/N$ and $\sigma^2_{\Sigma G}(N) = 0.03 + 2.95/N$. $\sigma^2_{\Delta G}(N)$ agrees very well with the data, reported above for the time interval 1987/88-2007/08. Based on this detailed insight into the statistical nature of goals several basic questions about the nature of soccer can be answered. Are offence or defence abilities more important? The magnitude of the variance $\sigma^2_{M}$ is a direct measure for the relevance of the observable $M$. Since $\sigma^2_{G_+} / \sigma^2_{G_-} =1.25 \pm 0.09 > 1$ the investment in good strikers may be slightly more rewarding. However, the difference is quite small so that to first approximation both aspects of a soccer match are of similar importance. Do teams with good strikers also have a good defence? In case of a strict correlation one would have $c_{+,-} = 1$. The present value of 0.71 indicates that there is indeed a strong correlation. However, the residual deviation from unity reflects some team dependent differences beyond simple statistical fluctuations. Interestingly, this correlation is significantly stronger in the Premier League, indicating an even stronger balance between the offence and the defence in a team of the Premier League. Is the total number of goals of a team (i.e. $G_+ + G_-$) a team-specific property? On average this sum is 97. Without statistical effects due to the finite length of a season the standard deviation of this value would be just $ 34 \sigma_{\Sigma G } \approx 6$, i.e. only a few percent. Thus, to a very good approximation the number of goals on average scored by team $i$ is just given by $G_{+,i} = \langle G_\pm \rangle + \Delta G_i/2$ (an analogous formula holds for $G_{-,i}$). \section{Soccer myths} In typical soccer reports one can read that a team is particularly strong at home (or away) or is just playing a winning streak ({\it Lauf} in German) or a losing streak. Here we show that the actual data does not support the use of these terminologies (except for the presence of losing streaks). \subsection{Home fitness} One may ask the general question whether the overall fitness $\Delta G$ of the team {\it fully} determines the {\it home fitness}, i.e. the team quality of playing at home. If yes, it would be useless and misleading to define a team-specific home fitness because it is not an independent observable but just follows from the overall fitness $\Delta G(\infty)$. For the present analysis we use again our standard data set starting from 1987/88. To discuss the ability of a team to play at home as compared to play away we introduce $\Delta G_H(N)$ and $\Delta G_A(N)$ as the goal difference in $N$ home matches and $N$ away matches, respectively. Of course, one has $\Delta G_H (N) + \Delta G_A (N) = \Delta G (2N)$. The {\it home advantage} can be characterized by \begin{equation} \Delta (\Delta G) = \Delta G_H - \Delta G_A. \end{equation} The average value $\langle \Delta (\Delta G) \rangle $ is approx. 1.4, which denotes the improved home goal difference as compared to the away goal difference. This number also means that on average a team scores 0.7 more goals at home rather than away whereas 0.7 goals more are conceded by this team when playing away. We note in passing that the home advantage is continuously decreasing with time. Just taking the seasons since 1995/96 one gets, e.g., $\Delta (\Delta G) \approx 1.0$. A team-specific home fitness could be characterized by $\Delta (\Delta G)_i - \langle \Delta (\Delta G)\rangle $. A positive value means that team $i$ is better at home than expected from the overall fitness $\Delta G$. Of course, again one has to consider the limit $N \rightarrow \infty$. Thus, in analogy to the previous Section one has to perform a scaling analysis. After $N$ matches $\Delta (\Delta G)(N)$ will be distributed with a variance, denoted $\sigma^2_{\Delta (\Delta G) (N)}$. A positive value of the large $N$-limit $\sigma^2_{\Delta (\Delta G)}$ reflects the presence of a home fitness. Otherwise the quality of a team for a match at home (or away) is fully governed by the overall fitness $\Delta G(\infty)$. \begin{figure} \includegraphics[width=7cm]{heimst_n.eps} \caption{\label{heimst_n} The variance of $\Delta (\Delta G)$, i.e. $\sigma^2_{\Delta (\Delta G(N))}$ vs. $1/N$. The straight line is a linear fit. The extrapolation to $N=\infty$ yields approx. $-0.003 \pm 0.016$.} \end{figure} The $N$-dependence of $\sigma^2_{\Delta (\Delta G) (N)}$ is shown in Fig.\ref{heimst_n}. To obtain these data one has to evaluate the appropriate expression for the empirical variance for this type of analysis which is a slightly tedious but straightforward statistical problem. The statistical error has been estimated from performing this analysis for the individual years. It becomes clear that the hypothesis $\sigma^2_{\Delta (\Delta G) (\infty)} =0$ is fully compatible with the data. Because of the intrinsic statistical error one cannot exclude a finite value of $\sigma_{\Delta (\Delta G) (\infty)}$ ($\sigma_{\Delta (\Delta G) (\infty)} < 0.12)$. This value is less than 10\% of the average value $\langle \Delta (\Delta G) \rangle = 1.4$. Thus, the presence of teams which are specifically strong at home relative to their overall fitness is, if at all, a very minor effect. Although this result rules out the presence of a relevant team-specific home fitness it may be illuminating to approach the same problem from a direct analysis of the whole distribution of $\Delta (\Delta G)(N=17)$. The goal is to compare it with the distribution one would expect for the ideal case where no team-specific home fitness is present. This comparison, which is technically a little bit involved, is shifted to Appendix II. It turns out that the residual home fitness can be described by a value of $ 0 \le \sigma_{\Delta (\Delta G)}\ll 0.4$. This means that in particular the simple model, sketched above, is not compatible with the data. In summary, relative to the average home advantage of 1.4 any possible residual home fitness is a negligible effect. In literature it is often assumed that for a specific match of team A vs. team B one can a priori define the expectation value of goals $t_{A(h)}$ and $t_{B(a)}$, scored by the home team A and the away team B, respectively. In the approach of Ref.\cite{Rue00} one explicitly assumes $t_A(h) = f_{AB} \cdot c_h$ and $t_B(a) = f_{BA} \cdot c_a$ (using a different notation). Here $f_{ij}$ contains the information about the offence strength of team $i$ and the defence strength of team $j$. The information about the location of the match is only incorporated into the factors $c_h$ and $c_a$. This approach has two implicit assumptions. First, the fact that $c_h$ is team-independent is equivalent to the assumption that there is no team-specific home fitness. This is exactly what has been shown in this Section. Second, the average number of goals of, e.g., the home team is proportional to the average number, expected in a neutral stadium. For reasons of convenience this number can be chosen identical to $f_{AB}$. Then, $c_h > 1$ takes into account the general home advantage. The same holds for $c_a < 1$. Assuming the multiplicative approach one has to choose \begin{equation} c_{h,a} = \frac{\langle G_{\pm} \rangle \pm \langle \Delta (\Delta G)\rangle }{\langle G_{\pm} \rangle}. \end{equation} which for the present case yields $c_h/c_a \approx 1.45$ In principle, one might have also added some fixed value to take into account the home advantage. Thus, the multiplicative approach is not unique. However, using the above concepts, one can show that this approach is indeed compatible with the data. For this purpose we introduce the observables $M\in \{G_{+,h},G_{+,a}, G_{-,h}, G_{-,a}\}$. $G_{\pm,h}$ denotes the number of goals scored and conceded by the home team. An analogous definition holds for $G_{\pm,a}$. In analogy to above one can calculate $\sigma^2_{M}$ obtained again from the $N \rightarrow \infty$-extrapolation of the respective observable. One obtains $\sigma^2_{G_{+,h}} = 0.089 , \sigma^2_{G_{+,a}} = 0.044 ,\sigma^2_{G_{-,h}} = 0.033, \sigma^2_{G_{-,a}} = 0.069$. If the properties of home and away goals are fully characterized by the factors $c_{h,a}$ one would expect $\sigma_{G_{+,h}}/\sigma_{G_{+,a}} = \sigma_{G_{-,a}}/\sigma_{G_{-,h}} = c_h/c_a$. The two ratios read 1.4 and 1.45, respectively, and are thus fully compatible with the theoretically expected value of 1.45. In case of an additive constant to account for the home advantage one would have expected a ratio of 1 because then the distributions would have been just shifted to account for the home advantage. In practical terms this allows one to correct the results of soccer matches for the home advantage by dividing the number of goals in a match by $c_h$ and $c_a$, respectively. This correction procedure may be of interest in cases where one wants to identify statistical properties without being hampered by the residual home advantage. Using this procedure for the data points in Fig.6 the data points for odd $N$ would also fall on the regression line. We just mention in passing that in the limit of small $\langle \Delta(\Delta G)\rangle/\langle G_\pm \rangle$ and small $\sigma_{G_\pm} / \langle G_\pm \rangle$ (which in practice is well fulfilled) this scaling yields similar results as compared to a simple downward shifting of the home goals and upward shifting of the away goals by $\langle \Delta(\Delta G)\rangle /2$. \subsection{Streaks} The aspect of identifying winning or losing streaks is somewhat subtle because one has to take care that no trivial selection effects enter this analysis. Here is one example of such an effect. Evidently, in case of a winning streak it is likely that during this period the team played against somewhat weaker teams and will, subsequently, on average play against somewhat stronger teams. Thus, to judge the future behavior of this team one needs a method which takes these effects in a most simple way into account. To obtain a sufficiently good statistics here we use our complete data set, starting from the season 1965/66. The key question to be answered here is whether or not the presence of a winning or losing sequence stabilizes or destabilizes a team or maybe has no effect at all. If a winning sequence stabilizes a team one may speak of a winning streak. Analogously, if a losing sequence destabilizes a team one has a losing streak. In general, we have identified all sequences of $n$ successive matches where $n$ wins or losses were present. Of course, the actual length of the win or loss sequences can have been much longer. Having identified such a sequence we have determined the probability that in the $m$-th match after this sequence that team will win. This probability is denoted $p_{win}(m,n)$. This is sketched in Fig.\ref{sketch_series} for the case $n=4$. \begin{figure} \includegraphics[width=7cm]{sketch_series.eps} \caption{\label{sketch_series} Sketch of the definitions of $n$ and $m$ for the analysis of the possible existence of winning and losing streaks. } \end{figure} In a first step we analyze the winning probability in the next match, i.e. for $m=1$. The data are shown in Fig.\ref{series_simple}. In case of winning sequence the probability to win increases with increasing $n$. The opposite holds for a losing sequence. Does this indicate that the longer the winning (losing) sequence, the stronger the (de)stabilization effect, i.e. real winning or losing streaks emerge? \begin{figure} \includegraphics[width=7cm]{series_simple.eps} \caption{\label{series_simple} The probability $p_{win}(n,m)$. to win after a team as won or lost n times.} \end{figure} This question has been already discussed in Ref. \cite{Dobson03}. It was correctly argued that by choosing teams which have, e.g., won 4 times one typically selects a team with a high fitness. This team will, of course, win with a higher probability than an average team (selected for $n=0$). Thus the increase of the win probability with $n$ is expected even if no stabilizing effect is present. It would be just a consequence of the presence of the fitness distribution and thus of good and bad teams, as shown above. Only if all teams had the same fitness the data of Fig.\ref{series_simple} would directly indicate the presence of a stabilization and destabilization effect, respectively. The key problem in this analysis is that the different data points in Fig.\ref{series_simple} belong to different subensembles of teams and thus cannot be compared. Therefore one needs to devise an analysis tool, where a fixed subensemble is taken. The realization of this tool is inspired by 4D NMR experiments, performed in the 90s in different groups to unravel the properties of supercooled liquids \cite{Klaus,Wilhelm,epl}. The key problem was to monitor the time evolution of the properties of a specific subensemble until it behaves again like the average. This problem is analogous to that of a soccer team being selected because of $n$ wins or losses in a row. This idea can be directly applied to the present problem by analyzing the $m$-dependence of $p_{win}(m,n)$. It directly reflects possible stabilization or destabilization effects. In case of a stabilization effect $p_{win}(m)$ would be largest for $m=1$ and then decay to some limiting value which would be related to the typical fitness of that team after possible effects of the series have disappeared. In contrast, in case of a destabilization effect $p_{win}(m=1)$ would be smaller than the limiting value reached for large $m$. Note that in this way the problem of different subensembles is avoided. Furthermore this analysis is not hampered by the fact that most likely the opponents during the selection period of $n$ matches were on average somewhat weaker teams. The limiting value has been determined independently by averaging $p_{win}(m,n)$ for $|m| > 8$, i.e. over matches far away from the original sequence. To improve the statistical quality this average also includes the matches sufficiently far before the selected sequence (formally corresponding to negative $m$). Of course, only matches within the same season were taken into account. It is supposed to reflect the general fitness of a team during this season (now in terms of wins) independent of that sequence. In case of no stabilization or destabilization effect the observable $p_{win}(m,n)$ would not depend on $m$. This would be the result if playing soccer would be just coin tossing without memory. To avoid any bias with respect to home or away matches we only considered those sequences where half of the matches were home matches and and the other half away matches ($n$ even). Furthermore, the data for $p_{win}(m,n)$ are averaged pairwise for subsequent $m$ (1 and 2, 3 and 4, and so on). \begin{figure} \includegraphics[width=7cm]{series2.eps} \caption{\label{series2} The probability to win $p_{win}(m,n=2)$ after a sequence of $n=2$ wins and losses, respectively. The broken lines indicate the range ($\pm 1\sigma$-interval) of the plateau value reached for large $m$.} \end{figure} \begin{figure} \includegraphics[width=7cm]{series4.eps} \caption{\label{series4} Same as in the previous figure for $n=4$. In addition we have included data where only away matches of the teams are considered for the calculation of $p_{win}(m,n=4)$ in case of a win sequence. } \end{figure} The functions $p_{win}(m,n)$ for $n=2$ and $n=4$ are shown in Figs. \ref{series2} and \ref{series4}, respectively. For $n=4$ a total of 374 win sequences and 384 loss sequences have been taken into account. For $n=2$ one observes a small but significant destabilization after a loss sequence. It takes approx. 8 matches to recover. No effects are seen for the win sequence. More significant effects are visible for $n=4$. For the loss sequence one observes that directly after the selected sequences, i.e. for $m=1$ and $m=2$ the winning probability is reduced by approx. 30\% as compared to the limiting value. Thus for about 6 matches the teams play worse than normal. Surprisingly, a reduction of $p_{win}(m,n=4)$ for small $m$ is also visible for the win sequence. Thus, there seems to be a destabilization rather than a stabilization effect. By restricting the analysis to the away matches after the selected sequence, this effect is even more pronounced. Of course, correspondingly the effect is smaller for home matches. Unfortunately, $n=6$ can no longer be analyzed because due to the small number of events the statistics is too bad. Of course, a critical aspect in this discussion is the matter of statistical significance. For this purpose we have estimated the probability that, using Gaussian statistics, the average of the first four matches after a win sequence can be understood as an extreme statistical deviation from the final plateau value. This probability turns out to be smaller than $10^{-3}$. Furthermore we analyzed shuffled data, i.e. where for a given team in a given season the 34 matches are randomly ordered. The results for $p_{win}(m,n=4)$, using one example of ordering, are shown in Fig.\ref{series4_shuffle}. As expected no effect is seen. The observation that the plateau values are somewhat lower than in Fig.\ref{series4} just reflects the fact the the first data points (small $m$) in Fig.\ref{series4} are systematically lower than the respective plateau value. \begin{figure} \includegraphics[width=7cm]{series4_shuffle.eps} \caption{\label{series4_shuffle} Analysis of loss and win sequences, using shuffled data.} \end{figure} Thus, we conclude that both a positive ($n = 4$) as well as a negative sequence $(n = 2,4)$ have a destabilizing effect. This means that losing streaks indeed exist whereas there are no stabilization effects for positive sequences, invalidating the notion of a winning streak. Rather destabilization effects occur after a longer winning sequence. This asymmetry between positive and negative sequences is already reflected by the asymmetry, seen in Fig.\ref{series_simple}. Actually, the present results disagree with the statistical analysis in Ref.\cite{goddard01} for the Premier League. In that work it is concluded that sequences of consecutive results tend to end sooner than they should without statistical association. However, the presence of losing streaks has been clearly demonstrated above. The disagreement might be due to the different data set (Bundesliga vs. Premier League). However, one needs to take into account that in that work the results have been obtained within a framework of a specific model via Monte-Carlo simulations. The present analysis has the advantage that, first, it does not to refer to any model about the nature of soccer and, second, can be done without additional Monte-Carlo simulations. Thus, possible artifacts of the model might hamper the interpretation of the data. \section{Discussion and Summary} On a conceptual level we have used finite-size scaling methods to extract the underlying distribution of fitness parameters. It turns out that the goal difference is a better measure of the team fitness than the number of points. From a technical point of view a key aspect was to analyze the $N$-dependence of observables such as $\Delta G$. This problem is analogous to the simple physical problem of random walks with a drift. The key results can be summarized as follows. 1.) The fitness of a team displays a complex temporal evolution. Within a season there are no indications for any variations (except maybe for day-to-day fluctuations around some average team fitness which can only be identified via a single-match analysis. This is, however, beyond the scope of the present work). During the summer-break a significant decorrelation is observed. This short-scale decorrelation stops after around 2 years where approx. 40\% of the fitness has been changed (some teams becoming better, some worse). Interestingly, the remaining 60\% of the fitness only decorrelates on an extremely long times scale of 20-30 years which is close to the data window of our analysis. This shows that there are dramatic persistence effects, i.e. there are some underlying reasons why good teams remain good on time scales largely exceeding the lifetime of typical structures in a club (manager, coach, players etc.). 2.) For finite seasons (which, naturally, is realized in the actual soccer leagues) the fitness of a team can be only roughly estimated because of the presence of residual statistical fluctuations. However, by linear extrapolation of the variance of the team fitness one can identify the underlying variance one would (hypothetically) obtain for an infinite number of matches. Based on this one can estimate the statistical contribution to the end-of-the-season table which is quite significant (36\% for points). This allows one to quantify, e.g., the relevance of the final league table in some detail. 3.) The overall fitness, defined via the goal difference $\Delta G$, is to a large extent the only characteristics of a team. In particular there is no signature of the presence of a team specific home fitness. We would like to stress that the definition of a home fitness is always relative to a single season. This means if a team is strong at home in one year and weak in another year this would nevertheless show up in the present analysis. Whenever a team plays better or worse at home than expected (measured via $\Delta G_H$ - $\Delta G_A$) this effect can be fully explained in terms of the natural statistical fluctuations, inherent in soccer matches. 4.) A more detailed view on the number of goals reveals that the quality of the offence and that of the defence of a team is strongly correlated. In case of a perfect correlation their quality would be fully determined by the overall fitness. However, since the correlation is not perfect there exist indeed differences. Furthermore, the strength of attack is slightly more important for a successful soccer team than the strength of defence, albeit the difference is not big. 5.) It is possible to identify the impact of the home-advantage for the final result. Stated differently, one can estimate the average outcome of a match one would obtain at a neutral stadium. This procedure may be helpful if data are taken as input for a statistical analysis. 6.) The notion of streaks, as present in the soccer language, can only be confirmed in case of a losing streak. This means that if a team has lost several times (we analyzed 2 and 4 times) there is a significant drop of their fitness as compared to the normal level which will be reached again sufficiently far away from the time period. Possible reasons may be related to psychological aspects as well as the presence of persistent structural problems (such as heavily injured players). Surprisingly, no winning streak could be identified. Winning two times had no effect on the future outcome. Winning four times even reduced the fitness, in particular when having an away match. This analysis had to be performed with care in order to avoid any trivial statistical effects. Possibly, this indicates an interesting psychological effect. In literature one can find models for understanding the basis of human motivation. In one of the standard models by Atkinson a reduction of motivation may occur if the next problem {\it appears} either to be too difficult (after having lost several times) or too simple (after having won several times) \cite{atkinson}. However, since these types of sequences (for $n=4)$ of wins or losses are relatively rare they are of very minor relevance for the overall statistical description of the temporal evolution of soccer matches. Since furthermore the effect of sequences decays after a few more matches (up to 8) these observations are consistent with the notion that the fitness does not change during a season (if averaged over the time scale of at least a quarter season). Of course, a further improvement of the statistical analysis could be reached if further explanatory variables are implemented such as the possession of the ball \cite{Hirotsu03}. It would be interesting to quantify the increase of the predictive power in analogy to the analysis of this work; see, e.g., Tab.I. Whereas some of our results were expected, we had to revise some of our own intuitive views on how professional soccer works. Using objective statistical methods and appropriate concepts, mostly taken from typical physics applications, a view beyond the common knowledge became possible. Probably, for a typical soccer fan also this statistical analysis will not change the belief that, e.g., his/her support will give the team the necessary impetus to the next goal and finally to a specific home fitness. Thus, there may exist a natural, maybe even fortunate, tendency to ignore some objective facts about professional soccer. We hope, however, that the present analysis may be of relevance to those who like to see the systematic patterns behind a sports like soccer. Naturally, all concepts discussed in this work can be extended to different types of sports. Furthermore an extension to single-match properties as well as a correlation with economic factors is planned for the future. We would like to thank S.F. Hopp, C. M\"uller and W. Krawtschunowski for the help in the initial phase of this project as well as B. Strauss, M. Tolan, M. Trede and G. Schewe for interesting and helpful discussions. Furthermore we would like to thank H. Heuer for bringing the work of Atkinson to our attention. \section{Appendix I} Here we consider a simple model which further rationalizes the statement that observables with larger Pearson correlation coefficients (correlation between first and second half of season) are better measures for the fitness of a team. This holds independent of whether or not the true fitness changes during a season or remains constant. We assume that the true fitness of a team $i$ at time $j$ ($j$ may either reflect a single match or, e.g., the average fitness during the $j$-th half of the season) can be captured by a single number $\mu_{i,j}$. Evidently, the true fitness $\mu_{i,j}$ of team $i$ is not exactly known. The variance of the fitness $\sigma_\mu^2$ is assumed to be time independent, which just reflects stationarity. In the experiment (here: soccer match) one observes the outcome $x_{i,j}$ which may, e.g., correspond to the goal difference or the number of points of team $i$ at time $j$. We assume a Markovian process, i.e. the outcome at time $j$ is not influenced by the outcome in previous matches. Naturally $x_{i,j}$ is positively correlated with $\mu_{i,j}$. Without loss of generality we assume that the $\langle \mu_{i,j} \rangle_i = \langle x_{i,j} \rangle_i = 0$. The index $i$ reflects the fact that the averaging is over all teams. For reasons of simplicity we assume a linear relation between $x_{i,j}$ and $\mu_{i,j}$, namely \begin{equation} \label{app1} x_{i,j} = a (\mu_{i,j} + \xi). \end{equation} Here $a > 0$ is a fixed real number and $\xi$ some noise, characterized by its variance $\sigma_\xi^2$. The noise reflects the fact that the outcome of a soccer match is not fully determined by the fitness of the teams but also includes random elements. This relation expresses the fact that a team with a better fitness will on average also perform better during its matches. The key idea in the present context is to use the outcome of matches to {\it estimate} the team fitness. The degree of correlation between $x_{i,j}$ and $\mu_{i,j}$ is captured by the correlation coefficient \begin{equation} \label{app2} c_{x_j,\mu_j} = \frac{\langle x_{i,j} \mu_{i,j} \rangle_i}{\sigma_x \sigma_\mu}. \end{equation} A large value of $c_{x_j,\mu_j}$ implies that the estimation of $\mu_{i,j}$, based on knowledge of $x_{i,j}$ works quite well. Thus, one may want to search for observables $x_{i,j}$ with large values of $c_{x_j,\mu_j}$. Unfortunately, since $\mu_{i,j}$ cannot be measured $c_{x_j,\mu_j}$ is not directly accessible from the experiment. The theoretical expectation reads (see Eqs. \ref{app1} and \ref{app2}) \begin{equation} \label{cxm} c_{x_j,\mu_j} = \frac{\sigma_\mu}{\sqrt{\sigma_\mu^2 + \sigma_\xi^2}}. \end{equation} For a closer relation to the general experimental situation one has to take into account that the team fitness may somewhat change with time. This can be generally captured by the correlation factor \begin{equation} c_{\mu_j,\mu_{j+1}} = \frac{\langle \mu_{i,j+1} \mu_{i,j} \rangle}{\sigma_\mu^2}. \end{equation} Experimentally accessible is the correlation of $x_{i,j}$ for two subsequent time points $j$ and $j+1$ . A short and straightforward calculation yields (using Eq.\ref{cxm}) \begin{equation} c_{x_j,x_{j+1}} = c_{\mu_j,\mu_{j+1}}[c_{x_j,\mu_j}]^2. \end{equation} This result shows that {\it independent} of the possible decorrelation of the true fitness $\mu$ observables $x$ with a larger correlation coefficient $c_{x_j,x_{j+1}}$ display larger $c_{x_j,\mu_j}$, i.e. form a better measure for the true fitness $\mu$. This is the line of reasoning used to identify $\Delta G$ as a better fitness measure than the number of points independent of whether or not $\Delta G$ changes during a season. To go beyond this key statement we specify the loss of correlation of the true fitness via the simple linear ansatz \begin{equation} \mu_{i,j+1} = b \mu_{i,j} + \epsilon. \end{equation} Here the noise term is characterized by the variance $\sigma_\epsilon^2$. For reasons of simplicity we assume that the random-walk type dynamics is identical for all teams. Stationarity is guaranteed exactly if \begin{equation} \sigma_\epsilon^2 = \sigma_\mu^2(1 - b^2). \end{equation} Constant fitness naturally corresponds to $b=1$ and $\sigma_\epsilon = 0$. Of particular interest for the present work is the average of $x_{i,j}$ over $N$ times (e.g. $N$ matches if $j$ counts the matches). Here we define \begin{equation} X_{i}(N) = \frac{\sum_{j=1}^N x_{i,j}}{N}. \end{equation} The variance of this average, denoted $\sigma_{X(N)}$ can be calculated in a straightforward manner. The result reads \begin{equation} \label{X_exact} \sigma_{X(N)}^2 = \frac{a^2\sigma_\mu^2}{N^2} \left [ N + 2b\frac{N-1-Nb+b^N}{(1-b)^2}\right ] + \frac{a^2 \sigma_\xi^2}{N}. \end{equation} For $b=1$ one obtains $\sigma_{X(N)}^2 = a^2 \sigma_\mu^2 + a^2\sigma_\xi^2/N$. Thus, in case of constant team fitness one gets a linear behavior in the $1/N$ representation and the limit value just corresponds to the variance of the team fitness (apart from the trivial constant $a$). This implies that by extrapolation one can get important information about the underlying statistics, as described by the true team fitness $\mu_{i,j}$. This just reflects the fact that for sufficient averaging the noise effects become irrelevant. For $b < 1 $, however, one has a crossover from that behavior to $\sigma_{X(N)}^2 = a^2 \sigma_\mu^2[(1+b)/(1-b)]/N + a^2\sigma_\xi^2/N$ for large $N$, thus approaching zero for large $N$. Since $\sigma^2_{\Delta G}(N)$ did not show any bending we have concluded in the main text that the data do not indicate a decorrelation of the fitness within a single season. \section{Appendix II} Here we discuss in more detail the distribution of $\Delta(\Delta G)(N=17)$ shown in Fig.\ref{heimstaerke}. Of course, it has a finite width due to statistical effects. Our goal is to compare this distribution with a second distribution which is generated under the assumption that no specific home fitness exists. For this purpose we have defined, for each team in a given season, the random variable $\Delta G_1 - \Delta G_2$. Here the first term contains the average of the goal differences of some 17 matches and the second term the average over the remaining 17 matches. The 34 matches were attributed to both terms such that the number of home matches of the first term is 9 (or 8) and that of the second team is 8 (or 9), respectively. Then we have generated the distribution of $\Delta G_1 - \Delta G_2$. In order to get rid of the residual home effect (9 vs. 8) we have shifted this curve so that the average value is 0. This procedure has been repeated for many different mappings of this kind and for all seasons. The resulting curve is also shown in Fig.\ref{heimstaerke}. It reflects the statistical width of $\Delta (\Delta G)$ after a season if no home advantage were present. It can be very well described by a Gaussian. When shifting this distribution by the value of the average home advantage one obtains an estimate of the distribution of $\Delta (\Delta G)$ for $\sigma^2_{\Delta (\Delta G)}= 0$. To be consistent with this procedure we have generated the distribution of $\Delta (\Delta G)(N=17)$ in an analogous way. We have calculated this distribution for every individual season and shifted each curve so that the mean agrees with the overall mean. In this way we have removed a possible broadening of this curve due to the year-to-year fluctuations of the general home advantage. \begin{figure} \includegraphics[width=7cm]{heimstaerke.eps} \caption{\label{heimstaerke} Analysis of the home fitness. The squares correspond to the actual distribution of $\Delta (\Delta G)$. This curve is compared with the estimation for $\sigma_{\Delta (\Delta G)} = 0$ and $\sigma_{\Delta (\Delta G) } = 0.4$. For more details see text.} \end{figure} In agreement with the discussion of Fig.\ref{heimst_n} one observes a good agreement with the actual distribution of $\Delta (\Delta G)$. By convolution of this distribution with a Gaussian with variance $\sigma^2_{\Delta (\Delta G)}$ one can get information about the sensitivity of this analysis. Choosing, e.g., $\sigma_{\Delta (\Delta G)} = 0.4$, one can clearly see that this choice is not compatible with the actual distribution of $\Delta (\Delta G)$. Thus, if at all, the residual home fitness can be described by a value of $\sigma_{\Delta (\Delta G)}$ significantly smaller than 0.4. In the main text we have derived an upper limit of 0.12.
{ "attr-fineweb-edu": 2.158203, "attr-cc_en_topic": 0, "domain": "arxiv" }
BkiUf0nxK0zjCxh75zfX
\section{Introduction} Player tracking in team sports consists in detecting and identifying the players in video sequences. It is a necessary task to automate the generation of individual statistics such as ball possession, field position or involvement in play sequences. Player tracking in team sports such as rugby is however a challenging task. Rugby is a sport of physical contact where player occlusions are very frequent on camera during rucks, tackles and scrums. The players can also adopt a wide range of body postures from sprinting to laying on the ground in a foetal position. Players from the same team share a very similar appearance since they wear the same jerseys. Moreover, the number of pixels in which the players are visible is often limited in the case of a TV stream (sometimes with a height below 150 pixels). This prevents the access to fine identification details. \begin{figure} \begin{center} \includegraphics{images/captures.pdf} \end{center} \caption{Tracking French players (blue jerseys) in our rugby sevens dataset: a. France / Kenya extract, b. Argentina / France extract, c. France / Chile extract.} \label{fig:captures} \end{figure} Player tracking is a specific Multi-Object Tracking (MOT) problem. MOT has been widely studied in the literature. Security applications have lead to the development of many people tracking approaches. Offline methods use all the frames of the input video sequence to optimize the generation of tracks while online methods target real-time applications by relying only on the current and previous frames to generate the tracks. The most recent frameworks achieve the best performance using deep neural network architectures. The availability of large public datasets and challenges such as the MOT challenge \cite{milan2016mot} allows to fairly train and compare the various approaches. Some recent people tracking methods have also been proposed for the specific context of team sports: soccer \cite{zhang2020multi, hurault2020self}, basketball \cite{lu2013learning} and hockey \cite{vats2021player}. These methods often use private datasets specific to their studied sport to get competitive results evaluated on short video clips extracted from a match. Game-specific annotations are required to train a player tracking and identification system to adapt to the player identities and the context of a game. The number of such annotations is an important factor that will determine the success of using such a system in real world scenario. Little focus was made in previous work on the practicability of this annotation process. Consequently, we propose an incremental learning approach to identify players with very few game-specific annotations. Our method is offline: it tracks and identifies players once the game has been completed. It benefits from the closed gallery re-identification (re-ID) hypothesis as, contrary to video surveillance, the number of players is known and limited. Since our method does not use any sport-specific knowledge, it can be applied to any team sport. Our annotation process consists in several steps. Bounding boxes around all the persons in the frames are first extracted from the input video to generate non-ambiguous tracklets. A tracklet is the uninterrupted sequence of the bounding box images of a single player. Tracklets can have a variable length since a player can enter or leave the camera field of view or be occluded by an other player. At this stage, the user provides few annotations per player to train the tracklet re-ID network. Finally, the obtained tracklet classification scores or appearance features feed an algorithm that look for an optimal association between tracklets and identities. The contributions of this paper are the following: We tackle the sparsity of training data in team sport contexts by leveraging generic detection and re-ID datasets. The detection network is only learned on a public dataset. The re-ID network is pretrained on a video surveillance public dataset. We propose a new architecture based on a Transformer network \cite{vaswani2017attention} to classify and generate tracklet appearance features. An incremental learning mechanism using few user interactions trains this model and strengthen the re-ID performances throughout the whole annotation process. Some datasets have been proposed for basketball \cite{delannay2009detection} and soccer \cite{dorazio2009semi} player tracking with multiple static cameras. However, although Deliege et al. \cite{deliege2021soccernet} are extending their SoccerNet dataset to tracking and re-ID, no dataset with a moving point of view has been made available. We publicly release our rugby sevens tracking dataset composed of single-view videos that can pan, tilt or zoom to follow the action. It is one of the most challenging team sport for tracking on which no approach was tested. We demonstrate the efficiency of our approach on our dataset. On a full game, it can achieve up to 67.9\% detection and identity classification recall when the players are sufficiently visible with only 6 annotations per player. The paper is organized as follows: Section~\ref{sec:sota} introduces Related Work. Our method is described in Section~\ref{sec:method}. Finally, Section~\ref{sec:results} provides our results on our challenging rugby sevens dataset, compares them to state-of-the-art methods and analyzes them in an ablative study. \section{Related Work} \label{sec:sota} \subsection{Multiple people tracking} Two categories of MOT algorithms can be distinguished. \textbf{Offline methods} leverage the full sequence of images to globally optimize the generated tracks with a graph paradigm. The vertices are the detections on each frame and the edges are the connections between detections that form tracks. Thus, Zhang et al. \cite{zhang2008} uses a minimum cost flow iterative algorithm that models long term occlusions. The approach described by Berclaz et al. \cite{berclaz2011} takes only an occupancy map of detection as input and applies a k-shortest path algorithm on the flows. More recently, Brasó and Leal-Taix \cite{braso2020learning} proposed a fully differentiable network that learns both the appearance and geometrical feature extraction as well as the detection association to generate tracks. Hornakova et al. \cite{hornakova2020lifted} use lifted edges to model long term interactions and generate the optimized solution with a linear programming relaxation. \textbf{Online methods} only use the current and past frames to associate new detections with tracks. They have raised more interest in the literature as they fit real time scenario. Thus, the SORT algorithm \cite{bewley2016} uses a Faster R-CNN \cite{ren2015} person detector. Then, a Kalman filter \cite{kalman1960new} predicts the future positions of each track. The Intersection-Over-Union (IOU) between these predictions and the detected bounding boxes are used as inputs of an Hungarian algorithm that matches the detection with the tracks. ByteTrack \cite{zhang2021bytetrack} achieves state of the art tracking performance with a two steps association algorithm: the first step focuses on high confidence detections while the second step deals with the low confidence ones. The Deep SORT algorithm \cite{wojke2017simple} adds a re-ID network to extract the visual appearance of each person. The input data of the Hungarian algorithm becomes a combination of a Manaholis distance as the spatial term and a cosine distance between the re-ID vectors as the appearance term. Using distinct networks for detection and re-ID has the advantage of separating the two tasks that may have opposite objectives. The detection task aims at learning common features to recognize humans while the re-ID task aims at learning distinctive features of each individual. However this may cause scalability issues as each detected bounding box must be independently processed by the re-ID network. Single-shot methods were therefore proposed to generate the bounding boxes coordinates and re-ID vectors with a single network. Thus, Track-RCNN \cite{voigtlaender2019} uses a common backbone with specific heads for each task. FairMOT \cite{zhang2020fairmot} achieves better tracking performance by focusing only on the detection and re-ID tasks. Meinhard et al. \cite{meinhardt2021trackformer} use a Transformer architecture. Applying traditional MOT to team sport players usually leads to many ID switches. Each time a player leaves the vision field or is occluded too much time, a new identity is generated at reappearance. This prevents the reliable generation of individual statistics (see section \ref{comparison_generic_tracking}). \subsection{Multiple team sport player tracking and re-identification} \subsubsection{Tracking} Some tracking methods have been proposed for the context of team sports. For soccer, many approaches performed tracking by first extracting the field regions \cite{manafifard2017survey, khatoonabadi2009automatic, baysal2015sentioscope, liu2009automatic, d2009investigation, xing2010multiple}. In the method of Liu et al. \cite{liu2009automatic}, an unsupervised clustering algorithm classifies the players among four classes (two teams, referee or outlier). The tracking is formulated as a Markov chain Monte Carlo data association. D'Orazio et al. \cite{d2009investigation} classify each player with an unsupervised clustering algorithm. The tracking takes as input geometrical and motion information. It is based on a set of logical rules with a merge split strategy. In Xing et al. \cite{xing2010multiple}, the observation model of each player is composed of the color histogram in the cloth regions, the size and the motion. The tracking is formulated as particle filtering. Theagarajan and Bhanu \cite{theagarajan2020automated} used a YOLOv2 \cite{redmon2016you} network detector and a DeepSORT tracker \cite{wojke2017simple} to identify the player controlling the ball. All the previous approaches do not build individual appearance signatures per player identities. If a player leaves the camera field of view and re-enter later, he/she will be considered as a new person. This prevents the generation of individual statistics. \subsubsection{Re-identification} Jersey number recognition has been studied in the literature to identify team sport players. Ye et al. \cite{ye2005jersey} developed a method based on Zernike moments features \cite{khotanzad1990invariant}. Gerke et al. \cite{gerke2015soccer} were the first to use a convolutional neural network to classify jersey numbers from bounding box images of players. It was later combined to spatial constellation features to identify soccer players \cite{gerke2017soccer}. To ease the recognition of distorted jersey numbers, Li et al. \cite{li2018jersey} trained a branch of their network to correct the jersey number deformation before the classification. Liu and Bhanu \cite{liu2019pose} enabled jersey number recognition only in the relevant zones by detecting body keypoints. For hockey player identification, Chan et al. \cite{chan2021player} used a ResNet + LSTM network \cite{he2016deep, hochreiter1997long} on tracklet images to extract jersey numbers. When a single view is available, as in our rugby sevens dataset, jersey numbers are often not visible, partially visible or distorted. Besides, to our knowledge, there is no publicly available training dataset for team sport jersey number recognition. A solution can therefore be to use appearances to re-identify players. Teket and Yetik \cite{teket2020fast} proposed a framework to identify the player responsible for a basketball shot. Their re-ID network, based on MobileNetV2 \cite{sandler2018mobilenetv2}, is trained with a triplet loss formulation. The framework described by Senocak et al. \cite{senocak2018part} combines part-based features and multiscale global features to generate basketball player signatures. Both approaches are based, as ours, on the hypothesis of a closed gallery however they use a private dataset to train their model which makes comparisons impossible. \subsubsection{Tracking with re-identification} Several methods tracks players by using re-ID features \cite{lu2013learning, zhang2020multi, yang2021multi, hurault2020self, vats2021player}. Lu et al. \cite{lu2013learning} use DPM \cite{felzenszwalb2008discriminatively} to detect basketball players. Local features and RGB color histograms are extracted on players for the re-ID. Zhang et al. \cite{zhang2020multi} proposed a multi-camera tracker that locates basketball players on a grid based on a K-shortest paths algorithm \cite{berclaz2011}. Players are detected and segmented with a network based on Mask R-CNN \cite{he2017mask}. Re-ID features are computed thanks to the team classification, jersey number recognition and a pose-guided feature embedding. To track soccer players, Yang et al. \cite{yang2021multi} iteratively reduced the location and identification errors generated by the previous approach by creating a bayesian model that is optimized to best fit input pixel level segmentation and identification. Hurault et al. \cite{hurault2020self} use a single network with a Faster R-CNN backbone \cite{ren2015} to detect small soccer players and extract re-ID features. Kong et al. \cite{kong2021online} mix player appearance, posture and motion criteria to match new detections with existing tracks. Vats et al. \cite{vats2021player} use a Faster R-CNN network \cite{ren2015} to detect hockey players and a batch method for tracking \cite{braso2020learning}. Specific ResNet-18 networks \cite{he2016deep} are used to identify the player teams and jersey numbers. Most of the approaches presented here \cite{lu2013learning, zhang2020multi, yang2021multi, vats2021player} train their re-ID or jersey number recognition model with a private dataset. \subsubsection{Minimizing the number of annotations} To our knowledge, few previous work focus on the minimization of the game-specific training annotations for re-ID. For example, Lu et al. \cite{lu2013learning} used a mere 200 labels for every players in a team with their semi-supervised approach. Senocak et al. \cite{senocak2018part} use 2500 cropped images for each player to train their re-ID network. Teket and Yetik \cite{teket2020fast} use a training dataset that contains 30 to 1000 images per player. In this paper, by asking the user to annotate tracklets, we aim to demonstrate that it is possible to produce meaningful player re-ID results for a rugby sevens full game with only 6 annotations per player. \section{Proposed method} \label{sec:method} \subsection{Overview} \begin{figure} \begin{center} \includegraphics{images/process.pdf} \end{center} \caption{Incremental learning of tracklet classification. The user provides annotations to train the model to correctly classify the tracklets to a player identity.} \label{fig:process} \end{figure} We propose a new method to track the \(N_p\) players of a team in a video with a single moving view of a game. The first step of our method generates \( N_{t} \) tracklets we qualify as non-ambiguous because they contain a single identity. For this purpose, bounding boxes around persons are detected and associated across frames automatically. The user can then provide few identity annotations to some of the generated tracklets thanks to a dedicated interface show on Figure \ref{fig:interface_capture}. The tracklet re-ID network can then be trained with these annotations. Once the model is trained, classification scores and re-ID features are generated for all the tracklets. This data feeds an algorithm that matches every tracklet to an identity. Once the annotation interface has been updated, the user can then decide to add more annotations to correct the wrong classifications or to stop this incremental learning mechanism if she/he is satisfied by the results. The whole process is depicted on Figure \ref{fig:process}. \subsection{Tracklet generation} \label{sec:tracklet_generation} Non-ambiguous tracklets are generated with a tracking by detection paradigm. A Faster R-CNN network \cite{ren2015} with a ResNet-50 backbone \cite{he2016deep} trained on the COCO dataset \cite{lin2014microsoft} detects all the persons in the video frames. This detector is a well-known model used in several recent work \cite{hurault2020self, vats2021player}. To generate the tracklets, we use the simple and classic approach described in \cite{bewley2016}. Bounding boxes between the previous and the current frames are associated by bipartite matching with an Hungarian algorithm \cite{kuhn1955hungarian}. This matching is performed with a single IoU criteria since the player appearances are later taken into account by our tracklet re-ID model. We also use a Kalman filter \cite{kalman1960new} to predict the position of an existing track in the current frame. Each generated tracklet will be later matched to a single identity. We therefore want to avoid as much as possible identity switches inside tracklets. When a tracklet partially occludes an other one, bipartite matching may generate a wrong association. Our algorithm therefore splits the tracklets that intersect since they are considered as ambiguous. If at the current frame, two tracklet bounding boxes have an IoU above a threshold \(\mu = 0.5\) these tracklets are terminated and new ones are created. We also filter out tracklets that have a length inferior to \( l_{min} \). We indeed consider that they may also be ambiguous by containing several identities in their images. Besides, they do not provide enough diverse data to the tracklet re-ID model. \subsection{Incremental learning tracklet classification} \begin{figure} \begin{center} \includegraphics[scale=0.53]{images/architecture_training.pdf} \end{center} \caption{Architecture of the tracklet classification network. \( R_{img} \) extracts re-ID features \( T^1_t \) from the tracklet images. They are combined by the transformer to generate a single tracklet re-ID vector \( F_t \). The model is trained by ID loss and triplet loss.} \label{fig:archi} \end{figure} The aim of our system is to match tracklets to identities with the fewest possible annotations. This process is done through incremental learning since the user can choose to add more training annotations while the quality of the generated tracklet association is not satisfying. We set the target number of classes \(N_c = 1 + N_p\). The class zero corresponds to all persons we do not want to track (players from the opponent team, referees, public). Our tracklet re-ID model is mainly composed of a single image re-ID network \( R_{img} \) followed by a Transformer \cite{vaswani2017attention} as illustrated on Figure \ref{fig:archi}. For \( R_{img} \), we chose the model described by Luo et al. \cite{luo2019bag} for its simplicity. It uses a ResNet-50 backbone \cite{he2016deep} and has been trained on the generic Market1501 dataset \cite{zheng2015scalable}. It takes as input single images at resolution \(H \times W\) and outputs player appearance features at dimension \(d_{1}\). We regularly sample \(d_{t}\) images from each tracklet and combine their appearance features to obtain the tracklet features tensor \( T^1_t \in \mathbb{R}^{d_{t} \times d_{1}} \). The feature dimension of \(T^1_t\) is then reduced to \(d_{2}\) by a fully connected layer to obtain \(T^2_t\). This limits the dimension of the features inside the next nodes of our model in order to train it quickly. The Transformer in our model then combines the re-ID features of the sampled tracklet images \(T^2_t\) to generate a single tracklet re-ID vector \( F_t \). Its cross-attention nodes can learn to focus on the most distinctive features across the tracklet sampled frames. It takes as input of the encoder \(T^2_t\) and as input of the decoder the \(N_{q}\) queries \(Q_q\). Similarly to DETR \cite{carion2020end}, the queries \(Q_q \in \mathbb{R}^{d_{2}}\) are learned embeddings. Each query learns to specialize on some features of the player identities. However, we do not use any input positional encoding because, since our initial variable length tracklets are resampled to fixed length \(d_{t}\), there are no common temporal link between the features. We found that using 16 encoder layers, one decoder layer and 16 heads in the multi-head attention models was the best set of parameters. At the output of the decoder, a batch norm layer generates the tracklet features \( F_t \). For the classification, a fully connected layer computes the classification scores \(S_t \in \mathbb{R}^{N_{c}} \). Given a tracklet \( t \), the \(N_{qc}\) queries among \(N_{q}\) that gives the highest classification scores are selected for the back-propagation. The optimized loss is defined by \[ L = L_{ID}(S_t, \hat{S_t}) + \alpha L_{Triplet}(D_{t,p}, D_{t,n}), \] where \( L_{ID} \) is the standard cross entropy loss, \( \hat{S_t} \) are the target classification logits, \( L_{Triplet} \) is the soft-margin triplet loss \cite{hermans2017defense}, \( D_{t,p} \) and \( D_{t,n} \) are feature distances of positive pairs and negative pairs and \( \alpha \) is a binary constant. As described by Luo et al. \cite{luo2019bag}, the idea of combining a classification loss and a triplet loss is to let the model learn more discriminative features \( F_t \in \mathbb{R}^{d_2} \). For the triplet loss, we use a batch hard strategy that finds the hardest positive and negative samples. Once the model has been trained, all tracklets are processed by the model at inference stage to compute the tracklet classification scores \( S_t \) and features \( F_t \). \subsection{Association algorithms} With generated scores \(S_t\) and the features \(F_t\), we have the needed data to match tracklets to player identities by using an association algorithm. Two alternative methods are investigated. \subsubsection{Iterative association} An iteration of the association algorithm consists in selecting the highest score in the matrix of all tracklet scores \(S_t\). The highest score represents a matching between the tracklet \(t\) and the identity \(i\). The algorithm then checks that \(t\) can be associated to \(i\) by verifying that the tracklets already associated to \(i\) do not already appear in the frames where \(t\) appears. If the association is possible, \(t\) is added to the list of tracklets associated to \(i\) and a new iteration of the algorithm is run. When the iterative association is used, we set \( \alpha = 0 \) during the incremental learning to only optimize the classification scores \( S_t \). \subsubsection{Matrix factorization association} \label{sec:rnmf} The second algorithm is inspired from \cite{he2020multi}. The authors describe a multi-camera batch people tracking system that assigns tracklets extracted from different views to identities. The input of the algorithm is a tracklet similarity matrix \(S\) generated with appearance, motion and localization criteria. A Restricted Non-negative Matrix Factorization (RNMF) algorithm optimizes the identity assignment. The association matrix \(A \in \mathbb{R}^{N_{t} \times N_{p}} \) is computed thanks to the iterative updating rule given in \cite{ding2008convex}. We applied the RNMF algorithm to our single view case with \(S\) as the sum of an appearance term \(\Psi_{app}\) and a localization term \( \Psi_{loc} \). The similarity between two tracklets \(u\) and \(v\) is computed with: \[S(u,v) = clip(\Psi_{app}(F_u, F_v)) + clip(\Psi_{loc}(B_{ul}, B_{vf})) \] where \( clip(x) = max(min(x; 1); 0) \). \(\Psi_{app} \) is defined by equation \ref{eq:psi_app}. \begin{equation} \label{eq:psi_app} \Psi_{app(F_u, F_v)} = 1 - \frac{1}{\eta_{app}} \cdot d(F_u, F_v) \end{equation} where \(d(F_u, F_v)\) is the cosine distance between the feature vectors of the two tracklets and \(\eta_{app}\) is the cosine distance threshold above which we consider that \(u\) and \(v\) belongs to two distinct identities. \(\Psi_{loc} \) is defined by the equation \ref{eq:psi_loc}. \( t_{ul} \) is the end time of the of the first tracklet and \( t_{vf} \) is the start time of the second tracklet. \(B_{ul}\) and \(B_{vf}\) are the corresponding bounding boxes. \begin{multline} \label{eq:psi_loc} \Psi_{loc}(B_{ul}, B_{vf}) = \\ \begin{cases} (1 + \eta_{loc}) \cdot IoU(B_{ul}, B_{vf}) - \eta_{loc} & \text{if } t_{vf} - t_{ul} \leq \tau \\ 0, & \text{otherwise} \end{cases} \end{multline} where \(\eta_{loc}\) and \( \tau \) are constant numbers. \(\Psi_{loc} \) aims at giving a high similarity scores to two successive tracklets if \(B_{ul}\) and \(B_{vf}\) have a high IoU. When the RNMF association is used, we set \( \alpha = 1 \) during the incremental learning. \section{Experimental Results} \label{sec:results} \subsection{Implementation details} \begin{figure} \begin{center} \includegraphics[scale=0.225]{images/interface_capture.png} \end{center} \caption{Partial screen capture of our semi-interactive annotation interface. Each cell corresponds to one tracklet. Each column corresponds to one player identity, except the zero column that contains all the persons we do not want to track.} \label{fig:interface_capture} \end{figure} Our system is implemented using the Pytorch framework. The minimum number of frames of a tracklet \( l_{min} \) is set to 10. All the tracklets are resampled to \(d_t = 10\). Our re-ID network \cite{luo2019bag} takes as input images of resolution \(H=256\) and \(W=128\). It outputs features at dimension \(d_{1} = 2048\). Our Transformer network takes input features at \(d_{2} = 128\). The number of input queries \(N_{q}\) is set to 32. They are randomly initialized. The number of queries selected for backpropagation \(N_{qc}\) is set to 4. It is trained during 120 epochs with an AdamW optimizer, a learning rate of \(9 \times 10^{-5}\), a weight decay of \(10^{-4}\) and a batch size of 4. The transformer parameters are initialized with Xavier initialization \cite{glorot2010understanding}. For the linear layer, He initialization \cite{he2015delving} is used. \(\eta_{app}\) and \(\eta_{loc}\) are experimentally set to 0.35 and 0.43. The time threshold \(\tau\) for the localization similarity is set to 0.5 seconds. Our semi-interactive annotation interface, illustrated on Figure \ref{fig:interface_capture}, can run on a laptop GPU (Quadro M2000M). It allows the annotator to generate training data for our model by indicating to which player belongs a tracklet. The training time represents about 0.8 second per annotation when \(R_{img}\) is frozen and the iterative association is used. \subsection{Player tracking on rugby sevens samples} \label{tracking} \begin{figure*} \newcommand{0.344}{0.344} \begin{center} \setlength\tabcolsep{0pt} \begin{tabular}{cc} \includegraphics[scale=0.344]{images/france_argentina_fr.pdf} & \includegraphics[scale=0.344]{images/france_argentina_arg.pdf} \\ \includegraphics[scale=0.344]{images/france_chile_fr.pdf} & \includegraphics[scale=0.344]{images/france_chile_chi.pdf} \\ \includegraphics[scale=0.344]{images/france_kenya_fr.pdf} & \includegraphics[scale=0.344]{images/france_kenya_ken.pdf} \\ \end{tabular} \end{center} \caption{MOT metrics for the tracking of rugby sevens players in 3 videos. The x-axis corresponds to the total number of annotations divided by the number of tracked players. The variation intervals for the 5 seeds and average values are represented. The tested variants are: \(R_{img}\) frozen with the iterative association (\textcolor[RGB]{31,119,180}{\textbf{---}}), \(R_{img}\) frozen with the RNMF association (\textcolor[RGB]{255,127,14}{\textbf{---}}), \(R_{img}\) trained with the iterative association (\textcolor[RGB]{44,160,44}{\textbf{---}}), \(R_{img}\) trained with the RNMF association (\textcolor[RGB]{214,39,40}{\textbf{---}}) and the ground truth association (\textcolor[RGB]{148,103,189}{\textbf{- - -}}). } \label{fig:graph} \end{figure*} \subsubsection{Dataset} Rugby sevens is a variant of rugby where two teams of seven players play a game composed of two seven minute halves. It is an Olympic sport since 2016. We annotated a total of 58193 person bounding boxes in the images of three rugby sevens samples of 40 seconds to use them as ground truth for players of both teams, the referees and some people in the public. These samples come from the Argentina / France, France / Chile and France / Kenya games of the 2021 Dubai tournament. They are encoded at a resolution of 1920 by 1080 pixels and a frame rate of 50 frames per seconds. The aim of our experiments is to track players from one of the two teams taking part to the game. Tracklets were extracted with the method detailed in section \ref{sec:tracklet_generation}. About 30\% of the tracklets have a number of frames superior to \(l_{min} = 10\). This represent an average of 346 tracklets per video of 40 seconds. These tracklets have an average length of one second and correspond to about 89\% of the detected bounding boxes. We publicly release the tracking ground truth and the generated tracklets at \url{https://kalisteo.cea.fr/index.php/free-resources/}. \subsubsection{Quantitative results and ablation studies} \label{sec:ablation} The annotator selects a number of tracklet examples for each player appearing in the sequence and also for the class 0 (opponent team, referees, public). At each round of annotations, a new user annotation for each player and two user annotations for the class 0 are added on average. As the training of our system is quick, the user can observe the consequences of the added annotations on the classification results and correct the big mistakes for the next round of annotations (for example false positives with high scores). Once the user annotations have been added, we train the network with the same user annotations and 5 different seeds. We then compute standard MOT metrics \cite{ristani2016performance}: IDF1, MOTA and ID switches. Since our main objective is to correctly identify each player, the IDF1 metric is the most important to observe. MOTA is however key to report the completeness of the tracking bounding boxes for each player. Figure \ref{fig:graph} shows the results of our method obtained with four variants. Results are analyzed according to two conditions: \(R_{img}\) frozen or trained and with the iterative association algorithm or with the RNMF algorithm. As small tracklets are filtered, our method cannot achieve 100\% performance. In order to estimate the upper performance limit, we associate each tracklet to the ground truth. However, since our generated tracklets are not perfect, their association to the ground truth may also be ambiguous, which explains the not null ID switch limits. \textbf{Number of annotations}. For the three video extracts, the more user annotations are provided, the best the MOT metrics are. However, we can observe that above the third round of annotations (about 3.5 annotations per player), the metrics only slightly improve and sometimes slightly deteriorate. This performance threshold can be explained by the difficult tracking conditions of some instants: the players are sometimes highly occluded or very small, there are very few details to identify them and the detection is difficult on complex postures. Some errors are illustrated on Figure \ref{fig:errors}. From the first to the third round of annotations, with \(R_{img}\) frozen and the iterative association algorithm, IDF1 and MOTA metrics increases on average respectively by 11 and 9 p.p. (percentage points) while the ID switches is divided by 5. \begin{figure} \begin{center} \includegraphics{images/errors.pdf} \end{center} \caption{Illustration of complex situations that lead to missing detections and identification of players (here the players in blue).} \label{fig:errors} \end{figure} \textbf{Association algorithm choice}. The global RNMF optimization matches an identity to each tracklet but sometimes generates conflicts and wrong associations. This leads to better MOTA metrics as more detections are kept than with the simple iterative algorithm. For the third round of annotations, the MOTA metric is increased by 12 p.p. on average when \(R_{img}\) is frozen. However, the IDF1 metric is decreased by 1 p.p. and the number of ID switches increases by 25. The iterative association should therefore be prefered to minimize wrong identity associations. The RNMF algorithm however leads to a more complete tracking. \textbf{Training strategy}. Our experiments demonstrate that, even if \(R_{img}\) is not fine-tuned with data from the target domain (\(R_{img}\) frozen), it is still able, thanks to the Transformer network, to generate relevant features to re-identify the players. For the third round of annotations, the IDF1 and MOTA metrics are respectively on average 75\% and 66\% with the iterative association algorithm. The best results are however obtained when weights of \(R_{img}\) are also updated during training. For the third round of annotations, with the iterative association algorithm, the IDF1 and MOTA metrics are increased respectively by 3 and 2 p.p. The number of ID switches is reduced on average by 3. When the weights of \(R_{img}\) are updated, the training time for the 120 epochs significantly increases (from 28 seconds to 25 minutes for 32 annotations) and the system is no longer interactive. Indeed, the number of trainable parameters raises from about 4 to 25 millions. So, the optimal usage is to create the user annotations with \(R_{img}\) frozen and once the user is satisfied with the results, restart the training with the same annotations and \(R_{img}\) updated to obtain even better results. \subsubsection{Comparison with state of the art multiple person tracking methods} \label{comparison_generic_tracking} \begin{table} \begin{center} \footnotesize{ \begin{tabular}{|c|c|c|c|c|c|} \hline Video & Method & IDF1 & IDs & MOTA \\ \hhline{=====} & ByteTrack \cite{zhang2021bytetrack} & 48.8 & 26 & 49.4 \\ Argentina & TWBW \cite{bergmann2019tracking} & 24.4 & 64 & 40.8 \\ / France & MOT neur. solv. \cite{braso2020learning} & 34.0 & 54 & 33.9 \\ & Ours & \textbf{76.8} & \textbf{17} & \textbf{64.6} \\ \hline & ByteTrack \cite{zhang2021bytetrack} & 54.9 & 23 & 64.4 \\ France & TWBW \cite{bergmann2019tracking} & 22.7 & 74 & 28.4 \\ / Chile & MOT neur. solv. \cite{braso2020learning} & 29.6 & 53 & 40.2 \\ & Ours & \textbf{84.3} & \textbf{21} & \textbf{75.4} \\ \hline & ByteTrack \cite{zhang2021bytetrack} & 60.3 & 14 & 64.0 \\ France & TWBW \cite{bergmann2019tracking} & 30.6 & 44 & 45.0 \\ / Kenya & MOT neur. solv. \cite{braso2020learning} & 48.0 & 26 & 61.1 \\ & Ours & \textbf{82.2} & \textbf{7} & \textbf{70.1} \\ \hline \end{tabular} } \end{center} \caption{MOT metrics for the tracking of the rugby sevens French team players.} \label{table:track_table} \end{table} Generic tracking algorithms track all the persons appearing in the video frames. This would include in our case, players of both teams, the referees and the public. Our approach however track players from a single team. This makes the comparison not straightforward. Some approaches have been proposed for the tracking of team sport players with single moving views \cite{lu2013learning, hurault2020self} but the comparison is still not easy since their evaluation datasets are private. We therefore decided to run generic tracking algorithms on our rugby sevens extracts. In order to make a fair comparison with our approach, we manually selected the tracks generated by these algorithms that are associated, even partially, with players from the French team. We tested two online methods, TWBW tracker \cite{bergmann2019tracking} and ByteTrack \cite{zhang2021bytetrack}, with their detections. ByteTrack achieves a very high performance on the MOT 2017 challenge \cite{milan2016mot}. We also tested an offline method, the MOT neural solver \cite{braso2020learning}, with our detections. The results are presented in Table \ref{table:track_table}. With the limitations mentioned above, the metrics shows significantly lower performances for generic trackers. This is probably due to difficulties to handle correctly the occlusions and the players entering or leaving the view field. It therefore justifies our usage of a closed identity gallery with few annotations to learn the player appearances. Compared to ByteTrack \cite{zhang2021bytetrack}, the IDF1 metric is increased on average by 26 p.p. \subsection{Evaluation of player identification on a full rugby sevens game} Our system aims to track and identify players on a full game. Yet, a human-annotated tracking ground truth for a full game would be costly to generate. We therefore decided to evaluate the detection and re-ID performance of our approach on 32 frames regularly sampled in the France / Kenya game and focus on the French players. With players changes, 12 French players in total participated to this game. The ground truth represents 128 players bounding boxes. For each experiment, we trained the model with 5 different seeds using the same 70 annotations (about 6 per player). Results are shown in Table \ref{table:classification_table}. The best total detection and identification performance (53.6\%) is obtained when the \(R_{img}\) is trained and the RNMF association algorithm is used. A significant number of French players are not detected or correctly identified. This happens when the players on the back are only visible on few pixels or when some players occlude others. Nevertheless, the total recall goes up to 67.9\% for the bounding boxes with an area superior to the average area of all the ground truth bounding boxes (25214 pixels). This demonstrates that when the players are sufficiently visible, our system is able to track them during a full match with few annotations. \begin{table} \begin{center} \footnotesize{ \begin{tabular}{|c|c||c|c|c||c|} \hline \multirow{2}{*}{\(R_{img}\)} & \multirow{2}{*}{assoc.} & Det. & Team class. & Id. class. & Total \\ & & recall & recall & recall & recall \\ \hline \multicolumn{6}{l}{All detected bounding boxes} \\ \hline frozen & iter. & \multirow{4}{*}{75.8} & 58.4\(\pm\)2.1 & 73.8\(\pm\)4.5 & 32.7\(\pm\)2.4 \\ \cline{1-2} \cline{4-6} frozen & RNMF & & 74.6\(\pm\)2.5 & 60.9\(\pm\)6.5 & 34.5\(\pm\)4.6 \\ \cline{1-2} \cline{4-6} trained & iter. & & 75.9\(\pm\)3.9 & \textbf{84.0\(\pm\)3.4} & 48.3\(\pm\)3.0 \\ \cline{1-2} \cline{4-6} trained & RNMF & & \textbf{89.1\(\pm\)2.0} & 79.4\(\pm\)2.6 & \textbf{53.6\(\pm\)1.8} \\ \hline \multicolumn{6}{l}{Big detected bounding boxes (area superior to 25214 pixels)} \\ \hline frozen & iter. & \multirow{4}{*}{89.7} & 60.8\(\pm\)2.2 & 77.3\(\pm\)6.8 & 42.1\(\pm\)3.1 \\ \cline{1-2} \cline{4-6} frozen & RNMF & & 72.3\(\pm\)2.2 & 66.4\(\pm\)5.0 & 43.1\(\pm\)4.4 \\ \cline{1-2} \cline{4-6} trained & iter. & & 76.2\(\pm\)3.5 & \textbf{87.4\(\pm\)5.2} & 59.7\(\pm\)4.2 \\ \cline{1-2} \cline{4-6} trained & RNMF & & \textbf{90.8\(\pm\)0.9} & 83.5\(\pm\)3.4 & \textbf{67.9\(\pm\)2.6} \\ \hline \end{tabular} } \end{center} \caption{French player detection and classification results on 32 frames of the France / Kenya game for 5 different seeds. Average values and standard deviations are provided. The detection recall corresponds to the number of players detected. The team classification recall corresponds to the number of players classified as French among the detected players. The identity classification recall corresponds to the number of correctly identified players among the players classified as French. The total recall is the product of all the previous columns and represents the complete performance of our system.} \label{table:classification_table} \end{table} \section{Conclusion} In this paper, we proposed a new method to track team sport players with few user annotations. We demonstrated the performance of our approach on a rugby sevens dataset that we publicly release. We also showed that our method can track rugby sevens players during a full match with the annotation of only 6 few seconds length tracklets per player if they are observable with a minimal resolution. To our knowledge, no previous work on tracking of rugby players has been published. As future work, we would like to improve the detection of small and partially occluded players. Since our approach can be applied to any team sport, we would like to test it on other sports such as basketball. We also believe that the user annotation step would be sped up if an active learning process could smartly suggests tracklets to annotate. \section{Acknowledgments} This work benefited from a government grant managed by the French National Research Agency under the future investment program (ANR-19-STHP-0006) and the FactoryIA supercomputer financially supported by the Ile-de-France Regional Council. The videos of our Rugby Sevens dataset are the courtesy of World Rugby. We would also like to thank Jérôme Daret, Jean-Baptiste Pascal and Julien Piscione from the French Rugby Federation for making this work possible. {\small \bibliographystyle{ieee_fullname} \section{Introduction} Player tracking in team sports consists in detecting and identifying the players in video sequences. It is a necessary task to automate the generation of individual statistics such as ball possession, field position or involvement in play sequences. Player tracking in team sports such as rugby is however a challenging task. Rugby is a sport of physical contact where player occlusions are very frequent on camera during rucks, tackles and scrums. The players can also adopt a wide range of body postures from sprinting to laying on the ground in a foetal position. Players from the same team share a very similar appearance since they wear the same jerseys. Moreover, the number of pixels in which the players are visible is often limited in the case of a TV stream (sometimes with a height below 150 pixels). This prevents the access to fine identification details. \begin{figure} \begin{center} \includegraphics{images/captures.pdf} \end{center} \caption{Tracking French players (blue jerseys) in our rugby sevens dataset: a. France / Kenya extract, b. Argentina / France extract, c. France / Chile extract.} \label{fig:captures} \end{figure} Player tracking is a specific Multi-Object Tracking (MOT) problem. MOT has been widely studied in the literature. Security applications have lead to the development of many people tracking approaches. Offline methods use all the frames of the input video sequence to optimize the generation of tracks while online methods target real-time applications by relying only on the current and previous frames to generate the tracks. The most recent frameworks achieve the best performance using deep neural network architectures. The availability of large public datasets and challenges such as the MOT challenge \cite{milan2016mot} allows to fairly train and compare the various approaches. Some recent people tracking methods have also been proposed for the specific context of team sports: soccer \cite{zhang2020multi, hurault2020self}, basketball \cite{lu2013learning} and hockey \cite{vats2021player}. These methods often use private datasets specific to their studied sport to get competitive results evaluated on short video clips extracted from a match. Game-specific annotations are required to train a player tracking and identification system to adapt to the player identities and the context of a game. The number of such annotations is an important factor that will determine the success of using such a system in real world scenario. Little focus was made in previous work on the practicability of this annotation process. Consequently, we propose an incremental learning approach to identify players with very few game-specific annotations. Our method is offline: it tracks and identifies players once the game has been completed. It benefits from the closed gallery re-identification (re-ID) hypothesis as, contrary to video surveillance, the number of players is known and limited. Since our method does not use any sport-specific knowledge, it can be applied to any team sport. Our annotation process consists in several steps. Bounding boxes around all the persons in the frames are first extracted from the input video to generate non-ambiguous tracklets. A tracklet is the uninterrupted sequence of the bounding box images of a single player. Tracklets can have a variable length since a player can enter or leave the camera field of view or be occluded by an other player. At this stage, the user provides few annotations per player to train the tracklet re-ID network. Finally, the obtained tracklet classification scores or appearance features feed an algorithm that look for an optimal association between tracklets and identities. The contributions of this paper are the following: We tackle the sparsity of training data in team sport contexts by leveraging generic detection and re-ID datasets. The detection network is only learned on a public dataset. The re-ID network is pretrained on a video surveillance public dataset. We propose a new architecture based on a Transformer network \cite{vaswani2017attention} to classify and generate tracklet appearance features. An incremental learning mechanism using few user interactions trains this model and strengthen the re-ID performances throughout the whole annotation process. Some datasets have been proposed for basketball \cite{delannay2009detection} and soccer \cite{dorazio2009semi} player tracking with multiple static cameras. However, although Deliege et al. \cite{deliege2021soccernet} are extending their SoccerNet dataset to tracking and re-ID, no dataset with a moving point of view has been made available. We publicly release our rugby sevens tracking dataset composed of single-view videos that can pan, tilt or zoom to follow the action. It is one of the most challenging team sport for tracking on which no approach was tested. We demonstrate the efficiency of our approach on our dataset. On a full game, it can achieve up to 67.9\% detection and identity classification recall when the players are sufficiently visible with only 6 annotations per player. The paper is organized as follows: Section~\ref{sec:sota} introduces Related Work. Our method is described in Section~\ref{sec:method}. Finally, Section~\ref{sec:results} provides our results on our challenging rugby sevens dataset, compares them to state-of-the-art methods and analyzes them in an ablative study. \section{Related Work} \label{sec:sota} \subsection{Multiple people tracking} Two categories of MOT algorithms can be distinguished. \textbf{Offline methods} leverage the full sequence of images to globally optimize the generated tracks with a graph paradigm. The vertices are the detections on each frame and the edges are the connections between detections that form tracks. Thus, Zhang et al. \cite{zhang2008} uses a minimum cost flow iterative algorithm that models long term occlusions. The approach described by Berclaz et al. \cite{berclaz2011} takes only an occupancy map of detection as input and applies a k-shortest path algorithm on the flows. More recently, Brasó and Leal-Taix \cite{braso2020learning} proposed a fully differentiable network that learns both the appearance and geometrical feature extraction as well as the detection association to generate tracks. Hornakova et al. \cite{hornakova2020lifted} use lifted edges to model long term interactions and generate the optimized solution with a linear programming relaxation. \textbf{Online methods} only use the current and past frames to associate new detections with tracks. They have raised more interest in the literature as they fit real time scenario. Thus, the SORT algorithm \cite{bewley2016} uses a Faster R-CNN \cite{ren2015} person detector. Then, a Kalman filter \cite{kalman1960new} predicts the future positions of each track. The Intersection-Over-Union (IOU) between these predictions and the detected bounding boxes are used as inputs of an Hungarian algorithm that matches the detection with the tracks. ByteTrack \cite{zhang2021bytetrack} achieves state of the art tracking performance with a two steps association algorithm: the first step focuses on high confidence detections while the second step deals with the low confidence ones. The Deep SORT algorithm \cite{wojke2017simple} adds a re-ID network to extract the visual appearance of each person. The input data of the Hungarian algorithm becomes a combination of a Manaholis distance as the spatial term and a cosine distance between the re-ID vectors as the appearance term. Using distinct networks for detection and re-ID has the advantage of separating the two tasks that may have opposite objectives. The detection task aims at learning common features to recognize humans while the re-ID task aims at learning distinctive features of each individual. However this may cause scalability issues as each detected bounding box must be independently processed by the re-ID network. Single-shot methods were therefore proposed to generate the bounding boxes coordinates and re-ID vectors with a single network. Thus, Track-RCNN \cite{voigtlaender2019} uses a common backbone with specific heads for each task. FairMOT \cite{zhang2020fairmot} achieves better tracking performance by focusing only on the detection and re-ID tasks. Meinhard et al. \cite{meinhardt2021trackformer} use a Transformer architecture. Applying traditional MOT to team sport players usually leads to many ID switches. Each time a player leaves the vision field or is occluded too much time, a new identity is generated at reappearance. This prevents the reliable generation of individual statistics (see section \ref{comparison_generic_tracking}). \subsection{Multiple team sport player tracking and re-identification} \subsubsection{Tracking} Some tracking methods have been proposed for the context of team sports. For soccer, many approaches performed tracking by first extracting the field regions \cite{manafifard2017survey, khatoonabadi2009automatic, baysal2015sentioscope, liu2009automatic, d2009investigation, xing2010multiple}. In the method of Liu et al. \cite{liu2009automatic}, an unsupervised clustering algorithm classifies the players among four classes (two teams, referee or outlier). The tracking is formulated as a Markov chain Monte Carlo data association. D'Orazio et al. \cite{d2009investigation} classify each player with an unsupervised clustering algorithm. The tracking takes as input geometrical and motion information. It is based on a set of logical rules with a merge split strategy. In Xing et al. \cite{xing2010multiple}, the observation model of each player is composed of the color histogram in the cloth regions, the size and the motion. The tracking is formulated as particle filtering. Theagarajan and Bhanu \cite{theagarajan2020automated} used a YOLOv2 \cite{redmon2016you} network detector and a DeepSORT tracker \cite{wojke2017simple} to identify the player controlling the ball. All the previous approaches do not build individual appearance signatures per player identities. If a player leaves the camera field of view and re-enter later, he/she will be considered as a new person. This prevents the generation of individual statistics. \subsubsection{Re-identification} Jersey number recognition has been studied in the literature to identify team sport players. Ye et al. \cite{ye2005jersey} developed a method based on Zernike moments features \cite{khotanzad1990invariant}. Gerke et al. \cite{gerke2015soccer} were the first to use a convolutional neural network to classify jersey numbers from bounding box images of players. It was later combined to spatial constellation features to identify soccer players \cite{gerke2017soccer}. To ease the recognition of distorted jersey numbers, Li et al. \cite{li2018jersey} trained a branch of their network to correct the jersey number deformation before the classification. Liu and Bhanu \cite{liu2019pose} enabled jersey number recognition only in the relevant zones by detecting body keypoints. For hockey player identification, Chan et al. \cite{chan2021player} used a ResNet + LSTM network \cite{he2016deep, hochreiter1997long} on tracklet images to extract jersey numbers. When a single view is available, as in our rugby sevens dataset, jersey numbers are often not visible, partially visible or distorted. Besides, to our knowledge, there is no publicly available training dataset for team sport jersey number recognition. A solution can therefore be to use appearances to re-identify players. Teket and Yetik \cite{teket2020fast} proposed a framework to identify the player responsible for a basketball shot. Their re-ID network, based on MobileNetV2 \cite{sandler2018mobilenetv2}, is trained with a triplet loss formulation. The framework described by Senocak et al. \cite{senocak2018part} combines part-based features and multiscale global features to generate basketball player signatures. Both approaches are based, as ours, on the hypothesis of a closed gallery however they use a private dataset to train their model which makes comparisons impossible. \subsubsection{Tracking with re-identification} Several methods tracks players by using re-ID features \cite{lu2013learning, zhang2020multi, yang2021multi, hurault2020self, vats2021player}. Lu et al. \cite{lu2013learning} use DPM \cite{felzenszwalb2008discriminatively} to detect basketball players. Local features and RGB color histograms are extracted on players for the re-ID. Zhang et al. \cite{zhang2020multi} proposed a multi-camera tracker that locates basketball players on a grid based on a K-shortest paths algorithm \cite{berclaz2011}. Players are detected and segmented with a network based on Mask R-CNN \cite{he2017mask}. Re-ID features are computed thanks to the team classification, jersey number recognition and a pose-guided feature embedding. To track soccer players, Yang et al. \cite{yang2021multi} iteratively reduced the location and identification errors generated by the previous approach by creating a bayesian model that is optimized to best fit input pixel level segmentation and identification. Hurault et al. \cite{hurault2020self} use a single network with a Faster R-CNN backbone \cite{ren2015} to detect small soccer players and extract re-ID features. Kong et al. \cite{kong2021online} mix player appearance, posture and motion criteria to match new detections with existing tracks. Vats et al. \cite{vats2021player} use a Faster R-CNN network \cite{ren2015} to detect hockey players and a batch method for tracking \cite{braso2020learning}. Specific ResNet-18 networks \cite{he2016deep} are used to identify the player teams and jersey numbers. Most of the approaches presented here \cite{lu2013learning, zhang2020multi, yang2021multi, vats2021player} train their re-ID or jersey number recognition model with a private dataset. \subsubsection{Minimizing the number of annotations} To our knowledge, few previous work focus on the minimization of the game-specific training annotations for re-ID. For example, Lu et al. \cite{lu2013learning} used a mere 200 labels for every players in a team with their semi-supervised approach. Senocak et al. \cite{senocak2018part} use 2500 cropped images for each player to train their re-ID network. Teket and Yetik \cite{teket2020fast} use a training dataset that contains 30 to 1000 images per player. In this paper, by asking the user to annotate tracklets, we aim to demonstrate that it is possible to produce meaningful player re-ID results for a rugby sevens full game with only 6 annotations per player. \section{Proposed method} \label{sec:method} \subsection{Overview} \begin{figure} \begin{center} \includegraphics{images/process.pdf} \end{center} \caption{Incremental learning of tracklet classification. The user provides annotations to train the model to correctly classify the tracklets to a player identity.} \label{fig:process} \end{figure} We propose a new method to track the \(N_p\) players of a team in a video with a single moving view of a game. The first step of our method generates \( N_{t} \) tracklets we qualify as non-ambiguous because they contain a single identity. For this purpose, bounding boxes around persons are detected and associated across frames automatically. The user can then provide few identity annotations to some of the generated tracklets thanks to a dedicated interface show on Figure \ref{fig:interface_capture}. The tracklet re-ID network can then be trained with these annotations. Once the model is trained, classification scores and re-ID features are generated for all the tracklets. This data feeds an algorithm that matches every tracklet to an identity. Once the annotation interface has been updated, the user can then decide to add more annotations to correct the wrong classifications or to stop this incremental learning mechanism if she/he is satisfied by the results. The whole process is depicted on Figure \ref{fig:process}. \subsection{Tracklet generation} \label{sec:tracklet_generation} Non-ambiguous tracklets are generated with a tracking by detection paradigm. A Faster R-CNN network \cite{ren2015} with a ResNet-50 backbone \cite{he2016deep} trained on the COCO dataset \cite{lin2014microsoft} detects all the persons in the video frames. This detector is a well-known model used in several recent work \cite{hurault2020self, vats2021player}. To generate the tracklets, we use the simple and classic approach described in \cite{bewley2016}. Bounding boxes between the previous and the current frames are associated by bipartite matching with an Hungarian algorithm \cite{kuhn1955hungarian}. This matching is performed with a single IoU criteria since the player appearances are later taken into account by our tracklet re-ID model. We also use a Kalman filter \cite{kalman1960new} to predict the position of an existing track in the current frame. Each generated tracklet will be later matched to a single identity. We therefore want to avoid as much as possible identity switches inside tracklets. When a tracklet partially occludes an other one, bipartite matching may generate a wrong association. Our algorithm therefore splits the tracklets that intersect since they are considered as ambiguous. If at the current frame, two tracklet bounding boxes have an IoU above a threshold \(\mu = 0.5\) these tracklets are terminated and new ones are created. We also filter out tracklets that have a length inferior to \( l_{min} \). We indeed consider that they may also be ambiguous by containing several identities in their images. Besides, they do not provide enough diverse data to the tracklet re-ID model. \subsection{Incremental learning tracklet classification} \begin{figure} \begin{center} \includegraphics[scale=0.53]{images/architecture_training.pdf} \end{center} \caption{Architecture of the tracklet classification network. \( R_{img} \) extracts re-ID features \( T^1_t \) from the tracklet images. They are combined by the transformer to generate a single tracklet re-ID vector \( F_t \). The model is trained by ID loss and triplet loss.} \label{fig:archi} \end{figure} The aim of our system is to match tracklets to identities with the fewest possible annotations. This process is done through incremental learning since the user can choose to add more training annotations while the quality of the generated tracklet association is not satisfying. We set the target number of classes \(N_c = 1 + N_p\). The class zero corresponds to all persons we do not want to track (players from the opponent team, referees, public). Our tracklet re-ID model is mainly composed of a single image re-ID network \( R_{img} \) followed by a Transformer \cite{vaswani2017attention} as illustrated on Figure \ref{fig:archi}. For \( R_{img} \), we chose the model described by Luo et al. \cite{luo2019bag} for its simplicity. It uses a ResNet-50 backbone \cite{he2016deep} and has been trained on the generic Market1501 dataset \cite{zheng2015scalable}. It takes as input single images at resolution \(H \times W\) and outputs player appearance features at dimension \(d_{1}\). We regularly sample \(d_{t}\) images from each tracklet and combine their appearance features to obtain the tracklet features tensor \( T^1_t \in \mathbb{R}^{d_{t} \times d_{1}} \). The feature dimension of \(T^1_t\) is then reduced to \(d_{2}\) by a fully connected layer to obtain \(T^2_t\). This limits the dimension of the features inside the next nodes of our model in order to train it quickly. The Transformer in our model then combines the re-ID features of the sampled tracklet images \(T^2_t\) to generate a single tracklet re-ID vector \( F_t \). Its cross-attention nodes can learn to focus on the most distinctive features across the tracklet sampled frames. It takes as input of the encoder \(T^2_t\) and as input of the decoder the \(N_{q}\) queries \(Q_q\). Similarly to DETR \cite{carion2020end}, the queries \(Q_q \in \mathbb{R}^{d_{2}}\) are learned embeddings. Each query learns to specialize on some features of the player identities. However, we do not use any input positional encoding because, since our initial variable length tracklets are resampled to fixed length \(d_{t}\), there are no common temporal link between the features. We found that using 16 encoder layers, one decoder layer and 16 heads in the multi-head attention models was the best set of parameters. At the output of the decoder, a batch norm layer generates the tracklet features \( F_t \). For the classification, a fully connected layer computes the classification scores \(S_t \in \mathbb{R}^{N_{c}} \). Given a tracklet \( t \), the \(N_{qc}\) queries among \(N_{q}\) that gives the highest classification scores are selected for the back-propagation. The optimized loss is defined by \[ L = L_{ID}(S_t, \hat{S_t}) + \alpha L_{Triplet}(D_{t,p}, D_{t,n}), \] where \( L_{ID} \) is the standard cross entropy loss, \( \hat{S_t} \) are the target classification logits, \( L_{Triplet} \) is the soft-margin triplet loss \cite{hermans2017defense}, \( D_{t,p} \) and \( D_{t,n} \) are feature distances of positive pairs and negative pairs and \( \alpha \) is a binary constant. As described by Luo et al. \cite{luo2019bag}, the idea of combining a classification loss and a triplet loss is to let the model learn more discriminative features \( F_t \in \mathbb{R}^{d_2} \). For the triplet loss, we use a batch hard strategy that finds the hardest positive and negative samples. Once the model has been trained, all tracklets are processed by the model at inference stage to compute the tracklet classification scores \( S_t \) and features \( F_t \). \subsection{Association algorithms} With generated scores \(S_t\) and the features \(F_t\), we have the needed data to match tracklets to player identities by using an association algorithm. Two alternative methods are investigated. \subsubsection{Iterative association} An iteration of the association algorithm consists in selecting the highest score in the matrix of all tracklet scores \(S_t\). The highest score represents a matching between the tracklet \(t\) and the identity \(i\). The algorithm then checks that \(t\) can be associated to \(i\) by verifying that the tracklets already associated to \(i\) do not already appear in the frames where \(t\) appears. If the association is possible, \(t\) is added to the list of tracklets associated to \(i\) and a new iteration of the algorithm is run. When the iterative association is used, we set \( \alpha = 0 \) during the incremental learning to only optimize the classification scores \( S_t \). \subsubsection{Matrix factorization association} \label{sec:rnmf} The second algorithm is inspired from \cite{he2020multi}. The authors describe a multi-camera batch people tracking system that assigns tracklets extracted from different views to identities. The input of the algorithm is a tracklet similarity matrix \(S\) generated with appearance, motion and localization criteria. A Restricted Non-negative Matrix Factorization (RNMF) algorithm optimizes the identity assignment. The association matrix \(A \in \mathbb{R}^{N_{t} \times N_{p}} \) is computed thanks to the iterative updating rule given in \cite{ding2008convex}. We applied the RNMF algorithm to our single view case with \(S\) as the sum of an appearance term \(\Psi_{app}\) and a localization term \( \Psi_{loc} \). The similarity between two tracklets \(u\) and \(v\) is computed with: \[S(u,v) = clip(\Psi_{app}(F_u, F_v)) + clip(\Psi_{loc}(B_{ul}, B_{vf})) \] where \( clip(x) = max(min(x; 1); 0) \). \(\Psi_{app} \) is defined by equation \ref{eq:psi_app}. \begin{equation} \label{eq:psi_app} \Psi_{app(F_u, F_v)} = 1 - \frac{1}{\eta_{app}} \cdot d(F_u, F_v) \end{equation} where \(d(F_u, F_v)\) is the cosine distance between the feature vectors of the two tracklets and \(\eta_{app}\) is the cosine distance threshold above which we consider that \(u\) and \(v\) belongs to two distinct identities. \(\Psi_{loc} \) is defined by the equation \ref{eq:psi_loc}. \( t_{ul} \) is the end time of the of the first tracklet and \( t_{vf} \) is the start time of the second tracklet. \(B_{ul}\) and \(B_{vf}\) are the corresponding bounding boxes. \begin{multline} \label{eq:psi_loc} \Psi_{loc}(B_{ul}, B_{vf}) = \\ \begin{cases} (1 + \eta_{loc}) \cdot IoU(B_{ul}, B_{vf}) - \eta_{loc} & \text{if } t_{vf} - t_{ul} \leq \tau \\ 0, & \text{otherwise} \end{cases} \end{multline} where \(\eta_{loc}\) and \( \tau \) are constant numbers. \(\Psi_{loc} \) aims at giving a high similarity scores to two successive tracklets if \(B_{ul}\) and \(B_{vf}\) have a high IoU. When the RNMF association is used, we set \( \alpha = 1 \) during the incremental learning. \section{Experimental Results} \label{sec:results} \subsection{Implementation details} \begin{figure} \begin{center} \includegraphics[scale=0.225]{images/interface_capture.png} \end{center} \caption{Partial screen capture of our semi-interactive annotation interface. Each cell corresponds to one tracklet. Each column corresponds to one player identity, except the zero column that contains all the persons we do not want to track.} \label{fig:interface_capture} \end{figure} Our system is implemented using the Pytorch framework. The minimum number of frames of a tracklet \( l_{min} \) is set to 10. All the tracklets are resampled to \(d_t = 10\). Our re-ID network \cite{luo2019bag} takes as input images of resolution \(H=256\) and \(W=128\). It outputs features at dimension \(d_{1} = 2048\). Our Transformer network takes input features at \(d_{2} = 128\). The number of input queries \(N_{q}\) is set to 32. They are randomly initialized. The number of queries selected for backpropagation \(N_{qc}\) is set to 4. It is trained during 120 epochs with an AdamW optimizer, a learning rate of \(9 \times 10^{-5}\), a weight decay of \(10^{-4}\) and a batch size of 4. The transformer parameters are initialized with Xavier initialization \cite{glorot2010understanding}. For the linear layer, He initialization \cite{he2015delving} is used. \(\eta_{app}\) and \(\eta_{loc}\) are experimentally set to 0.35 and 0.43. The time threshold \(\tau\) for the localization similarity is set to 0.5 seconds. Our semi-interactive annotation interface, illustrated on Figure \ref{fig:interface_capture}, can run on a laptop GPU (Quadro M2000M). It allows the annotator to generate training data for our model by indicating to which player belongs a tracklet. The training time represents about 0.8 second per annotation when \(R_{img}\) is frozen and the iterative association is used. \subsection{Player tracking on rugby sevens samples} \label{tracking} \begin{figure*} \newcommand{0.344}{0.344} \begin{center} \setlength\tabcolsep{0pt} \begin{tabular}{cc} \includegraphics[scale=0.344]{images/france_argentina_fr.pdf} & \includegraphics[scale=0.344]{images/france_argentina_arg.pdf} \\ \includegraphics[scale=0.344]{images/france_chile_fr.pdf} & \includegraphics[scale=0.344]{images/france_chile_chi.pdf} \\ \includegraphics[scale=0.344]{images/france_kenya_fr.pdf} & \includegraphics[scale=0.344]{images/france_kenya_ken.pdf} \\ \end{tabular} \end{center} \caption{MOT metrics for the tracking of rugby sevens players in 3 videos. The x-axis corresponds to the total number of annotations divided by the number of tracked players. The variation intervals for the 5 seeds and average values are represented. The tested variants are: \(R_{img}\) frozen with the iterative association (\textcolor[RGB]{31,119,180}{\textbf{---}}), \(R_{img}\) frozen with the RNMF association (\textcolor[RGB]{255,127,14}{\textbf{---}}), \(R_{img}\) trained with the iterative association (\textcolor[RGB]{44,160,44}{\textbf{---}}), \(R_{img}\) trained with the RNMF association (\textcolor[RGB]{214,39,40}{\textbf{---}}) and the ground truth association (\textcolor[RGB]{148,103,189}{\textbf{- - -}}). } \label{fig:graph} \end{figure*} \subsubsection{Dataset} Rugby sevens is a variant of rugby where two teams of seven players play a game composed of two seven minute halves. It is an Olympic sport since 2016. We annotated a total of 58193 person bounding boxes in the images of three rugby sevens samples of 40 seconds to use them as ground truth for players of both teams, the referees and some people in the public. These samples come from the Argentina / France, France / Chile and France / Kenya games of the 2021 Dubai tournament. They are encoded at a resolution of 1920 by 1080 pixels and a frame rate of 50 frames per seconds. The aim of our experiments is to track players from one of the two teams taking part to the game. Tracklets were extracted with the method detailed in section \ref{sec:tracklet_generation}. About 30\% of the tracklets have a number of frames superior to \(l_{min} = 10\). This represent an average of 346 tracklets per video of 40 seconds. These tracklets have an average length of one second and correspond to about 89\% of the detected bounding boxes. We publicly release the tracking ground truth and the generated tracklets at \url{https://kalisteo.cea.fr/index.php/free-resources/}. \subsubsection{Quantitative results and ablation studies} \label{sec:ablation} The annotator selects a number of tracklet examples for each player appearing in the sequence and also for the class 0 (opponent team, referees, public). At each round of annotations, a new user annotation for each player and two user annotations for the class 0 are added on average. As the training of our system is quick, the user can observe the consequences of the added annotations on the classification results and correct the big mistakes for the next round of annotations (for example false positives with high scores). Once the user annotations have been added, we train the network with the same user annotations and 5 different seeds. We then compute standard MOT metrics \cite{ristani2016performance}: IDF1, MOTA and ID switches. Since our main objective is to correctly identify each player, the IDF1 metric is the most important to observe. MOTA is however key to report the completeness of the tracking bounding boxes for each player. Figure \ref{fig:graph} shows the results of our method obtained with four variants. Results are analyzed according to two conditions: \(R_{img}\) frozen or trained and with the iterative association algorithm or with the RNMF algorithm. As small tracklets are filtered, our method cannot achieve 100\% performance. In order to estimate the upper performance limit, we associate each tracklet to the ground truth. However, since our generated tracklets are not perfect, their association to the ground truth may also be ambiguous, which explains the not null ID switch limits. \textbf{Number of annotations}. For the three video extracts, the more user annotations are provided, the best the MOT metrics are. However, we can observe that above the third round of annotations (about 3.5 annotations per player), the metrics only slightly improve and sometimes slightly deteriorate. This performance threshold can be explained by the difficult tracking conditions of some instants: the players are sometimes highly occluded or very small, there are very few details to identify them and the detection is difficult on complex postures. Some errors are illustrated on Figure \ref{fig:errors}. From the first to the third round of annotations, with \(R_{img}\) frozen and the iterative association algorithm, IDF1 and MOTA metrics increases on average respectively by 11 and 9 p.p. (percentage points) while the ID switches is divided by 5. \begin{figure} \begin{center} \includegraphics{images/errors.pdf} \end{center} \caption{Illustration of complex situations that lead to missing detections and identification of players (here the players in blue).} \label{fig:errors} \end{figure} \textbf{Association algorithm choice}. The global RNMF optimization matches an identity to each tracklet but sometimes generates conflicts and wrong associations. This leads to better MOTA metrics as more detections are kept than with the simple iterative algorithm. For the third round of annotations, the MOTA metric is increased by 12 p.p. on average when \(R_{img}\) is frozen. However, the IDF1 metric is decreased by 1 p.p. and the number of ID switches increases by 25. The iterative association should therefore be prefered to minimize wrong identity associations. The RNMF algorithm however leads to a more complete tracking. \textbf{Training strategy}. Our experiments demonstrate that, even if \(R_{img}\) is not fine-tuned with data from the target domain (\(R_{img}\) frozen), it is still able, thanks to the Transformer network, to generate relevant features to re-identify the players. For the third round of annotations, the IDF1 and MOTA metrics are respectively on average 75\% and 66\% with the iterative association algorithm. The best results are however obtained when weights of \(R_{img}\) are also updated during training. For the third round of annotations, with the iterative association algorithm, the IDF1 and MOTA metrics are increased respectively by 3 and 2 p.p. The number of ID switches is reduced on average by 3. When the weights of \(R_{img}\) are updated, the training time for the 120 epochs significantly increases (from 28 seconds to 25 minutes for 32 annotations) and the system is no longer interactive. Indeed, the number of trainable parameters raises from about 4 to 25 millions. So, the optimal usage is to create the user annotations with \(R_{img}\) frozen and once the user is satisfied with the results, restart the training with the same annotations and \(R_{img}\) updated to obtain even better results. \subsubsection{Comparison with state of the art multiple person tracking methods} \label{comparison_generic_tracking} \begin{table} \begin{center} \footnotesize{ \begin{tabular}{|c|c|c|c|c|c|} \hline Video & Method & IDF1 & IDs & MOTA \\ \hhline{=====} & ByteTrack \cite{zhang2021bytetrack} & 48.8 & 26 & 49.4 \\ Argentina & TWBW \cite{bergmann2019tracking} & 24.4 & 64 & 40.8 \\ / France & MOT neur. solv. \cite{braso2020learning} & 34.0 & 54 & 33.9 \\ & Ours & \textbf{76.8} & \textbf{17} & \textbf{64.6} \\ \hline & ByteTrack \cite{zhang2021bytetrack} & 54.9 & 23 & 64.4 \\ France & TWBW \cite{bergmann2019tracking} & 22.7 & 74 & 28.4 \\ / Chile & MOT neur. solv. \cite{braso2020learning} & 29.6 & 53 & 40.2 \\ & Ours & \textbf{84.3} & \textbf{21} & \textbf{75.4} \\ \hline & ByteTrack \cite{zhang2021bytetrack} & 60.3 & 14 & 64.0 \\ France & TWBW \cite{bergmann2019tracking} & 30.6 & 44 & 45.0 \\ / Kenya & MOT neur. solv. \cite{braso2020learning} & 48.0 & 26 & 61.1 \\ & Ours & \textbf{82.2} & \textbf{7} & \textbf{70.1} \\ \hline \end{tabular} } \end{center} \caption{MOT metrics for the tracking of the rugby sevens French team players.} \label{table:track_table} \end{table} Generic tracking algorithms track all the persons appearing in the video frames. This would include in our case, players of both teams, the referees and the public. Our approach however track players from a single team. This makes the comparison not straightforward. Some approaches have been proposed for the tracking of team sport players with single moving views \cite{lu2013learning, hurault2020self} but the comparison is still not easy since their evaluation datasets are private. We therefore decided to run generic tracking algorithms on our rugby sevens extracts. In order to make a fair comparison with our approach, we manually selected the tracks generated by these algorithms that are associated, even partially, with players from the French team. We tested two online methods, TWBW tracker \cite{bergmann2019tracking} and ByteTrack \cite{zhang2021bytetrack}, with their detections. ByteTrack achieves a very high performance on the MOT 2017 challenge \cite{milan2016mot}. We also tested an offline method, the MOT neural solver \cite{braso2020learning}, with our detections. The results are presented in Table \ref{table:track_table}. With the limitations mentioned above, the metrics shows significantly lower performances for generic trackers. This is probably due to difficulties to handle correctly the occlusions and the players entering or leaving the view field. It therefore justifies our usage of a closed identity gallery with few annotations to learn the player appearances. Compared to ByteTrack \cite{zhang2021bytetrack}, the IDF1 metric is increased on average by 26 p.p. \subsection{Evaluation of player identification on a full rugby sevens game} Our system aims to track and identify players on a full game. Yet, a human-annotated tracking ground truth for a full game would be costly to generate. We therefore decided to evaluate the detection and re-ID performance of our approach on 32 frames regularly sampled in the France / Kenya game and focus on the French players. With players changes, 12 French players in total participated to this game. The ground truth represents 128 players bounding boxes. For each experiment, we trained the model with 5 different seeds using the same 70 annotations (about 6 per player). Results are shown in Table \ref{table:classification_table}. The best total detection and identification performance (53.6\%) is obtained when the \(R_{img}\) is trained and the RNMF association algorithm is used. A significant number of French players are not detected or correctly identified. This happens when the players on the back are only visible on few pixels or when some players occlude others. Nevertheless, the total recall goes up to 67.9\% for the bounding boxes with an area superior to the average area of all the ground truth bounding boxes (25214 pixels). This demonstrates that when the players are sufficiently visible, our system is able to track them during a full match with few annotations. \begin{table} \begin{center} \footnotesize{ \begin{tabular}{|c|c||c|c|c||c|} \hline \multirow{2}{*}{\(R_{img}\)} & \multirow{2}{*}{assoc.} & Det. & Team class. & Id. class. & Total \\ & & recall & recall & recall & recall \\ \hline \multicolumn{6}{l}{All detected bounding boxes} \\ \hline frozen & iter. & \multirow{4}{*}{75.8} & 58.4\(\pm\)2.1 & 73.8\(\pm\)4.5 & 32.7\(\pm\)2.4 \\ \cline{1-2} \cline{4-6} frozen & RNMF & & 74.6\(\pm\)2.5 & 60.9\(\pm\)6.5 & 34.5\(\pm\)4.6 \\ \cline{1-2} \cline{4-6} trained & iter. & & 75.9\(\pm\)3.9 & \textbf{84.0\(\pm\)3.4} & 48.3\(\pm\)3.0 \\ \cline{1-2} \cline{4-6} trained & RNMF & & \textbf{89.1\(\pm\)2.0} & 79.4\(\pm\)2.6 & \textbf{53.6\(\pm\)1.8} \\ \hline \multicolumn{6}{l}{Big detected bounding boxes (area superior to 25214 pixels)} \\ \hline frozen & iter. & \multirow{4}{*}{89.7} & 60.8\(\pm\)2.2 & 77.3\(\pm\)6.8 & 42.1\(\pm\)3.1 \\ \cline{1-2} \cline{4-6} frozen & RNMF & & 72.3\(\pm\)2.2 & 66.4\(\pm\)5.0 & 43.1\(\pm\)4.4 \\ \cline{1-2} \cline{4-6} trained & iter. & & 76.2\(\pm\)3.5 & \textbf{87.4\(\pm\)5.2} & 59.7\(\pm\)4.2 \\ \cline{1-2} \cline{4-6} trained & RNMF & & \textbf{90.8\(\pm\)0.9} & 83.5\(\pm\)3.4 & \textbf{67.9\(\pm\)2.6} \\ \hline \end{tabular} } \end{center} \caption{French player detection and classification results on 32 frames of the France / Kenya game for 5 different seeds. Average values and standard deviations are provided. The detection recall corresponds to the number of players detected. The team classification recall corresponds to the number of players classified as French among the detected players. The identity classification recall corresponds to the number of correctly identified players among the players classified as French. The total recall is the product of all the previous columns and represents the complete performance of our system.} \label{table:classification_table} \end{table} \section{Conclusion} In this paper, we proposed a new method to track team sport players with few user annotations. We demonstrated the performance of our approach on a rugby sevens dataset that we publicly release. We also showed that our method can track rugby sevens players during a full match with the annotation of only 6 few seconds length tracklets per player if they are observable with a minimal resolution. To our knowledge, no previous work on tracking of rugby players has been published. As future work, we would like to improve the detection of small and partially occluded players. Since our approach can be applied to any team sport, we would like to test it on other sports such as basketball. We also believe that the user annotation step would be sped up if an active learning process could smartly suggests tracklets to annotate. \section{Acknowledgments} This work benefited from a government grant managed by the French National Research Agency under the future investment program (ANR-19-STHP-0006) and the FactoryIA supercomputer financially supported by the Ile-de-France Regional Council. The videos of our Rugby Sevens dataset are the courtesy of World Rugby. We would also like to thank Jérôme Daret, Jean-Baptiste Pascal and Julien Piscione from the French Rugby Federation for making this work possible. {\small \bibliographystyle{ieee_fullname}
{ "attr-fineweb-edu": 2.632812, "attr-cc_en_topic": 0, "domain": "arxiv" }
BkiUdbA5qsNCPfdFecav
\section{Introduction} Research in sports analytics has recently substantially increased because of the availability of a huge corpus of data. Such data provides a challenging test-bed for machine learning algorithms - e.g. for tracking, action and activity recognition etc. At the same time, the huge commercial interests in a better understanding of player's and team's abilities using sport analytics is encouraging a lot of interest in the field. Team sport analytics deals with the analysis of long-term data of both individual players and teams. The most common data forms are GPS tracks and videos. Such an analysis can assist the clubs, coaches, and players in decision making - at player level and at team level. At player level, the individual statistics can assist in assessing one's performance, fitness level, strengths and weaknesses, etc. At team level, it could assist in team building, tactical analysis, formation planning, etc. This paper focuses on soccer in particular, and discusses the challenges and opportunities available for the fields of computer vision and machine learning in this sport. \\ Tracking players during matches and training sessions is of high importance because numerous performance metrics (e.g. high-speed runs, acceleration, deceleration, etc.) can be extracted from these tracks. These metrics are useful to sports scientists in accessing a player's fitness, strengths and other factors. There are different ways of tracking players - using wearable sensors such as GPS or using camera(s). Once the tracks are available, these different metrics can be estimated. These metrics are based solely on track data, hence they may not be enough to provide the complete profile of a player. For example, \textit{jumping} is an important ability for attacking players as well as for defending players, whereas tracking data cannot really quantify such features because of its inability to identify such events or actions. The ability to capture different actions performed by a player either in a match or during a training session can enhance the understanding of the overall performance and importance of the each player in the team. Additionally, to understand the game at a higher level, one needs to know what each player is doing at any given point of time and understand player interactions over the time. This is where computer vision can pitch in to contribute. Developing algorithms for recognising a single player's actions, multiple players interactions, and team tactics can be a step towards a complete understanding of the match. \\ To capture visual data of the players during the match or a training session, each club or sport analytics company has its own unique setup. Some would use multiple cameras around the field while others would use a single panoramic camera. These different setups pose different challenges and need different approaches to address them. The primary objective of this paper is to describe and discuss some of the challenges that are common in a real-life sport analytics setup. Towards this, we discuss the unique setting we are working with. Unfortunately, due to privacy regulation, we cannot release any image in the paper from our dataset. We organise the paper in the following way. After a brief note on related work, we discuss the input format. We then formally present our problem statement. We list out the challenges associated with the setup followed by a few specific challenges in sport action detection. We provide some experimental results and discussions followed by our conclusions. \section{Related Work} Sport analytics has recently gained massive attention from the AI researchers. One of the most frequently used techniques in sports analysis is tracking. Player tracking is useful in estimating performance metrics for the player \cite{pt2,pt3}. Ball tracking is important to analyse ball possession statistics \cite{bt3,bt4}. Also, there has been some work in automatic understanding of sports videos \cite{ana3,ana4}. Nevertheless, there is a lot of scope in hierarchical understanding of a match.\\ Sport action detection is a problem of classifying an action as well as localising it temporally and spatially in the input video. It is a widely studied problem in computer vision. Action detection models can be either single frame based \cite{det_frame_2,det_frame_3} or multi-frame based \cite{tube3,tube4}. Action recognition is a relatively simpler task of predicting a class label for an input video. Some of the single frame based action recognition models are proposed in \cite{rec_frame_1,rec_frame_2}. There is also a substantial amount of work on video based models \cite{rec_vid_1,rec_vid_3}. \\ Additionally, several sports datasets are available to be used as benchmarks for sports analysis, namely \textit{UCF101}\cite{ucf101}, \textit{Sports Videos in the Wild (SVW)}\cite{svw}, \textit{Sports-1M}\cite{rec_vid_1}, \textit{SoccerNet}\cite{soccernet}. Out of all these datasets, \textit{UCF101}, \textit{SVW}, and \textit{Sports-1M} are generic in the sense that they contain videos from multiple sports. \textit{SoccerNet} is specific to soccer but contains annotations for limited events. The lack of sport specific datasets with extensive annotations poses some limitations in learning sport specific actions. \section{Input Setup} In this section, we describe our input setup. We have a multiple cameras framework. Each camera has a frame-rate of 10 fps and provides thumbnails of players that are in its field of view. A thumbnail is a cropped part in the image containing a player and the surrounding. That is, if a camera sees $M$ players at time $t$, we extract $M$ thumbnails from the camera image for the time instant $t$. The remaining part of the image is discarded and not saved due to memory constraints. The following points summarise the setup: \begin{itemize} \item If a player is visible in a camera for a duration, the camera produces thumbnails around the player for the duration. Since the camera may drop frames in between, these thumbnails are produced at irregular time intervals. \item If a player is visible in more than one camera, we have multiple thumbnails for the player from different cameras. \item The resolution of the thumbnails is $256\times256$. \end{itemize} \section{Problem Statement} Just to recall the input setup in a formal way, let there be $N$ cameras and $M$ players. The video is recorded over a time interval $[0,T]$. Consider the $i^{th}$ player for this duration. This player is visible $K_j^i$ times in $j^{th}$ camera. Hence we have $K_j^i$ sequences of thumbnails for $i^{th}$ player that come from $j^{th}$ camera with time stamps of $[t^{i}_{1j}, t^{i}_{2j}]^k$ where $k~\forall~[1, K_j^i]$. Figure~\ref{fig:setup} illustrates this setup with a toy example. Consider that we have three cameras in our setup. These cameras capture thumbnails of a player at different times. The coloured boxes denote the appearance of a player in the corresponding camera. For example, (a) Camera one sees the player from periods $[0,t_1]$ and $[t_2,T]$, (b) the player is visible in all the cameras in the periods $[t_2,t_3]$ and $[t_4,T]$. When more than one player is present on the field, then for each camera and for each player all the thumbnails need to be associated in time to create the relevant tracklets. We are interested in solving the following problems under this challenging and unique data collection setup - \begin{enumerate} \item Thumbnails can contain other players also. So it is essential to localise the players in the thumbnails and identify the central player whom this thumbnail belongs to. \item A camera generates thumbnails of the players. We need to associate these thumbnails to generate the tracklets of the players as seen by the camera. \item Combine the tracklets of the players from all the cameras to track them over the whole time interval $[0,T]$. \item Number detection: The number printed on a player's jersey is useful to associate two tracklets. Hence, it is an important problem. \item Recognise the sequence of actions performed by each player over this time duration. A few examples of individual actions that potentially of interest in the sports analytics domain, are \textit{jumping, kicking, running, etc.} \item Recognise the activities that are performed by multiple players in collaboration. A few examples of interactions are \textit{passing a ball, tackling, etc.} \end{enumerate} \begin{figure} \centering \includegraphics[scale=0.25]{multi_camera_vis.png} \caption{Illustration of data capture process under multi-camera setup. The coloured boxes denote the appearance of a player in the corresponding camera for that duration.} \label{fig:setup} \end{figure} Out of these interesting problems, we discuss only a couple of problems in the paper as a part of our initial experimentation $-$ player detection and number detection. In the next section, we discuss some of the challenges that need to be addressed in order to be able to solve these problems. \begin{figure} \centering \includegraphics[scale=1.55]{2.jpg}~ \includegraphics[scale=1.45]{1.jpg}~ \includegraphics[scale=1.1]{6.jpg}~ \includegraphics[scale=1.4]{3.jpg}~ \includegraphics[scale=1.2]{7.jpg}~ \includegraphics[scale=1.2]{8.jpg} \caption{Illustration of wide range of image quality in our dataset.} \label{fig:imgs} \end{figure} \section{Challenges} The unique setup poses some unique challenges and questions as well. In this section, we discuss some of them: \begin{enumerate} \item The base frame-rate of the cameras is low. How difficult is it to capture the fast actions with slow frame-rate? \item The existing models usually work with fixed frame-rate videos. In our case, sometimes the frames are dropped intermittently. So an interesting challenge is to address the varying nature of the frame-rate with such models. \item The quality of the thumbnails are poor, making the computer vision tasks harder to solve effectively. Figure \ref{fig:imgs} shows a few examples from our dataset. The images are cropped because of privacy issues. \item Since the players are highly mobile in the game, it is possible that the parts of the same action are visible in different cameras. The challenge is to combine the relevant clips from different cameras to identify the action performed. \item Is it possible to identify the interactions at all in such a setup where the field context is not available? \end{enumerate} Next, we look at some of the challenges pertaining to sport action detection task. \begin{table}[] \resizebox{\columnwidth}{!}{% \begin{tabular}{|l|l|l|l|l|l|l|l|l|l|l|} \hline \textbf{Classes} & 0 & 1 & 2 & 3 & 4 & 5 & 6 & 7 & 8 & 9 \\ \hline \textbf{mAP} & 0.57 & 0.57 & 0.68 & 0.61 & 0.30 & 0.58 & 0.47 & 0.31 & 0.29 & 0.51 \\ \hline \end{tabular}% } \caption{Class-wise mAP on validation dataset for digit detection.} \label{table:mAP_digits} \end{table} \begin{table}[] \resizebox{\columnwidth}{!}{% \begin{tabular}{|l|c|c|c|c|} \hline \multicolumn{1}{|c|}{\textbf{Model}} & \textbf{Time} & \textbf{AP\_{[}0.50:0.95{]}} & \textbf{AP\_0.50} & \textbf{AP\_0.75} \\ \hline Faster-RCNN-Inception & 51 ms & 0.45 & \textbf{0.72} & 0.50 \\ \hline \begin{tabular}[c]{@{}l@{}}Faster-RCNN-Inception-Resnet \\ (300 proposals)\end{tabular} & 345 ms & 0.43 & 0.68 & 0.49 \\ \hline \begin{tabular}[c]{@{}l@{}}Faster-RCNN-Inception-Resnet\\ (20 proposals)\end{tabular} & 109 ms & 0.36 & 0.54 & 0.41 \\ \hline RetinaNet & \textbf{35 ms} & 0.20 & 0.45 & 0.14 \\ \hline \end{tabular}% } \caption{Performance comparison of some object detectors on the test-set. RetinaNet was the fastest but the performance was poor. Faster-RCNN-Inception was found to be optimal in term of speed and performance.} \label{table:mAP_person} \end{table} \section{Sport Action Detection} In sports analytics, automatic detection or recognition of a sequence of single player actions and multi-player interactions can provide useful insights. Since we do not have a typical input setting, we need to investigate the following: \begin{enumerate} \item Typically, the existing methods take fixed frame-rate videos as an input. It would be interesting to test their applicability on variable frame-rate videos. \item The base frame-rate is 10 fps and at times, it can go further down. Some investigation is required to see if the existing methods can capture the fast actions with this frame-rate as well. \item Recall that we have thumbnails of individual players from the different cameras. It is challenging to recognise the multi-player interactions in such a setup. \end{enumerate} Our work programme is to address these challenges. We started with the tasks of player detection and jersey number detection. In the next section, we discuss the experimental details. \section{Evaluation} As mentioned, we started tackling the relatively simpler tasks of number and player detection. For these tasks, we tested a few object detectors. \begin{enumerate} \item \textbf{Number detection}: The objective is to identify the number printed on player's jersey from a set of thumbnails of the player. We addressed the number detection problem into two stages - (a) Digit detection, and (b) Aggregation of all the predicted digits. To detect the digits, we used {RetinaNet} \cite{retinanet} with data augmentation. We pre-train the model using \textit{SVHN} dataset \cite{svhn} which consists of house numbers. The mAP on \textit{SVHN} validation set was 0.92. We subsequently fine-tune the model with our dataset. The training and validation datasets consist of around 10,500 images and 2,800 images, respectively. The training dataset was unbalanced, so we used class weights to lessen its effects. We achieved an mAP of 0.48 on our validation set. The class-wise mAP is mentioned in Table \ref{table:mAP_digits}. We suspect the reason for the performance gap on our and \textit{SVHN} datasets is the poor quality of our images, as shown in Figure \ref{fig:imgs}. \\ The next task is to combine all the predictions and get a final number for the player. Some of the challenges in the task are - missing predictions (e.g. no prediction in a thumbnail), partial predictions (e.g. a digit is missing from a number), presence of multiple numbers in the thumbnails, etc. We use Dempster's rule for combination of multiple evidences (which is based on Dempster-Shafer Theory) \cite{dst}, where each thumbnail provides some evidence for the jersey number. An illustration of an example is shown in Figure \ref{fig:agg}. The different outputs from all the thumbnails are aggregated to estimate the final number for the player. \item \textbf{Player detection}: The objective is to localise the players in an input thumbnail. This is useful in player matching for the task of tracklet generation for the players, and for action recognition. We tested recent object detectors - Faster-RCNN \cite{faster_rcnn} (Inception based), Faster-RCNN (ResNet-Inception based), and RetinaNet \cite{retinanet}. We fine-tune these models with our dataset but did not do any parameter tuning. The raw performance of these detectors on our test-set is mentioned in the Table \ref{table:mAP_person}. Faster-RCNN (Inception) performed the best with respect to mAP. RetinaNet was the fastest one but the performance was poor. We fine-tune the parameters of Faster-RCNN (Inception) to achieved a bit higher mAP of 0.74. \end{enumerate} \begin{figure} \centering \includegraphics[scale=0.5]{Aggregate.png} \caption{Aggregation of multiple predictions that are coming from different thumbnails to get the final jersey number for the player.} \label{fig:agg} \end{figure} \section{Discussions} These initial and small set of experiments inspired us to reflect on our standard approach to fine tuning and other aspects. Many of such related concerns are already under investigation by many researchers and have given rise to interesting directions. \begin{enumerate} \item Do state-of-the-art methods perform as well on real life data as they perform on benchmark datasets? The performance of {RetinaNet} for player detection task was disappointing and hence raised a few concerns about the benchmark datasets - Are they skewed? The real-life dataset comes with real-life variety. For example in soccer, the videos could have - noise due to weather conditions; the motion blur due to fast movement of players; variable size of the player and the ball depending on their distances from the cameras; considerable amount of occlusion due to multi-player interactions. Do the benchmark datasets have enough variability to judge the generalised performance of a model? How do we quantify the amount of real-life variety in benchmark datasets? Maybe we need to examine our benchmark datasets in more principled way. \item In both of our experiments, we initially fine-tuned the classifier layer only and not the feature extractor but that led to poor performance of the model. We achieved better performance after fine-tuning the feature extractor. So one natural question arises about what is an optimal and principled transfer learning approach for real-life datasets. \end{enumerate} \section{Conclusions} In this paper, we discussed the importance of data analytics in soccer and the role of AI. We detailed some of the challenges that we faced in our initial experiments. Our experimentation raised a few concerns regarding the benchmark datasets and the applicability of state-of-the-art methods on real-life problems. Finally, we explored the opportunities in the form of questions that are lying ahead of us in the field of sports analytics which truly provides a challenging and real-life test-bed.
{ "attr-fineweb-edu": 2.712891, "attr-cc_en_topic": 0, "domain": "arxiv" }
BkiUe03xaKgQUJVNhyK7
\section{Appendices} \printbibliography \end{document} \endinput \section{Introduction} Individual and professional sports have always had a strong impact on the economic, political, and cultural aspects of our society. When only considering the economical side, this impact is likely going to increase as the global sports market size, including services and goods from sport entities, is expected to grow from $\$354.96$~billion in 2021 to $\$707.84$~billion in 2026\footnote{\url{https://www.thebusinessresearchcompany.com/report/sports-global-market-report}}. The online live-streaming market alone is going to increase in value from $\$18.12$bn in 2020 to $\$87.34$bn in 2028\footnote{\url{https://www. verifiedmarketresearch.com/product/sports-online-live-video-streaming-market}}. A strong contribution to this growth is provided by the recent and rapid technological advancement which has changed the way people watch and enjoy sports. Indeed, Computer Vision (CV) and the recent developments in Deep Learning (DL)~\cite{lecun2015deep} provide the opportunity to extract meaningful information from live streamed events resulting in a much richer experience for both consumers and leagues. The ability of DL-based solutions to be useful and reliable in real world applications strongly depends on the quantity and quality of data on which the model has been trained on in the first place~\cite{alom2019state}. Specifically for sports, the different disciplines and conditions make them unique in terms of the problems the model has to face. Moreover, the quality of the annotations often determines the model performances overall. In the past few years, the SoccerNet~\cite{giancola2018soccernet, deliege2021soccernet, cioppa2022scaling, cioppa2022soccernet} datasets have received increasing attention for the amount of data and the benchmark models provided to the CV community. However, two main issues pertain such initiative: first, considering only soccer as representative of all sports does not allow to extend results to other domains; secondly, and more importantly, the SoccerNet annotations are created out of broadcast videos which bring a series of concerns. These concerns include: a limited spatial and temporal coverage of the game due to, on one side, the frequent camera movements which return a subset of the field and, on the other, the replays or advertisements which interrupt the live stream; a lower image resolution with respect to the original sensor; no access to camera parameters or position; and the overlaying graphics such as scores, teams name, advertisements, game statistics, that obstruct the image. In summary, broadcast video annotations remain distant from the actual sensors and tools used to record the game. In conclusion, while initiatives as SoccerNet represent a strong and valid tool for the computer vision community, the introduction of an high quality dataset with available raw images and camera parameters, will help closing the gap between academic research and real world settings. This paper introduces two datasets in the basketball domain and four different CV tasks each associated with a task-specific dataset extracted from the first two. The data and annotations are provided by SynergySports\footnote{\url{https://synergysports.com/}}, a division of Sportradar\footnote{\url{https://sportradar.com/}}, and have been recorded with the Keemotion/Synergy Automated Camera System™. The proposed four tasks are: \begin{itemize} \item \textbf{Ball 3D localization in calibrated scenes.} This task tackles the estimation of ball size on basketball scenes given the oracle ball position. \item \textbf{Camera calibration.} This task aims at predicting the camera calibration parameters from images taken from basketball games. \item \textbf{Player instance segmentation.} This task deals with the segmentation of individual humans (players, coaches and referees) on the basketball court. \item \textbf{Player re-identification.} In this task, the objective is to re-identify basketball players across multiple video frames captured from the same camera viewpoint at various time instants. \end{itemize} Moreover, a competition around the four tasks has been organized and results will be presented at the 5th International ACM Workshop on Multimedia Content Analysis in Sports\footnote{\url{http://mmsports.multimedia-computing.de/mmsports2022/index.html}}. A toolkit is provided for each task containing data, annotations and metrics. Moreover, a baseline for each task has been added which serves the purpose of providing an example to consume the data, and as a starting point for the challenge participants' solutions. The next section explains the original datasets, while the subsequent sections will describe each task in detail. \section{Datasets} \label{sec:datasets} The four tasks are built on two different datasets. The DeepSport dataset---a multi-labels dataset containing ball 3D annotations, image calibration data and human segmentation masks---is used for the ball 3D localization, court calibration and instance segmentation tasks. The DeepSportradar player ReID dataset is used for the players re-identification task only and will be described further in Section \ref{sec:reid}. Both datasets were acquired during professional basketball matches with the Keemotion/Synergy Automated Camera System™ that offers a sideline view of the court from a position aligned with the court center line. Images were fully annotated and reviewed by human operators, leading to high quality labels, and are made freely available to the research community by Sportradar. \subsection*{DeepSport dataset} \label{sec:deepsport-dataset} Hereafter, the multi-labels DeepSport dataset, used for three tasks, is described. Originally introduced in~\cite{ballseg} with only ball annotations, it was later supplemented with new data and additional annotations. It is now made available publicly on the Kaggle platform~\cite{kaggle-deepsport}. \paragraph{Description} The dataset is a collection of \emph{raw-instants}: sets of images captured at the same instant by an array of cameras covering a panorama of the sport field. It features only in-game basketball scenes. Figure~\ref{fig:pair} shows a \emph{raw-instant} from a two cameras setup. In the DeepSport dataset, camera resolutions range from 2Mpx to 5Mpx. As illustrated in Figure~\ref{fig:crosssection}, the resulting images have a definition varying between 80px/m and 150px/m, depending on camera resolution, sensor size, lens focal-length and distance to the court. \paragraph{Origin} The dataset was captured in 15 different basketball arenas, each identified by a unique label, during 37 professional games of the French league LNB-Pro~A. Figure~\ref{fig:crosssection} depicts a cross section of a basketball court where the camera setup height and distance to the court is shown for each arena. \begin{figure*}[h] \centering \includegraphics[width=\textwidth]{figures/camera_distances.pdf} \caption{Cross section showing camera setup height from the ground and distance to the court of the different arenas in which images were acquired. The camera definition depends on camera resolution, sensor size, lens focal length and camera setup distance to the court.} \label{fig:crosssection} \end{figure*} \newcommand{\testset}[1]{\textbf{#1}} \begin{table}[tb] \caption{The DeepSport dataset was captured in 15 different arenas and three of them are kept for the testing set. It features a variety of angle of views, distance to the court and image resolution.} \label{tab:arena_sets} \resizebox{\columnwidth}{!}{% \begin{tabular}{@{}ll@{}c@{}} \toprule Arena label & Arena name (City) & \shortstack{Number \\ of items} \\ \midrule \textsc{ks-fr-stchamond} & Halle André Boulloche\, (Saint-Chamond) & 12\\ \textsc{ks-fr-fos} & HdS Parsemain\, (Fos-sur-Mer) & 23\\ \textsc{ks-fr-strasbourg} & Rhénus Sport\, (Strasbourg) & 8\\ \textsc{ks-fr-vichy} & PdS Pierre Coulon\, (Vichy) & 9\\ \textsc{ks-fr-nantes} & la Trocardière\, (Nantes) & 20\\ \textsc{ks-fr-bourgeb} & Ekinox\, (Bourg-en-Bresse) & 12\\ \textsc{ks-fr-gravelines} & Sportica\, (Gravelines) & 129\\ \textsc{ks-fr-monaco} & Salle Gaston Médecin\, (Monaco) & 9\\ \textsc{ks-fr-poitiers} & Stade Poitevin\, (Poitiers) & 5\\ \textsc{ks-fr-nancy} & PdS Jean Weille de Gentilly\, (Nancy) & 40\\ \textsc{ks-fr-lemans} & Antarès\, (Le Mans) & 16\\ \textsc{ks-fr-blois} & Le Jeu de Paume\, (Blois) & 39\\ \testset{\textsc{ks-fr-caen}} & \testset{PdS de Caen\, (Caen)} & \testset{31}\\ \testset{\textsc{ks-fr-roanne}} & \testset{HdS Andre Vacheresse\, (Roanne)} & \testset{3}\\ \testset{\textsc{ks-fr-limoges}} & \testset{PdS de Beaublanc\, (Limoges)} & \testset{8}\\ \bottomrule \end{tabular} } \end{table} \begin{figure} \centering \includegraphics[width=.49\columnwidth, trim=30 30 30 30, clip]{figures/left.png} \includegraphics[width=.49\columnwidth, trim=30 30 30 30, clip]{figures/right.png} \caption{A \emph{raw instant} captured by the Keemotion/Synergy Automated Camera System with a two cameras setup.} \label{fig:pair} \end{figure} \paragraph{Split} The dataset is split in 3 subsets: training, validation and testing. The testing set contains all images from arena labels \textsc{ks-fr-caen}, \textsc{ks-fr-limoges} and \textsc{ks-fr-roanne}. If not otherwise specified, the remaining instants are split in 15\% for the validation set and 85\% for the training set. A mapping between arena labels and arena names is given in Table~\ref{tab:arena_sets} and provides the amount of instants for each arena. An additional challenge set, introduced for the competition, is composed of 35 additional similar instants. They come from a new set of arenas and labels will remain secret. \paragraph{Annotations} The cameras used to capture the \emph{raw-instants} are calibrated, which means that intrinsic and extrinsic parameters are known. The ball 3D annotation was obtained by leveraging the calibration data and clicking two points in the image space: the ball center and its vertical projection to the ground. This process is described and validated in~\cite{ball3d}. The contouring of humans lying near the court were annotated on each image individually following \cite{niels2022}, with a special care given to occlusions. \section{The tasks} This section describes the four tasks, explaining the dataset splits, the metrics used for evaluating and compare results for the competition and the baselines provided to the users. \subsection{Ball 3D localization} Automated ball 3D localization in team sport scenes has important applications like assisting referees or feeding game analytics. In sports like basketball where the ball is often occluded and in players hands, an image based approach is required to fill the gap between the trajectories proposed by a ballistic approach. Hence, this task aims at localizing the ball in 3D on basketball scenes, from a single calibrated image. This problem can be solved by both detecting the ball center and estimating its size in the image space. Indeed, the 3D localization can be recovered using camera calibration information and the knowledge of the real ball size~\cite{ball3d}. Since ball detection has been largely studied~\cite{kamble2019ball}, this task focuses on 3D localization given oracle ball positions. Hence, the task consists in estimating the ball diameter in pixels in the image space from an image patch centered on the ball. \subsubsection{Dataset} The task uses $N\times N$ crops around oracle ball positions from the {\textbf{DeepSport dataset}} presented in Section~\ref{sec:deepsport-dataset}, where $N$ is a parameter. Figure~\ref{fig:ballsamples} shows samples from the dataset with $N=128$. As shown in Figure~\ref{fig:balldistribution}, ball diameter ranges from 15px to 35px in the dataset. \begin{figure*}[ht] \centering \begin{tabular}{@{}c@{\;}c@{\;}c@{\;}c@{\;}c@{\;}c@{\;}c@{\;}c@{}} \includegraphics[width=2cm]{figures/ball_samples/00.png}& \includegraphics[width=2cm]{figures/ball_samples/01.png}& \includegraphics[width=2cm]{figures/ball_samples/02.png}& \includegraphics[width=2cm]{figures/ball_samples/03.png}& \includegraphics[width=2cm]{figures/ball_samples/04.png}& \includegraphics[width=2cm]{figures/ball_samples/05.png}& \includegraphics[width=2cm]{figures/ball_samples/06.png}& \includegraphics[width=2cm]{figures/ball_samples/07.png}\\ \includegraphics[width=2cm]{figures/ball_samples/08.png}& \includegraphics[width=2cm]{figures/ball_samples/09.png}& \includegraphics[width=2cm]{figures/ball_samples/10.png}& \includegraphics[width=2cm]{figures/ball_samples/11.png}& \includegraphics[width=2cm]{figures/ball_samples/12.png}& \includegraphics[width=2cm]{figures/ball_samples/13.png}& \includegraphics[width=2cm]{figures/ball_samples/14.png}& \includegraphics[width=2cm]{figures/ball_samples/15.png}\\ \includegraphics[width=2cm]{figures/ball_samples/16.png}& \includegraphics[width=2cm]{figures/ball_samples/17.png}& \includegraphics[width=2cm]{figures/ball_samples/18.png}& \includegraphics[width=2cm]{figures/ball_samples/20.png}& \includegraphics[width=2cm]{figures/ball_samples/21.png}& \includegraphics[width=2cm]{figures/ball_samples/22.png}& \includegraphics[width=2cm]{figures/ball_samples/23.png}& \includegraphics[width=2cm]{figures/ball_samples/25.png}\\ \includegraphics[width=2cm]{figures/ball_samples/26.png}& \includegraphics[width=2cm]{figures/ball_samples/27.png}& \includegraphics[width=2cm]{figures/ball_samples/28.png}& \includegraphics[width=2cm]{figures/ball_samples/29.png}& \includegraphics[width=2cm]{figures/ball_samples/31.png}& \includegraphics[width=2cm]{figures/ball_samples/32.png}& \includegraphics[width=2cm]{figures/ball_samples/33.png}& \includegraphics[width=2cm]{figures/ball_samples/34.png}\\ \end{tabular} \caption{Samples of $128\times 128$ crops around oracle ball positions. The dataset features many different scenes with different colors, backgrounds and lighting conditions. Ball is often partly occluded or in players hand and suffers from motion blur.} \label{fig:ballsamples} \end{figure*} \begin{figure}[b] \centering \includegraphics[width=.6\columnwidth]{figures/ball_size_distribution.pdf} \caption{Ball size distribution in the DeepSport dataset.} \label{fig:balldistribution} \end{figure} \subsubsection{Metrics} The main metric used to evaluate methods is the mean absolute diameter error (MADE) between the prediction and the ground-truth. In addition, the mean absolute projection error (MAPE) and the mean absolute relative error (MARE), described in~\cite{ball3d}, are used. The MAPE measures the error of a vertical projection of positions on the ground plane. The MARE measures the distance error relative to the camera position. \subsubsection{Baseline and results} The baseline proposed with this task is the regression model described in~\cite{ball3d}. It is composed of a VGG16~\cite{Simonyan2014} features extractor followed by 3 fully connected layers and is supervised with a Huber loss~\cite{Huber1964}. The baseline is trained with random scaling and random color-gamma data augmentations on image patches with $N=64$. The supervision is performed using Adam optimizer for 100 epochs with an initial learning rate of $10^{-4}$, exponentially decayed by half during two epochs. This decay is applied every 10 epochs starting at epoch 50. The baseline reaches a MADE of 2.12 pixels. This corresponds to a MAPE of 3.05 meters and a MARE of 10\%. \subsection{Camera calibration} The Camera calibration task aims at predicting the camera parameters from images taken from basketball games. The toolkit for this task can be found on the main DeepSportRadar GitHub page. Formally, this task objective is to predict the projection matrix, $P_{3\times 4}$ that maps a 3D point in homogeneous coordinates (a 4-dimensional vector) to a 2D point in homogeneous coordinates (3-dimensional vector) in the image space\footnote{\url{https://en.wikipedia.org/wiki/Camera_matrix}}. The projection matrix combines intrinsic (sensor and lens) and extrinsic (position and rotation) camera parameters as: \begin{align} P :&= \left[\begin{matrix}K_{3\times 3};{\bf 0}_{3\times 1}\end{matrix}\right] \left[\begin{matrix}R_{3\times 3};T_{3\times 1}\\{\bf 0}_{1\times 3};1\end{matrix}\right] \nonumber \\ &=K_{3\times 3}\left[\begin{matrix}R_{3\times 3};T_{3\times 1}\end{matrix}\right], \end{align} where $K$ is the matrix of intrinsic parameters, while $R, T$ are the rotation and translation matrices respectively\footnote{\url{https://ispgroupucl.github.io/calib3d/calib3d/calib.html}}. For Synergy/Keemotion produced images, the origin of the 3D world is located on the furthest left corner of the basketball court relative to the main camera setup; more precisely in the inner side of the court lines. The unit of length is the centimeter and axis orientation is given by $x$ along the court length, $y$ along the court width and $z$ pointing downward\footnote{\url{https://gitlab.com/deepsport/deepsport_utilities/-/blob/main/calibration.md}}. For simplicity, this task assumes that lenses have no distortion. The camera calibration parameters are crucial for several CV tasks such as the 3D tracking of players in the field. These parameters can be retrieved on site; however, an automatic method that estimates them is needed when the field and the camera are not accessible anymore. The Camera calibration task falls under the sport-field registration tasks and takes advantage of the known official sizes of the basketball court\footnote{\url{https://en.wikipedia.org/wiki/Basketball_court}}. Several approaches have been adopted to solve sport-field registrations for different sport domains including tennis, volleyball and soccer~\cite{farin2003robust, yao2017fast}, relying essentially on keypoints retrieval methods. With the advent of Deep Learning, common approaches tackle the problem as a segmentation task~\cite{homayounfar2017sports, chen2019sports, sha2020end, cioppa2021camera}. The baseline introduced for this task adopts this approach. \subsubsection{Dataset} This task purpose is to predict the camera calibration parameters from a single frame of a basketball game. The dataset is made of 728 views (pairs of images and corresponding camera calibration parameters) randomly generated from the \textbf{DeepSport dataset}. The random view generation process\footnote{See implementation at: \url{https://gitlab.com/deepsport/deepsport_utilities/-/blob/main/deepsport_utilities/ds/instants_dataset/views_transforms.py}} generates a random 3D position within the court and image limits on which the view will be centered. It then samples a pixel density between $\alpha\cdot 20$~px/m and $\alpha\cdot 60$~px/m (see Figure~\ref{fig:crosssection}), a rotation between $-10^{\circ}$ and $10^{\circ}$, and a boolean horizontal flip, to create a crop from the original image of size $\alpha\cdot 480\times \alpha\cdot 270$. Please note that, the test and challenge sets were provided with $\alpha=2$, resulting in an output image dimension of $920\times 540$. The corresponding affine transformation matrix is applied to the intrinsic camera matrix $K$ to produce the camera calibration parameters that correspond to the generated view. These views are divided in train (480), val (164) and test (84) sets. For this challenge, having a validation set on arenas not seen during training is of foremost importance, therefore the arenas of \textsc{ks-fr-nantes}, \textsc{ks-fr-blois} and \textsc{ks-fr-fos} are used for the validation set (see Table \ref{tab:arena_sets}). A final challenge split composed of 84 images is provided for the competition purpose and its camera parameters are kept secret. A few sport-field registration datasets have been publicly released so far among which the SoccerNet-v2 is the largest~\cite{deliege2021soccernet} (20028 images from 500 games). It is worth noticing that, with our random view generation process, a potentially infinite number of views can be generated from the original dataset images. \begin{figure}[t] \centering \begin{tabular}{@{}c@{}c@{}} \includegraphics[width=0.98\columnwidth]{figures/camcalib/images_mmsport.png}\\ \includegraphics[width=0.98\columnwidth]{figures/camcalib/targets_mmsport.png}\\ \end{tabular} \caption{Examples of images from the camera calibration task (top). The target camera parameters have been used to generate the court lines (bottom), which can then be used as target for a segmentation model as described for the baseline.} \label{fig:cam_calib} \end{figure} \subsubsection{Metrics} In order to evaluate the proposed methods on this task, the predictions are evaluated based on a Mean Squared Error (MSE) of the projection error of 6 points---left, center and right extremities at the middle and bottom parts of the frame---in the 3D coordinates. \subsubsection{Baseline and results} The baseline is composed by two models: the first is a segmentation model (DeepLabv3\cite{chen2017rethinking}) that predicts the 20 lines of the basketball court (see Figure \ref{fig:cam_calib}); the second finds the 2D intersections in the image space and matches them with the visible 3D locations of the court\footnote{\url{https://github.com/DeepSportRadar/camera-calibration-challenge/blob/main/utils/intersections.py}}. If enough intersections points are found (>5) the method \texttt{cv2.calibrateCamera}\footnote{\url{https://docs.opencv.org/4.6.0/d9/d0c/group__calib3d.html}} predicts the camera parameters. In all the other cases, the model returns an average of the camera parameters in the training set as default. The segmentation model has been fine-tuned on the Camera calibration dataset (with $\alpha=1$) for $40\text{k}$ steps with AdamW optimizer, base learning rate of $0.001$, weight decay of $0.0001$, and Amsgrad, reaching an mIoU of $0.46$ on the validation set. The current baseline has an MSE error of $592.48$~cm on the Test split and $490.31$~cm on the Challenge set. \subsection{Player instance segmentation} The player instance segmentation task tackles the delineation of individual humans (players, coaches and referees) lying on a basketball court or less than 1 meter away from its borders. Instance segmentation is a pervasive task that can apply to images captured from any domain in which objects can be individually identified and counted. Instance segmentation datasets have been collected, among other domains, in microbiology~\cite{kumar2019multi}, biology~\cite{minervini2016finely}, autonomous driving~\cite{cordts2016cityscapes} and in everyday life~\cite{gupta2019lvis,lin2014microsoft}. The set of methods that solve instance segmentation is equally rich. A dichotomy is usually drawn between top-down and bottom-up methods. Top-down methods first propose candidate bounding boxes, and then segment the main object of interest in each of them~\cite{maskrcnn,bolya2019yolact}. Bottom-up methods first label pixels with embedding vectors and then cluster pixels with similar embeddings into instance masks~\cite{neven2019instance,cheng2020panoptic}. In both types, main limitations usually arise in crowded regions, where objects are close to or occlude each other. So much so that many of the new state-of-the-art methods explicitly tackle those weaknesses~\cite{ke2021deep, yuan2021robust}. The dataset we propose here has three key aspects that make it particularly relevant for studying instance segmentation in those challenging cases. Please refer to Figure~\ref{fig:segmentation_samples} for visual examples that illustrate those. First, instances only belong to one class. This renders the training and analysis of models less cumbersome, as there is no interference between classes of different frequencies during the training, and no averaging of performance metrics across them. Second, although only one class is present, instances have varied appearances and poses, and are sometimes already tricky to extract from the background. Furthermore, occlusions are frequent, constituting challenging cases. The fact that instances of a same class have high interactions, sometimes leading each other to be split in disconnected parts, stresses greatly current instance segmentation methods. Third, instance masks provided are very precise. Those annotations have been semi-automatically annotated as reported in~\cite{niels2022}. All in all, we believe this dataset provides a good compromise that is challenging for state-of-the-art models, yet practical to study models with. \begin{figure*}[t] \centering \begin{tabular}{c@{\;}c} \includegraphics[width=0.48\textwidth]{figures/segmentation_samples/00.png}& \includegraphics[width=0.48\textwidth]{figures/segmentation_samples/01.png}\\ \includegraphics[width=0.48\textwidth]{figures/segmentation_samples/02.png}& \includegraphics[width=0.48\textwidth]{figures/segmentation_samples/03.png}\\ \end{tabular} \caption{Samples of annotated images from the instance segmentation task (cropped around annotated instances). Annotated instances are highlighted in distinct colors.} \label{fig:segmentation_samples} \end{figure*} \subsubsection{Dataset} The task uses the \textbf{DeepSport dataset} presented in Section~\ref{sec:deepsport-dataset}, with every images used individually\footnote{Recall that dataset items, the \emph{raw-instants}, are composed of multiple images}, and provided in the COCO format~\cite{lin2014microsoft}. The \textit{train} and \textit{val} subsets contain respectively 223 and 37 images sampled uniformly from the first set of arenas presented in Table~\ref{tab:arena_sets}. They contain respectively 1674 and 344 annotations. The \textit{test} subset contains 64 images coming from the last three arenas of Table~\ref{tab:arena_sets}. It contains 477 annotated humans. For the competition, participants were evaluated on the 84 images coming from the challenge set introduced in Section~\ref{sec:deepsport-dataset}. The number of annotated humans is kept secret. \subsubsection{Metrics} Because the mAP metric is well established in instance segmentation~\cite{lin2014microsoft,cordts2016cityscapes}, and because we are mostly interested about segmentation quality, we focus on it as main metric. The version specific to instance segmentation (sometimes referred to as segm\_mAP) is different from that of object detection (bbox\_mAP). Indeed, it uses the intersection-over-union (IoU) of predicted and ground-truth masks rather than bounding boxes to compute each intermediate \rm{AP} curve. This way, good segmentation ($\rm{IoU} \ge 0.80$) is strongly rewarded while low-quality segmentation ($\rm{IoU} \approx 0.55$) is not. Because only one class is present, there is no averaging between the metrics of frequent and rare classes. Our \rm{mAP} simply looks like \begin{equation} \rm{mAP} = \frac{1}{10}\sum_{\tau\in[0.50:0.05:0.95]} \rm{AP}@\tau \end{equation} $\rm{AP}@\tau$ being the area under the precision-recall curve, when considering as true-positives the predicted masks that have $\rm{IoU}>\tau$ with any ground-truth mask. ($\tau \ge 0.5$ prevents one-to-many mappings) \subsubsection{Baseline and results} The code base we provide\footnote{\url{https://github.com/DeepSportRadar/instance-segmentation-challenge}} is built on MMDet \cite{mmdetection}. Our baseline consists of a Mask-RCNN model \cite{maskrcnn}, with a ResNeXt 101 backbone \cite{xie2017aggregated} and default configuration, trained for 20 epochs. This model reaches a mAP of $0.51$. Relying on MMDet gives contestants the possibility to use a wide range of renowned and top-performing models off-the-shelf. \subsection{Player re-identification} \label{sec:reid} Person re-identification \cite{Ye2022DeepLF}, or simply ReID, is a person retrieval task which aims at matching an image of a person-of-interest, called \textit{the query}, with other person images within a large database, called \textit{the gallery}, captured from various camera viewpoints. ReID has important applications in smart cities, video-surveillance and sports analytics, where it is used to perform person retrieval or tracking. The objective of the DeepSportradar player ReID task is to re-identify players, coaches and referees across images captured successively from the same moving camera during a basketball game, as illustrated in Figure \ref{fig:player_reid_samples}. Compared to traditional street surveillance type re-identification datasets \cite{Miao2019PoseGuidedFA, Zheng2015ScalablePR, Zheng2017UnlabeledSG, li2014deepreid, Zheng2016MARSAV}, the DeepSportradar ReID dataset is challenging because players from the same team have very similar appearance, which makes it hard to tell them apart. However, as opposed to standard ReID datasets, all images are captured by the same camera, from the same point of view. ReID has gained more and more attention recently, with several works proposing state-of-the-art methods based on global \cite{fu2020unsupervised, he2020fastreid, Luo2019BagOT, Wang2018LearningDF}, or part-based \cite{Sun2018BeyondPM, Li2021DiversePD} feature extractor. Other works introduced alternative ReID tasks, such as occluded ReID \cite{Miao2019PoseGuidedFA} or video-based ReID \cite{Zheng2016MARSAV}. Multiple frameworks were also open-sourced to support further research on supervised \cite{torchreid, he2020fastreid} or unsupervised ReID \cite{ge2020selfpaced}. \input{tables/reid_dataset_stats} \begin{figure} \centering \includegraphics[width=\columnwidth]{figures/player_reid_samples.png} \caption{The player re-identification task: illustration of some correctly retrieved gallery samples for four players of interest given as queries.} \label{fig:player_reid_samples} \end{figure} \subsubsection{Dataset} The DeepSportradar player ReID dataset was built using 99 short video sequences from 97 different professional basketball games of the LNB~proA league played in 29 different arenas. This dataset of short basketball video sequences was originally introduced for the VIPriors 2021 workshop~\cite{vipriors}, and is different from the DeepSport dataset described in Section \ref{sec:datasets}. These sequences are 20 frames long, with a frequency of 10 frames per second (FPS), and they contain on average 10 different tracklets, i.e. identities. Therefore, the dataset contains a wide variety of players and sportswear appearance, within multiple arenas with different illumination and court appearance. Image crops\footnote{We refer to these image crops as "player thumbnails" for conciseness, without mentioning coaches and referees, because the large majority of these thumbnails actually depicts players.} from players, coaches and referees have been extracted within each of these frames. The resulting re-identification dataset is composed of 18.703 thumbnails divided into three subsets: train, test and challenge set, as summarized in Table \ref{tab:reid_dataset_stats}. Similar to other ReID datasets, the subset used for evaluating model performance, namely the test and challenge set, are split into a query and a gallery set. For these sets, we chose thumbnails from the $1^{st}$ frame of the sequence as queries, and remaining thumbnails from the $2^{nd}$ to the $20^{th}$ frame as galleries. Labels from the challenge are kept secret to avoid any cheating in the DeepSportradar Player ReID challenge. \subsubsection{Metrics} Two standard retrieval evaluation metrics are used to compare different ReID models: the mean average precision \cite{Zheng2015ScalablePR} (mAP), and the cumulative matching characteristics (CMC) \cite{Wang2007ShapeAA} at Rank-1 and Rank-5. The mAP is used to assess the average retrieval performance considering all ranked gallery samples. The Rank-K accuracy is the probability that at least one correct match appears in the top-K ranked retrieved results. Participants to the DeepSportradar Player ReID challenge are ranked according to their mAP score on the challenge set. \subsubsection{Baseline and results} Person re-identification is generally formulated as a metric learning task \cite{Ye2022DeepLF}. Firstly, a feature vector, also called "embedding", is extracted for each image in the dataset using a feature extractor. Secondly, the query to gallery similarity scores are measured as the pairwise euclidean distance of these features vectors in the embedding space. To address the DeepSportradar ReID challenge, we provide a simple CNN-based feature extractor as a baseline. This feature extractor was implemented using the Open-ReID\footnote{https://github.com/Cysu/open-reid} framework, a lightweight library of person re-identification, open-sourced for research purpose. Open-ReID aims to provide a uniform interface for different datasets, a full set of models and evaluation metrics. The baseline employed a ResNet-50 \cite{He2016DeepRL} CNN as backbone and is trained with a classification objective: the model tries to predict each sample identity among the 436 identities in the training set. The model is trained for 50 epochs with an SGD optimizer and a cross-entropy loss. Training batches are made of 64 players thumbnails, all resized to $256\times128$. We refer readers to our open-source toolbox on GitHub\footnote{\url{https://github.com/DeepSportRadar/player-reidentification-challenge}} for more details about the baseline architecture and training setup. The baseline achieves \textbf{65\% mAP}, \textbf{90\% Rank-1} and \textbf{96\% Rank-5} on the testing set of the DeepSportradar Player ReID dataset. \section{Conclusions} This paper has introduced two new datasets, namely the DeepSport dataset and the Basketball ReID dataset both acquired during professional basketball games with the Keemotion/Synergy Automated Camera System™. Together with these datasets, four CV tasks have been set up: the Ball 3D localization, Camera calibration, Player instance segmentation and Player re-identification. For each task, the dataset, the metrics and the baseline have been specified. The aim of this contribution was to provide a high-quality sports dataset framework where images, camera parameters and annotations are available and built close to the actual game recording setup, therefore providing an unique tool to experiment methods and solutions on real world settings.
{ "attr-fineweb-edu": 2.263672, "attr-cc_en_topic": 0, "domain": "arxiv" }
BkiUa0jxK6-gDz87Nu7b
\section{Introduction: Sport Climbing and the 2020 Olympics} \label{intro.sec} While outdoor rock climbing has a diverse and lengthy global history, competitive sport climbing\footnote{In rock climbing, the term ``sport climbing'' has historically been employed to mean climbing on natural rock with permanently-placed protective gear. In this paper, we use ``sport climbing'' to refer to the multi-discipline event of competitive climbing done on artificial walls.} is relatively new, with the earliest records of competitions being held on artificial walls in the 1980s \cite{Kiewa}. The discipline of competitive sport climbing has since become increasingly bifurcated from more traditional rock climbing, not only though changes in ethics (i.e., concern with Leave No Trace and other moral constraints), purpose (i.e., immersion in and/or ``conquering'' of nature), and location (i.e., on natural vs.\ artificial walls), but also through the increase of active governing bodies, organizations, and institutional logics surrounding competitive sport climbing \cite{BR19,Kiewa}. Competitive sport climbing is seen as the ``rationalization'' or ``quantification'' of rock climbing \cite{Heywood,Kiewa}, an assertion that has been made uniquely visible by the choice to include sport climbing in the 2020 Tokyo Olympic Games, and through the subsequent choice of scoring format. In 2015, the International Federation of Sport Climbing (IFSC) and the Japanese Mountaineering Association (JMA) proposed that sport climbing be added to the roster of the 2020 Tokyo Olympic Games \cite{Adie}. This proposal was then approved by International Olympic Committee (IOC) at the 129th IOC Session in Rio de Janeiro, allowing sport climbing into the 2020 Tokyo Olympic Games alongside skateboarding, baseball/softball, karate, and surfing \cite{ioc}. This series of events followed a documented power struggle between the IOC and the IFSC, as the IFSC sought legitimization for sport climbing through Olympic inclusion \cite{BR19}. Seeking greater levels of funding, professionalization, and notoriety---and having been less than successful with its solo ventures to do so---the IFSC ``allowed the IOC to obtain a degree of power over the sport and traded its autonomy to some extent'' to partially fulfill these aims \cite[p.\ 1684]{BR19}. Although Olympic sport climbing remains governed by the IFSC, the IOC has imposed certain organizational standards onto sport climbing, which have resulted in tensions of ethics and questions of the IFSC's organizational power in the broader climbing community \cite{BR19}.\footnote{For a comprehensive history of international competitive sport climbing and the organizational tension between the IFSC and the IOC, as well as the implications of the Olympic Agenda 2020, please see both Bautev and Robinson \cite{BR19} and Thorpe and Wheaton \cite{TW19}.} One of these frequently-questioned standards is the development of the IFSC sport climbing combined format, which was influenced by an IOC recommendation (Degun, as cited in \cite{BR19}). Currently, IFSC sport climbing consists of three individual disciplines that are then scored in a combined format. These disciplines include speed climbing, bouldering, and lead climbing. Speed climbing is a race-format event in which two athletes compete for the fastest time on a 15m fixed route \cite{rules2021}. Speed climbing rewards the fastest time but also rewards precision, as false starts or falls are automatic losses in finals \cite{rules2021}. Bouldering consists of athletes ``solving'' either four (in qualifications) or three (in finals) boulder ``problems'' roughly 4.5m in height, where they are rewarded for completing these climbs in the least number of attempts, with a halfway ``zone hold'' to further separate the field through partial attempts \cite{rules2021}. Climbing here is done unroped, one at a time, and without safety equipment save for padded mats. Bouldering rewards athleticism, strength, quick-thinking, and adaptability. Lead climbing is perhaps the event that most outwardly resembles traditional rock climbing. Athletes compete one at a time, roped, on a 15m wall on a progressively-difficult course \cite{rules2021}. The athlete to climb the highest wins the event; if two climbers reach the same point on the wall, the quickest athlete is rewarded (with a total available climbing time of six minutes) \cite{rules2021}. Lead climbing rewards endurance and precision---once an athlete falls, their attempt is over. In each discipline, athletes are assigned a ranked score \cite{rules2021}. Despite the wishes of the IFSC, the IOC only granted one metal per gender to sport climbing, citing limitations due to crowding if each discipline were to have its own medal \cite{BD}. This format brings each athlete's performance in all three disciplines together under one score---the combined format. The combined format remains controversial among the climbing community, and it was publicly denounced by numerous high-profile climbers, including eventual sport climbing Olympians Adam Ondra and Jakob Schubert \cite{BR19,BD}. The controversy surrounds both the clustering of the three disciplines into one event, as well as the specific inclusion of speed climbing in the event in general. In particular, speed climbing is noted as the ``outlier discipline'' and the ``proverbial wrench in the whole system,'' as ``it's a discipline of climbing that resembles very little traits of outdoor climbing'' \cite{BD}. And yet the inclusion of speed climbing was necessary in order to offer all sport climbing athletes equal possibility of participation in the Olympic Games while also working with the single-medal quota imposed by the IOC \cite{BD}. Whether or not speed climbing is a ``legitimate'' form of climbing is not relevant to the conversation of scoring, save from the cascade effect of scoring outcomes that came from its inclusion in the combined format. The interesting part of the question of whether or not speed climbing should be part of the combined format---or what happens to the scoring when it is---lies with the fact that traditionally, speed climbing has been dominated by athletes who are notably uncompetitive in the other two disciplines \cite{BD}. While there are similar discrepancies among individual athletes' skills in bouldering and lead climbing, there is more athlete crossover in those two disciplines than into the speed discipline. However, the announcement of the combined format resulted in many sport climbers taking seriously all three disciplines, and some very respectable all-around climbers have since emerged in both the women's and men's fields. The inclusion of speed climbing in the combined format likely informed the choice of the current multiplicative scoring system, though finding official documentation of this process proves to be difficult \cite{plastic}. The current scoring for the combined format uses a multiplicative system that takes into account a climber's ranked score (through ``overall ranking points'') in each discipline \cite{rules2018}. This multiplicative system was introduced in April of 2018 following an IFSC Rules Modification in advance of the debut of the Combined format at the 2018 IFSC Climbing World Championships in Innsbruck, Austria \cite{BD,rules2018}). This scoring system has been noted as confusing and anti-climactic, and it was even publicly misunderstood as favouring all-around athletes at first glance \cite{BD,epic}. Despite the possible perception of a combined event being structured to reward consistency across disciplines, the multiplicative system works such that climbers are actually rewarded for being dominant in one discipline, as opposed to being all-around athletes. As Black Diamond \cite{BD} explains, ``[e]ven just one first place finish significantly increases your chances of having a low score, which [favours] the best climbers.'' Our paper herein deals with the possibilities of alternative scorings for the combined format. These alternative scorings include the currently-used multiplicative scoring method, the additive scoring method used prior to 2018, and a new approach that we term the \emph{square root method}. \subsection{Our Contributions} The remainder of this paper is organized as follows. In Section \ref{multi.sec}, we provide a brief summary of multi-event scoring methods, including the additive ranking-based scoring systems that are the main subject of this paper. Section \ref{2020.sec} analyzes the results of the 2020 Olympics, comparing the product-based ranking that was used there to the more traditional sum-based rankings. In Section \ref{improved.sec}, we introduce and analyze a square-root based ranking system, which can be viewed as a compromise between the two other scoring methods. We also discuss how additive ranking-based scoring methods can conveniently be implemented using precomputed \emph{scoring tables}. Finally, Section \ref{summary.sec} summarizes our findings and conclusions. \section{Multi-event Scoring Methods} \label{multi.sec} There are many sporting events where the final standings are based on multiple disciplines or on multiple stages of the same discipline. For example, the men's decathlon consists of ten different track and field events. A diving competition may consist of five or six dives (each dive is termed a \emph{round}). It is very common to derive a score for each round of the competition and then compute the sum of each competitor's scores to obtain the final standings. The scores of each round are numerical values that typically fall within some prespecified range. There are a much smaller number of sports where the outcomes of each discipline or round are used only to determine a \emph{ranking} of the competitors, and the final outcome only depends on these rankings (sometimes the ranking are called \emph{ordinals}). In this paper, we will refer to such a scoring system as an \emph{ranking-based scoring system}. Sport climbing was introduced as an Olympic sport at the 2020 Games (which were held in 2021 due to the Covid-19 pandemic). As discussed in Section \ref{intro.sec}, sport climbing consists of three disciplines: speed, bouldering and lead. Each climber competes in all three disciplines, and the final rankings are determined by \emph{multiplying} the placements in each discipline (the lowest score determines the ultimate winner). In other sports using ranking-based scoring systems, it is more common to compute the \emph{sum} of the rankings in the component disciplines. We now present a general mathematical description of certain ranking-based scoring systems based on an additive function. Suppose a sporting event consists of $s$ stages and there are $n$ competitors, each of whom competes in each stage. For $1 \leq j \leq n$ and $1 \leq i \leq s$, let $r_{j,i}$ denote the \emph{rank} of the $j$th player in the $i$th stage (a rank is an integer between $1$ and $n$). The \emph{rank vector} for player $j$ is the $s$-tuple $\mathbf{r}_j = (r_{j,1} , \dots , r_{j,s})$. For convenience, and to simplify the discussion, we assume that there are no ties in any stage, so each $n$-tuple $(r_{1,i} , \dots , r_{n,i})$ is a permutation of $\{1, \dots , n\}$, for $1 \leq i \leq s$. Let $f: \{1, \dots , n\} \rightarrow \mathbb{R}^+ \cup \{0\}$ be a monotone increasing function; we call $f$ the \emph{score function}.\footnote{The score function is monotone increasing because we want an $i$th-place finish in any give stage to score less than than an $(i+1)$st-place finish in the same stage (a lower score is better).} The most common choice for a score function is the linear function $f(j)= j$ for $1 \leq j \leq n$. The \emph{$f$-score} of player $j$ is the quantity \[ \mathsf{score}_j = \sum_{i=1}^s f(r_{j,i}). \] The final ranking of the $n$ competitors is determined by sorting the list of values $\mathsf{score}_j$ in \emph{increasing} order. We note that there may also need to be a tie-breaking mechanism, if $\mathsf{score}_j = \mathsf{score}_k$ for some $j \neq k$. The above definition gives equal weight to each stage. A generalization is to specify a \emph{weight vector} $(w_1, \dots , w_s)$ and define the final scores to be \[ \mathsf{score}_j = \sum_{i=1}^s w_i f(r_{j,i}). \] We will call this a \emph{weighted score}. Observe that we obtain the original formula if $w_1 = \dots = w_s = 1$; we could call such a score an \emph{unweighted score}. \begin{Example} {\rm Prior to 2004, figure skating used a weighted additive ranking-based scoring system. Each figure skating competition consisted of a \emph{short program} and a \emph{long program}. The score function was the linear function $f(j)= j$ (for $j = 1,24$), but the long program received twice the weight of the short program. The rank in the long program was used to break any ties that arose. One consequence of this scoring system is that any of the top three skaters in the short program could win the competition by subsequently winning the long program. For example, the total score of a skater who finished third in the short program and first in the long program is $1 \times 3 + 2 \times 1 = 5$. The best total score any other skater could obtain would be $1 \times 1 + 2 \times 2 = 5$; however, in this case, the skater who won the long program would then be declared the winner. } \end{Example} \begin{Example} \label{sailing.exam} {\rm A sailing regatta typically consists of a series of races using an additive ranking-based scoring system. The score function is often, but not always, the linear function $f(j)= j$. In the 1968 Olympics, the scoring function was defined as follows: $f(1) = 0$, $f(2)= 3$, $f(3) = 5.7$, $f(4) = 8$, $f(5) = 10$, $f(6) = 11.7$, and $f(j) = j+6$ if $j \geq 7$. From this scoring system, it can be inferred that a first- and third-place finish in two races is considered to be better than two second-place finishes, because $0 + 5.7 < 2 \times 3$.} \end{Example} \begin{Example} {\rm The William Lowell Putnam Mathematical Competition \cite{Putnam} is an annual written mathematics competition for undergraduate mathematics students in Canada and the U.S. Each student receives a score between $0$ and $120$. This determines a ranking of all the students who took part in the competition. Before 2019, each university could also designate a 3-person team before the competition took place. The team score was obtained by computing the sum of the rankings of the three students in the team.\footnote{Starting in 2019, a new team scoring system was used, in which the sums of the \emph{scores} of the team members was used.}} \end{Example} We already mentioned in Section \ref{intro.sec} that sport climbing in the 2020 Olympics used a product-based scoring system. The score function is the usual linear function $f(j)= j$, but a player's score is the product of their three scores (or rankings): \[ \mathsf{score}_j = \prod_{i=1}^3 r_{j,i}. \] However, we can easily see that there is an equivalent ranking function for sport climbing that is just an additive ranking-based scoring system with a \emph{nonlinear} scoring function. We have \begin{eqnarray*} \mathsf{score}_j \leq \mathsf{score}_k &\Leftrightarrow & \prod_{i=1}^3 r_{j,i} \leq \prod_{i=1}^3 r_{k,i}\\ & \Leftrightarrow & \ln \left( \prod_{i=1}^3 r_{j,i}\right) \leq \ln \left( \prod_{i=1}^3 r_{k,i} \right)\\ & \Leftrightarrow & \sum_{i=1}^3 \ln r_{j,i} \leq \sum_{i=1}^3 \ln r_{k,i}. \end{eqnarray*} Thus, if we use a \emph{logarithmic} scoring function, $f(j) = \ln j$, then the resulting additive ranking-based scoring system yields the same final rankings as the previously described multiplicative ranking-based scoring system. For computations, it is probably simplest to compute the product of three rankings as opposed to computing the sum of their logarithms. Furthermore, it most sports announcers on television would probably not be comfortable discussing logarithms. However, we can gain some insight into the properties of the sum vs the product scoring system by recognizing that the product scoring system is just an additive system with a different score function. We will discuss this further in Section \ref{improved.sec}. \section{Analysis of Results at the 2020 Olympics} \label{2020.sec} Tables \ref{tab3}--\ref{tab6} show two possible sets of outcomes of the sport climbing preliminaries and finals (men's and women's) at the 2020 Olympics. Note that at the 2020 Olympics, preliminaries were used to reduce the number of competitors from 20 to 8. The finals then involved the eight best climbers from the preliminaries.\footnote{It is important to note that in the men's final, the seventh place qualifier, B.\ Mawem, did not compete, due to a torn bicep injury sustained during his last climb of the qualification round. B.\ Mawem was marked Did Not Start (DNS) for the finals round, but according to IOC rules he still finished 8th overall.} First, we give the official rankings as determined by multiplying the discipline rankings. The second (hypothetical) set of rankings uses the more common method of computing the sum of the discipline rankings. Each triple of discipline rankings consists of the rankings for speed, bouldering and lead (in that order). \begin{table} \caption{Sport Climbing Men's Preliminaries Sum vs Product Rankings} \label{tab3} \begin{center} \begin{tabular}{|l|c||r|r||r|r|} \hline Name & Discipline Rankings & Product & Ranking & Sum & Ranking \\ \hline \hline M.\ Mawem & $(3 , 1 , 11)$ & 33 & 1 & 15 & 2 \\\hline Narasaki & $(2 , 2 , 14)$ & 56 & 2 & 18 & 3\\\hline Duffy & $(6 , 5 , 2)$ & 60 & 3 & 13 & 1\\\hline Schubert & $(12 , 7 , 1)$ & 84 & 4 & 20 & 4\\\hline Ondra & $(18 , 3 , 4)$ & 216 & 5& 25 & 6\\\hline Gin\'{e}s L\'{o}pez & $(7 , 14 , 3)$ & 294 & 6 & 24 & 5\\\hline B.\ Mawem & $(1 , 18 , 20)$ & 360 & 7 &\textcolor{red}{39} & \textcolor{red}{17}\\\hline Coleman & $(10 , 11 , 5)$ & 550 &8 & 26 & 7\\\hline \hline Megos & $(19 , 6 , 6)$ & 684 & 9 & 31 & 9 \text{(tie)} \\\hline Chon & $(5 , 10 , 16)$ & 800 & 10& 31 & 9 \text{(tie)}\\\hline Khaibullin & $(4 , 17 , 13)$ & 884 & 11& 34 & 12\\\hline Hojer & $(11 , 9 , 9)$ & 891 & 12& \textcolor{red}{29} & \textcolor{red}{8}\\\hline Rubtsov & $(16 , 4 , 15)$ & 960 &13& 35 & 13 \text{(tie)}\\\hline Pan & $(20 , 8 , 7)$ & 1120 &14 & 35 & 13 \text{(tie)}\\\hline Piccolruaz & $(8 , 13 , 12)$ & 1248 & 15 & 33 & 11\\\hline Cosser & $(9 , 16 , 10)$ & 1440 & 16 & 35 & 13 \text{(tie)}\\\hline McColl & $(14 , 15 , 8)$ & 1680 & 17 & 37 & 16\\\hline Harada & $(15 , 12 , 17)$ & 3060 & 18 & 44 & 18\\\hline Fossali & $(13 , 19.5 , 18)$ & 4563 & 19 & 50.5 & 19\\\hline O'Halloran & $(17 , 19.5 , 19)$ & 6298.5 & 20 & 55.5 & 20\\\hline \end{tabular} \end{center} \end{table} \begin{table} \caption{Sport Climbing Men's Finals Sum vs Product Rankings} \label{tab4} \begin{center} \begin{tabular}{|l|c||r|r||r|r|} \hline Name & Discipline Rankings & Product & Ranking & Sum & Ranking \\ \hline \hline Gin\'{e}s L\'{o}pez & $(1,7,4)$ & 28 & 1 & 12 & 2 \text{(tie})\\\hline Coleman & $(6,1 , 5)$ & 30 &2 & 12 & 2 \text{(tie)}\\\hline Schubert & $(7,5 , 1)$ & 35 & 3 & 13 & 7\\\hline Narasaki & $(2 , 3,6)$ & 36 & 4 & 11 & 1\\\hline M.\ Mawem & $(3 , 2,7)$ & 42 & 5 & 12 & 2 \text{(tie)} \\\hline Ondra & $(4,6,2)$ & 48 & 6& 12 & 2 \text{(tie)}\\\hline Duffy & $(5,4,3)$ & 60 & 7 & 12 & 2 \text{(tie)}\\\hline \end{tabular} \end{center} \end{table} \begin{table} \caption{Sport Climbing Women's Preliminaries Sum vs Product Rankings} \label{tab5} \begin{center} \begin{tabular}{|l|c||r|r||r|r|} \hline Name & Discipline Rankings & Product & Ranking & Sum & Ranking \\ \hline \hline Garnbret & $(14,1,4)$ & 56 & 1 & 19 & 3 \\\hline Seo & $(17,5,1)$ & 85 & 2 & 23 & 6\\\hline Nonaka & $(4,8,3)$ & 96 & 3 & 15 & 1\\\hline Noguchi & $(9,3,6)$ & 162 & 4 & 18 & 2\\\hline Raboutou & $(12,2,8)$ & 192 & 5& 22 & 4 \text{(tie)} \\\hline Pilz & $(11,9,2)$ & 198 & 6 & 22 & 4 \text{(tie)}\\\hline Miroslaw & $(1 , 20,19)$ & 380 & 7 & \textcolor{red}{40} & \textcolor{red}{16 \text{(tie)}}\\\hline Jaubert & $(2,13,15)$ & 390 &8 & \textcolor{red}{30} & \textcolor{red}{9}\\\hline \hline Meshkova & $(15,6,5)$ & 450 & 9 & \textcolor{red}{26} & \textcolor{red}{7} \\\hline Coxsey & $(16,4,13)$ & 832 & 10& 33 & 11 \\\hline Condie & $(7,11,11)$ & 847 & 11& \textcolor{red}{29} & \textcolor{red}{8}\\\hline Song & $(3,19,18)$ & 1026 & 12& 40 & 16 \text{(tie)}\\\hline Chanourdie & $(8,15,9)$ & 1080 &13& 32 & 10 \\\hline Yip & $(6,16,12)$ & 1152 &14 & 34 & 12 \text{(tie)} \\\hline Rogora & $(19,7,10)$ & 1330 & 15 & 36 & 14\\\hline Klingler & $(10,10,14)$ & 1400 & 16 & 34 & 12 \text{(tie)}\\\hline Kaplina & $(5,18,17)$ & 1530 & 17 & 40 & 16 \text{(tie)}\\\hline Krampl & $(18,14,7)$ & 1764 & 18 & 39 & 15\\\hline MacKenzie & $(13 , 12,16)$ & 2496 & 19 & 41 & 19\\\hline Sterkenburg & $(20,17,20)$ & 6800 & 20 & 57 & 20\\\hline \end{tabular} \end{center} \end{table} \begin{table} \caption{Sport Climbing Women's Finals Sum vs Product Rankings} \label{tab6} \begin{center} \begin{tabular}{|l|c||r|r||r|r|} \hline Name & Discipline Rankings & Product & Ranking & Sum & Ranking \\ \hline \hline Garnbret & $(5,1,1)$ & 5 & 1 & 7 & 1 \\\hline Nonaka & $(3,3,5)$ & 45 &2 & 11 & 2 \\\hline Noguchi & $(4,4,4)$ & 64 & 3 & 12 & 3\\\hline Miroslaw & $(1,8,8)$ & 64 & 4 & 17 &7 \text{(tie)}\\\hline Raboutou & $(7,2,6)$ & 84 & 5 & 15 & 5 \text{(tie)} \\\hline Jaubert & $(2,6,7)$ & 84 & 6& 15 & 5 \text{(tie)}\\\hline Pilz & $(6,5,3)$ & 90 & 7 & 14 & 4\\\hline Seo & $(8,7,2)$ & 112 & 8 & 17 & 7 \text{(tie)}\\\hline \end{tabular} \end{center} \end{table} It could be argued that the choice to multiply discipline rankings was made by the IFSC because under the former additive system, no speed specialists would qualify for the finals \cite{plastic}. This is because many of the top speed specialists are not as competitive in the more technical disciplines of bouldering and lead climbing, something that is most evident with men's competitor B. Mawem, and women's competitor Miroslaw. Thus, the modified (multiplicative) system was employed, in the hope that this would lead to some speed specialists qualifying for the final round. The main effect of multiplying rankings is that it places a very large premium on finishing first in a discipline. (For example, if a first place finish is replaced by a second place finish, then the overall score is doubled.) A competitor who finishes first in a discipline is very likely to qualify for the finals, even if their finish is close to the bottom in the other two disciplines. So, in this respect, the modified scoring system achieved its desired goal. Unfortunately, at the same time, it could be argued that multiplying rankings tends to undervalue to a certain extent an all-around climber who is quite good but not outstanding in all three disciplines. This seems directly contrary to what should be the purpose of a combined event. Tables \ref{tab3}--\ref{tab6} illustrate how the outcomes would have differed in the 2020 Olympics in the two scoring systems. It should be emphasized that many of the final rankings are similar or roughly similar in both scoring systems. But examining the differences and identifying the outliers is interesting and instructive, particularly if we wish to develop a scoring system more reflective of the aims of a combined format (i.e., finding the best overall athlete). First, we look at the men's preliminary round. Recall that the purpose of the preliminary round is to reduce the size of the field from 20 to 8. From Table \ref{tab3}, we see that the main difference between the results of two scoring methods is that B.\ Mawem would have been replaced by Hojer if a sum-based scoring system had been used.\footnote{B.\ Mawem did not compete in the finals due to injury, so there were only seven climbers in the final.} B.\ Mawem won the speed discipline and finished 18th and 20th in the other two disciplines. Thus, he ended up in the top eight according to the product score, but he would have finished 17th out of 20 if the sum score had been used instead. Had the sum score been used, B.\ Mawem would have been replaced by Hojer, who had three ``middle-of-the-pack'' finishes, namely, 11th, 9th and 9th. We think that despite the decisions of the IFSC, many people would find it problematic that someone who combined a first place finish with two very low finishes should advance to the final in a combined event, while someone who is competent but not outstanding an all three disciplines is passed over. When we turn to the men's finals, we find that the three top placements were obtained by the three competitors who won one of the disciplines. Again, the premium for finishing first in a discipline outweighs significantly poorer placements in the other disciplines, something that we see even with the Gold medal finisher Gin\'{e}s L\'{o}pez, who scored first in speed climbing yet last in the bouldering round. Gin\'{e}s L\'{o}pez's first-place finish in speed was a particular boon for him as he is known as a lead climber, the discipline in which he finished fourth \cite{plastic}. If we instead computed the sum of the three rankings, we see that the fourth-place finisher (Narasaki) would have won. Narasaki is notable as a cross-disciplinary athlete, and perhaps the most accomplished non-speed specialist in speed climbing; he developed and popularized a unique way of moving through the speed route, deemed the ``Tomoa skip'' \cite{Samet}. (Ironically, it was this move that he fumbled in his race against Gin\'{e}s L\'{o}pez, ultimately resulting in his second-place speed finish.) Narasaki would have been followed by five climbers who tied for second place (of course a tie-breaking mechanism would be employed to separate the finishes of these five climbers, e.g., a count-back to their qualification standings). The third-place finisher (Schubert) would have finished last if the final ranking had been based on the sum of the rankings. In the women's competition, similar differences can be found between the two scoring systems. In the preliminary round, the 7th and 8th finishers (Miroslaw and Jaubert) both combined one high finish (first or second) with two below average finishes, but this enabled them to qualify for the finals. The seventh place finisher, Miroslaw, won the speed event but finished 19th and 20th in the other two disciplines. She would have finished in a three-way tie for 16th if the sum scoring system had been used. The two competitors (namely, Meshkova and Condie) who would have replaced Miroslaw and Jaubert (had the sum scoring system been used) both had more ``uniform'' finishes in the three disciplines. The three medalists in women's final would have been the same under both scoring systems. Garnbret was the favourite to win the gold medal, and indeed she was exceptionally dominant with her discipline rankings of $(5,1,1)$. Nonaka and Noguchi were both particularly consistent cross-discipline, which was also not unexpected. Indeed, just before the Games, Nonaka became one of the first women's non-speed specialists to podium at an IFSC World Cup speed event \cite{Walker}. The most significant discrepancy is that Miroslaw would have finished in a tie for last place under the sum system, instead of finishing fourth (she combined a first-place finish with two last-place finishes in the finals). As a true speed specialist, Miroslaw is perhaps the most overt example of the speed vs bouldering/lead inconsistency. In her final speed run, Miroslaw set a new women's speed world record with a time of 6.84 seconds. However, in the bouldering final round shortly after, Miroslaw was unable to score a single zone (finishing with a score of 0), and she fell off the lead route at hold 9+, only a quarter of the progress of first-place finisher Garnbret. In another notable ranking shift, Pils would have moved up from 7th place to 4th place. The women's finals also included two two-way ties, one for third and fourth place, and one for fifth and sixth place. A two-way tie was broken by comparing the head-to-head finishes; the climber who won two out of three of these was ranked higher \cite{rules2021}. Thus Raboutou was ranked above Jaubert and Noguchi was ranked above Miroslaw. As a result, Noguchi won the bronze medal. It is interesting to compare the rankings of Noguchi and Miroslaw: Noguchi's rankings were $(4,4,4)$ while Miroslaw's were $(1,8,8)$. In this particular case, the tie-breaking mechanism favoured the climber with three equal finishes over the climber with one first-place and two last place finishes. Arguably this is a reasonable result, but it is contrary to the apparent goal of the product system to give preference to first-place results. \section{An Alternative Ranking-Based Scoring Method} \label{improved.sec} As can be seen from the analysis of the data sets that we carried out in Section \ref{2020.sec}, the product scoring system enabled some speed specialists to achieve much higher finishes (indeed, any climber who finishes first in one discipline and low in the other two disciplines would benefit greatly). But we question whether this is completely fair in the context of a combined event. On the other hand, the sum scoring system tends to undervalue first place finishes. For example a first and third place finish is treated the same as two second-place finishes (see Example \ref{sailing.exam} for a different scoring method in the setting of sailing competitions that intentionally avoids this scenario). Thus we think it would be useful to consider an alternate scoring system. To further illustrate, let us consider when a first- and last-place is ranking in two disciplines is equivalent to two ``similar'' rankings. If we use the product scoring system, we see that a first- and 20th-place finish is equivalent to a fourth- and fifth-place finish, because $1 \times 20 = 4 \times 5$. On the other hand, in the sum scoring system, a first- and 20th-place finish is equivalent to a 10th- and 11th-place finish, because $1 + 20 = 10 + 11$. It might be preferable to have a scoring system that achieves more of a compromise, e.g., one in which a first- and 20th-place finish is (roughly) equivalent to a 6th- and 7th-place finish, or a 7th- and 8th-place finish. For the time being, it will be useful to consider additive systems, so we will speak in terms of the logarithmic scoring function instead of the product system (recall that they lead to identical rankings). Given the drawbacks of the linear and logarithmic scoring functions, we could instead consider a scoring function that is between them. Basically, we would seek a concave function, but one that is less concave than the logarithm function.\footnote{A real-valued function is \emph{concave} if the line segment between any two points on the graph of the function lies below the graph between the two points.} \begin{figure}[t] \begin{center} \includegraphics[width=3.0in]{scoring.png} \end{center} \[ \begin{array}{ll} \text{black} & g_1(j) \text{ (linear) }\\ \text{blue} & g_2(j) \text{ (square root) }\\ \text{red} & g_3(j) \text{ (logarithmic) }\\ \end{array} \] \caption{Three possible scoring functions} \label{scoringfunctions.fig} \end{figure} The function $f(j) =\sqrt {j}$ is a reasonable choice. (More generally, we could employ a function of the form $f(j) = j^f$, where $0<f < 1$ is a fixed real number.) We compute \begin{eqnarray*} \sqrt{1} + \sqrt{20} &=& 5.472\\ \sqrt{7} + \sqrt{8} &=& 5.474. \end{eqnarray*} Thus, this square root scoring function treats a 7th- and 8th-place finish as basically equivalent to a first- and 20th-place finish, as we suggested above. In Figure \ref{scoringfunctions.fig}, we illustrate how a square root scoring function lies between a linear and a logarithmic scoring function. We want to compare the three functions $f_1(j) = j$, $f_2(j) = \sqrt{j}$ and $f_3(j) = \ln j$. To obtain a nice visual comparison, we adjust the three functions via affine transformations so that $f_1(1) = f_2(1) = f_3(1) = 1$ and $f_1(20) = f_2(20) = f_3(20) = 20$. The affine transformations do not affect any resulting rankings. So we are actually comparing the following three functions in Figure \ref{scoringfunctions.fig}: \begin{eqnarray*} g_1(j) &=& j\\ g_2(j) &=& \frac{\sqrt{20}- 20}{\sqrt{20}- 1} + \frac{19}{\sqrt{20}- 1} \sqrt{j}\\ g_3(j) &=& 1 + \frac{19}{\ln 20}\ln j. \end{eqnarray*} It is interesting to compare the rankings obtained from the square root scoring function to the rankings we obtained previously in Section \ref{2020.sec}. For the men's preliminaries, the square root ranking would have qualified Megos while demoting B.\ Mawem. Hojer would have moved up, but only to 9th place instead of 8th place. (See Table \ref{tab7} for the complete rankings.) For the men's finals, the square root ranking is essentially the same as the sum ranking: Narasaki would be ranked first, followed by Gin\'{e}s L\'{o}pez and Coleman (see Table \ref{tab8}). The women's results are found in Tables \ref{tab9} and \ref{tab10}. In the preliminaries, Meshkova would have qualified instead of Miroslaw, as with the sum ranking. However, the second swap in the sum ranking (Jaubert for Condie) does not occur in the square root ranking. Thus, the square root ranking is a compromise between the sum and product rankings. For the finals, the three medal winners are identical in all three scoring methods considered. \begin{table} \caption{Sport Climbing Men's Preliminaries Rankings Including Square Root Scores} \label{tab7} \begin{center} \begin{tabular}{|l|c|c||r|r|r|} \hline Name & Discipline Rankings & \multicolumn{1}{|c||}{$\sqrt{\quad}$ score} & \multicolumn{3}{|c|}{Overall Rankings}\\ & & & Product & Sum & $\sqrt{\quad}$ \\ \hline \hline M.\ Mawem & $(3 , 1 , 11)$ & 6.049& 1 & 2 & 1\\\hline Narasaki & $(2 , 2 , 14)$ & 6.570 & 2 & 3& 3\\\hline Duffy & $(6 , 5 , 2)$ & 6.100 & 3 & 1& 2\\\hline Schubert & $(12 , 7 , 1)$ & 7.110 & 4 & 4& 4\\\hline Ondra & $(18 , 3 , 4)$ &7.975 & 5& 6& 5\\\hline Gin\'{e}s L\'{o}pez & $(7 , 14 , 3)$ &8.119 & 6 & 5& 6\\\hline B.\ Mawem & $(1 , 18 , 20)$ & 9.715& 7 & \textcolor{red}{17}& \textcolor{red}{11}\\\hline Coleman & $(10 , 11 , 5)$ & 8.715& 8 & 7& 7\\\hline \hline Megos & $(19 , 6 , 6)$ & 9.258 & 9 & 9 \text{(tie)} & \textcolor{red}{8}\\\hline Chon & $(5 , 10 , 16)$ & 9.398 & 10& 9 \text{(tie)}& 10\\\hline Khaibullin & $(4 , 17 , 13)$ & 9.729 & 11& 12& 12\\\hline Hojer & $(11 , 9 , 9)$ & 9.317 & 12& \textcolor{red}{8}& 9\\\hline Rubtsov & $(16 , 4 , 15)$ & 9.873 & 13& 13 \text{(tie)}& 13\\\hline Pan & $(20 , 8 , 7)$ & 9.946 &14 & 13 \text{(tie)}& 15\\\hline Piccolruaz & $(8 , 13 , 12)$& 9.898 & 15 & 11& 14\\\hline Cosser & $(9 , 16 , 10)$ & 10.162 & 16 & 13 \text{(tie)}& 16\\\hline McColl & $(14 , 15 , 8)$ & 10.443 & 17 & 16& 17\\\hline Harada & $(15 , 12 , 17)$ & 11.460 & 18 & 18& 18\\\hline Fossali & $(13 , 19.5 , 18)$ & 12.264 & 19 & 19& 19\\\hline O'Halloran & $(17 , 19.5 , 19)$ & 12.898 & 20 & 20& 20\\\hline \end{tabular} \end{center} \end{table} \begin{table} \caption{Sport Climbing Men's Finals Rankings Including Square Root Scores} \label{tab8} \begin{center} \begin{tabular}{|l|c|c||r|r|r|} \hline Name & Discipline Rankings & \multicolumn{1}{|c||}{$\sqrt{\quad}$ score} & \multicolumn{3}{|c|}{Overall Rankings}\\ & & & Product & Sum & $\sqrt{\quad}$ \\ \hline \hline Gin\'{e}s L\'{o}pez & $(1,7,4)$ & 5.646& 1 & 2 \text{(tie)} & 2\\\hline Coleman & $(6,1,5)$ & 5.686 & 2 & 2 \text{(tie)}& 3\\\hline Schubert & $(7,5,1)$ & 5.882 & 3 & 7& 6\\\hline Narasaki & $(2,3,6)$ & 5.596 & 4 & 1& 1\\\hline M.\ Mawem & $(3,2,7)$ &5.792 & 5& 2 \text{(tie)}& 4\\\hline Ondra & $(4,6,2)$ &5.864 & 6 & 2 \text{(tie)}& 5\\\hline Duffy & $(5,4,3)$ &5.968 & 7 & 2 \text{(tie)} & 7\\\hline \end{tabular} \end{center} \end{table} \begin{table} \caption{Sport Climbing Women's Preliminaries Rankings Including Square Root Scores} \label{tab9} \begin{center} \begin{tabular}{|l|c|c||r|r|r|} \hline Name & Discipline Rankings & \multicolumn{1}{|c||}{$\sqrt{\quad}$ score} & \multicolumn{3}{|c|}{Overall Rankings}\\ & & & Product & Sum & $\sqrt{\quad}$ \\ \hline \hline Garnbret & $(14,1,4)$ & 6.742 & 1 & 3 & 2\\\hline Seo & $(17,5,1)$ & 7.359 & 2 & 6& 4\\\hline Nonaka & $(4,8,3)$ & 6.560& 3 & 1& 1\\\hline Noguchi & $(9,3,6)$ & 7.182 & 4 & 2& 3\\\hline Raboutou & $(12,2,8)$ &7.707 & 5& 4 \text{(tie)}& 5\\\hline Pilz & $(11,9,2)$ &7.731 & 6 & 4 \text{(tie)}& 6\\\hline Miroslaw & $(1 , 20,19)$ & 9.831& 7 & \textcolor{red}{16}& \textcolor{red}{12}\\\hline Jaubert & $(2,13,15)$ & 8.893& 8 & \textcolor{red}{9} & 8\\\hline \hline Meshkova & $(15,6,5)$ & 8.559 & 9 & \textcolor{red}{7} & \textcolor{red}{7}\\\hline Coxsey & $(16,4,13)$ & 9.606 & 10& 11 & 10\\\hline Condie & $(7,11,11)$ & 9.279 & 11& \textcolor{red}{8} & 9\\\hline Song & $(3,19,18)$ & 10.334 & 12& 16 \text{(tie)} & 16\\\hline Chanourdie & $(8,15,9)$ & 9.701 & 13 & 10 & 11\\\hline Yip & $(6,16,12)$ & 9.914 &14 & 12 \text{(tie)} & 13\\\hline Rogora & $(19,7,10)$ & 10.167 & 15 & 14 & 15\\\hline Klingler & $(10,10,14)$ & 10.066 & 16 & 12 \text{(tie)}& 14\\\hline Kaplina & $(5,18,17)$ & 10.602 & 17 & 16 \text{(tie)} & 17\\\hline Krampl & $(18,14,7)$ & 10.630 & 18 & 15 & 18\\\hline MacKenzie & $(13 , 12,16)$ & 11.070 & 19 & 19& 19\\\hline Sterkenburg & $(20,17,20)$ & 13.067 & 20 & 20& 20\\\hline \end{tabular} \end{center} \end{table} \begin{table} \caption{Sport Climbing Women's Finals Rankings Including Square Root Scores} \label{tab10} \begin{center} \begin{tabular}{|l|c|c||r|r|r|} \hline Name & Discipline Rankings & \multicolumn{1}{|c||}{$\sqrt{\quad}$ score} & \multicolumn{3}{|c|}{Overall Rankings}\\ & & & Product & Sum & $\sqrt{\quad}$ \\ \hline \hline Garnbret & $(5,1,1)$ & 4.236 & 1 & 1 & 1\\\hline Nonaka & $(3,3,5)$ & 5.700 & 2 & 2& 2\\\hline Noguchi & $(4,4,4)$ & 6.000 & 3 & 3& 3\\\hline Miroslaw & $(1,8,8)$ & 6.657 & 4 & 7 \text{(tie)} & 7\\\hline Raboutou & $(7,2,6)$ &6.509 (tie) & 5& 5 \text{(tie)}& 5 \text{(tie)}\\\hline Jaubert & $(2,6,7)$ &6.509 (tie) & 6 & 5 \text{(tie)}& 5 \text{(tie)}\\\hline Pilz & $(6,5,3)$ & 6.418 & 7 & 4 & 4\\\hline Seo & $(8,7,2)$ &6.888 & 8 & 7 \text{(tie)} & 8\\\hline \end{tabular} \end{center} \end{table} \subsection{Complexity of the Scoring Methods} The three scoring methods we have analyzed are similar in that they can all be viewed as additive ranking-based systems. The only difference is that they employ different scoring functions. Obviously the usual sum-based system is the simplest to understand. As we pointed out, the product-based system is equivalent to computing the sum of the logarithms of the rankings, and we have proposed a new scoring system based on computing the sum of the square roots of the rankings. Viewers and commentators who are not mathematically inclined might not be comfortable discussing square roots and logarithms. However, it simple to generate a \emph{scoring table} which lists the points awarded for each ranking in a discipline (e.g., rankings of 1--20 in the preliminaries and 1--8 in the finals). To avoid having to deal with fractions, the relevant logarithms or square roots could be multiplied by 100 or 1000, say, and then rounded to the nearest integer. (This of course would not affect the rankings obtained from these scores.) It should be noted that using a score table is common in other athletic events, e.g., the decathlon, where there are ten different ``performance tables,'' one for each event. The decathlon performance tables convert a time or distance into a numerical score for that event. Two possible scoring tables are listed in Table \ref{tab11}. We have used the function $100 \ln n$ for the logarithm-based scores (which yield rankings equivalent to the product-based scoring method) and the function $100 \sqrt{n} - 100$ for the the square-root based scores. These logarithm-based scores range from $0$ to $347$, while the square-root based scores range from $0$ to $300$, which seem to be a reasonable range of possible values. Of course, these scoring tables could be adjusted to any desired range by applying a suitable affine transformation, which would preserve any rankings obtained using them. \begin{table}[t] \caption{Two Possible Scoring Tables} \label{tab11} \begin{center} \begin{tabular}{|r|c|c|} \hline Ranking & Square root-based score & Logarithm-based score \\ \hline \hline 1 & 0 & 0\\ \hline 2 & 41 & 69\\ \hline 3 & 73 & 110\\ \hline 4 & 100 & 139\\ \hline 5 & 124 & 161\\ \hline 6 & 145 & 179 \\ \hline 7 & 165 & 195\\ \hline 8 & 183 & 208\\ \hline 9 & 200 & 220\\ \hline 10 & 216 & 230\\ \hline 11 & 232 & 240\\ \hline 12 & 246 & 248\\ \hline 13 & 261 & 256 \\ \hline 14 & 274 & 264\\ \hline 15 & 287 & 271\\ \hline 16 & 300 & 277\\ \hline 17 & 312 & 283\\ \hline 18 & 324 & 289\\ \hline 19 & 336 & 294\\ \hline 20 & 347 & 300\\ \hline \end{tabular} \end{center} \end{table} \section{Summary and Conclusion} \label{summary.sec} In a follow-up analysis of the 2020 Games by Plastic Weekly, host Tyler Norton expressed the opinion that one of the downfalls of a product-based scoring system was that the mental load of constantly calculating standings eclipsed the performances of many climbers \cite{plastic}. Although the dynamic nature of the multiplicative system resulted in the dramatic shifting of the men's podium based on Schubert's final lead climb, ultimately this took away from the ``raw climbing experience,'' making it ``less about the climbing'' \cite{plastic}. While we empathize with the inherent tensions in and complications of quantifying and rationalizing rock climbing in general, we don't believe that \emph{all} possible sport climbing scoring formats would be equally as distracting as the multiplicative format. With an appropriate scoring system, competitive sport climbing can still be ``about the climbing.'' There are also numerous comments that could be made about the intricacies of each discipline's scoring formats, including how the effects of athlete injury (B. Mawem), speed false starts (Duffy) and slips (Narasaki), and unexpected bouldering performances (Ondra) affected the final rankings, particularly in the men's event.\footnote{For a detailed play-by-play of both the men's and women's finals, please see Climber News \url{https://www.climbernews.com/mens-olympic-climbing-final-results/} and \url{https://www.climbernews.com/womens-olympic-climbing-final-results/}.} Another factor to consider is not athlete performance, but the effect of route-setting (the design and construction of the climbing problems and routes), especially in the bouldering rounds with reference to what has been called ``the Janja problem'' (i.e., that Garnbret so far exceeds the field in bouldering that building bouldering problems that achieve decent separation is difficult. We see this in the women's finals, where Garnbret topped two of the three problems and no one else topped a single problem.) We have intentionally limited the scope of our paper, and thus we do not discuss the effects of the individual-discipline scoring and competition rules, though there are likely interesting conversations to pick up about the differences in speed scoring between qualifications (best time) and finals (head-to-head knockout format), as well as bouldering (four boulder problems in qualifiers, three boulder problems in finals). The men's final placements have been the subject of much public scrutiny, and indeed much of the conversation surrounding alternative scoring formats post-Olympics was oriented toward trying to manufacture a podium that was more ``publicly acceptable'' than the actual final results \cite{plastic}. (This is not the case with the women's finals, which were widely considered to be an accurate reflection of the field.) It is important to clarify that we are not attempting to add to this conversation to detract from the accomplishments of the winners, but to speak to the disconnect between event aims and goals (i.e., a combined ``overall'' event) and scoring (i.e., scoring that rewards outstanding performance in one discipline). Finally, our recommendation for a square root-based scoring method is primarily a theoretical exercise, as the combined event as structured at the Tokyo 2020 Games will not be held again at the Paris 2024 Games \cite{plastic}. Instead, the IOC has granted an additional medal to each gender, and the IFSC has decided to run a speed-only event, and a second event combining bouldering and lead climbing \cite{Paris}. While this does not completely remove the problem of calculating overall scoring across two disciplines, there is much more traditional overlap between bouldering and lead climbing than between speed and either of the other two disciplines. In part, this Paris 2024 two-event format should produce better and more interesting results in both speed climbing and in bouldering/lead climbing. Nevertheless, we wanted to attempt to respond to John Burgman's claim of ``I don't know if anybody has thought of a better system yet,'' with a possible alternative scoring system (namely, the square-root based system) that addresses the problems of over-valuing single-discipline wins and under-valuing cross-discipline consistency \cite{plastic}. \section*{Acknowledgement} We would like to thank Bill Martin for helpful discussions.
{ "attr-fineweb-edu": 2.130859, "attr-cc_en_topic": 0, "domain": "arxiv" }
BkiUfcc5qhLBzMvmWHQq
\section{Introduction} A salesman has a map of $n$ cities that they want to visit, including the roads between the cities and how long each road is. Their aim is to start at their home, visit each city and then return home. To avoid wasting time, they want to visit each city exactly once and travel via the shortest route. So what route should the salesman take? This is an instance of the Travelling Salesman Problem (TSP). More generally, this problem takes an undirected graph $G = (V, E)$ of $n$ vertices connected by $m$ weighted edges and returns the shortest cycle which passes through every vertex exactly once, known as a Hamiltonian cycle, if such a cycle exists. If no Hamiltonian cycle exists, we should report that no Hamiltonian cycle has been found. The length or cost of an edge is given by an $n \times n$ matrix $C = (c_{ij})$ of positive integers, known as a cost matrix. This problem has a number of applications, ranging from route finding as in the story above to circuit board drilling~\cite{grotschel1991}. Unfortunately, the salesman might have to take a long time in order to find the shortest route. The TSP has been shown to be NP-hard~\cite[Chapter $3$]{lawler1985}, suggesting that even the best algorithms for exactly solving it must take a superpolynomial amount of time. Nevertheless, the importance of the problem has motivated a substantial amount of classical work to develop algorithms for solving it provably more efficiently than the na\"ive algorithm which checks all $O((n-1)!)$ of the potential Hamiltonian cycles in the graph. Here we consider whether these algorithms can be accelerated using quantum computational techniques. Grover's famous quantum algorithm~\cite{grover96} for fast unstructured search can be applied to the na\"ive classical algorithm to achieve a runtime of $O(\sqrt{n!})$, up to polynomial terms in $n$. However, the best classical algorithms are already substantially faster than this. For many years, the algorithm with the best proven worst-case bounds for the general TSP was the Held-Karp algorithm~\cite{held1962}, which runs in $O(n^22^n)$ time and uses $O(n2^n)$ space. This algorithm uses the fact that for any shortest path, any subpath visiting a subset of vertices on that path must be the shortest path for visiting those vertices. Held and Karp used this to solve the TSP by computing the length of the optimal route for starting at vertex $1$, visiting every vertex in a set $S \subseteq V$ and finishing at a vertex $l \in S$. Denoting the length of this optimal route $D(S, l)$, they showed that this distance could be computed as \[ D(S, l) = \begin{cases} c_{1l} & \text{if } S = \{l\}\\ \min_{m \in S \setminus \{l\}}\left[D(S \setminus \{l\}, m) + c_{ml}\right] & \text{otherwise.} \end{cases} \] Solving this relation recursively for $S=V$ would result in iterating over all $O((n-1)!)$ Hamiltonian cycles again, but Held and Karp showed that the relation could be solved in $O(n^22^n)$ time using dynamic programming. Bj{\"o}rklund et al.\ \cite{bjorklund2008} developed on this result, showing that modifications to the Held-Karp algorithm could yield a runtime of \[ O((2^{k + 1} - 2k - 2)^{n/(k + 1)}\poly(n)), \] where $k$ is the largest degree of any vertex in the graph; this bound is strictly less than $O(2^n)$ for all fixed $k$. Unfortunately, it is not known whether quantum algorithms can accelerate general dynamic programming algorithms. Similarly, it is unclear whether TSP algorithms based around the standard classical techniques of branch-and-bound~\cite{little1963} or branch-and-cut~\cite{padberg1991} are amenable to quantum speedup. Here we apply known quantum-algorithmic techniques to accelerate more recent classical TSP algorithms for the important special case of bounded-degree graphs. We say that a graph $G$ is degree-$k$ if the maximal degree of any vertex in $G$ is at most $k$. A recent line of research has produced a sequence of algorithms which improve on the $O^*(2^n)$ runtime of the general Held-Karp algorithm in this setting, where the notation $O^*(c^n)$ omits polynomial factors in $n$. First, Eppstein presented algorithms which solve the TSP on degree-3 graphs in time $O^*(2^{n/3}) \approx O^*(1.260^n)$, and on degree-4 graphs in time $O^*((27/4)^{n/3}) \approx O^*(1.890^n)$~\cite{eppstein2007}. The algorithms are based on the standard classical technique of {\em backtracking}, an approach where a tree of partial solutions is explored to find a complete solution to a problem (see Section \ref{sec:backtrack} for an introduction to this technique). Following subsequent improvements~\cite{iwama07,liskiewicz14}, the best classical runtimes known for algorithms based on this general approach are $O^*(1.232^n)$ for degree-3 graphs~\cite{xiao2016degree3}, and $O^*(1.692^n)$ for degree-4 graphs~\cite{xiao2016degree4}, in each case due to Xiao and Nagamochi. All of these algorithms use polynomial space in $n$. An algorithm of Bodlaender et al.~\cite{bodlaender15} achieves a faster runtime of $O^*(1.219^n)$ for solving the TSP in degree-3 graphs, which is the best known; however, this algorithm uses exponential space. Similarly, an algorithm of Cygan et al.~\cite{cygan11} solves the TSP in unweighted degree-4 graphs in $O^*(1.588^n)$ time and exponential space. Both of these algorithms use an approach known as cut-and-count, which is based on dynamic programming, so a quantum speedup is not known for either algorithm. In the case where we have an upper bound $L$ on the maximum edge cost in the graph, Bj\"orklund~\cite{bjorklund14} gave a randomised algorithm which solves the TSP on arbitrary graphs in $O^*(1.657^n L)$ time and polynomial space, which is an improvement on the runtime of the Xiao-Nagamochi algorithm for degree-4 graphs when $L$ is subexponential in $n$. Again, the techniques used in this algorithm do not seem obviously amenable to quantum speedup. Here we use a recently developed quantum backtracking algorithm~\cite{montanaro2015} to speed up the algorithms of Xiao and Nagamochi in order to find Hamiltonian cycles shorter than a given upper bound, if such cycles do exist. We run this algorithm several times, using binary search to specify what our upper bound should be, in order to find the shortest Hamiltonian cycle and solve the Travelling Salesman Problem. In doing so, we achieve a near-quadratic reduction in the runtimes: \begin{theorem} There are bounded-error quantum algorithms which solve the TSP on degree-3 graphs in time $O^*(1.110^n \log L \log \log L)$ and on degree-4 graphs in time $O^*(1.301^n \log L \log \log L)$, where $L$ is the maximum edge cost. The algorithms use $\poly(n)$ space. \label{thm:deg34} \end{theorem} In this result and elsewhere in the paper, ``bounded-error'' means that the probability that the algorithm either doesn't find a Hamiltonian cycle when one exists or returns a non-optimal Hamiltonian cycle is at most $1/3$. This failure probability can be reduced to $\delta$, for arbitrary $\delta > 0$, by repeating the algorithm $O(\log 1/\delta)$ times. Also here and throughout the paper, $\log$ denotes $\log$ base 2. Note that the time complexity of our algorithms has some dependence on $L$, the largest edge cost in the input graph. However, this dependence is quite mild. For any graph whose edge costs are specified by $w$ bits, $L \le 2^w$. Thus terms of the form $\polylog(L)$ are at most polynomial in the input size. Next, we show that degree-5 and degree-6 graphs can be dealt with via a randomised reduction to the degree-4 case. \begin{theorem} \label{thm:deg6} There is a bounded-error quantum algorithm which solves the TSP on degree-5 and degree-6 graphs in time $O^*(1.680^n\log L \log \log L)$. The algorithm uses $\poly(n)$ space. \end{theorem} We summarise our results in Table \ref{tab:summary}. \begin{table*} \begin{center} \begin{tabular}{|c|c|c|c|} \hline Degree & Quantum & Classical (poly space) & Classical (exp space) \\ \hline 3 & $O^*(1.110^n \polylog L)$ & $O^*(1.232^n)$~\cite{xiao2016degree3} & $O^*(1.219^n)$~\cite{bodlaender15} \\ 4 & $O^*(1.301^n \polylog L)$ & $O^*(1.692^n)$~\cite{xiao2016degree4}, $O^*(1.657^n L)$~\cite{bjorklund14} & $O^*(1.588^n)$~\cite{cygan11}\\ 5, 6 & $O^*(1.680^n \polylog L)$ & $O^*(1.657^n L)$~\cite{bjorklund14} & --- \\ \hline \end{tabular} \end{center} \caption{Runtimes of our quantum algorithms for a graph of $n$ vertices with maximum edge cost $L$, compared with the best classical algorithms known.} \label{tab:summary} \end{table*} \subsection{Related work} Surprisingly little work has been done on quantum algorithms for the TSP. D\"orn \cite{dorn2007} proposed a quantum speedup for the TSP for degree-3 graphs by applying amplitude amplification \cite{brassard1997} and quantum minimum finding~\cite{durr1996} to Eppstein's algorithm, and stated a quadratic reduction in the runtime. However, we were not able to reproduce this result (see Section~\ref{sec:backtrack} below for a discussion). Very recently, Mandr{\`a}, Guerreschi and Aspuru-Guzik~\cite{mandra2016} developed a quantum algorithm for finding a Hamiltonian cycle in time $O(2^{(k-2)n/4})$ in a graph where {\em every} vertex has degree $k$. Their approach reduces the problem to an Occupation problem, which they solve via a backtracking process accelerated by the quantum backtracking algorithm~\cite{montanaro2015}. The bounds obtained from their algorithm are $O(1.189^n)$ for $k = 3$ and $O(1.414^n)$ for $k=4$, in each case a bit slower than the runtimes of our algorithms; for $k \ge 5$, their algorithm has a slower runtime than Bj\"orklund's classical algorithm~\cite{bjorklund14}. Marto\v{n}\'ak, Santoro and Tosatti \cite{martonak2004} explored the option of using quantum annealing to find approximate solutions for the TSP. Rather than solve the problem purely through quantum annealing, they simplify their Ising Hamiltonian for solving the TSP and use path-integral Monte Carlo \cite{barker1979} to run their model. While no bounds on run time or accuracy were strictly proven, they concluded by comparing their algorithm to simulated annealing via the Metropolis-Hastings algorithm \cite{metropolis1953} and the Kernighan-Lin algorithm for approximately solving the TSP \cite{kernighan1970}. Their results showed that ad hoc algorithms could perform better than general simulated or quantum annealing, but quantum annealing could outperform simulated annealing alone. However, they noted that simulated annealing could perform better than in their analysis if combined with local search heuristics~\cite{martin1996}. Chen et al.\ \cite{chen11} experimentally demonstrated a quantum annealing algorithm for the TSP. Their demonstration used a nuclear-magnetic-resonance quantum simulator to solve the problem for a graph with 4 vertices. \subsection{Organisation} We start by introducing the main technique we use, backtracking, and comparing it with amplitude amplification. Then, in Section \ref{sec:bd}, we describe how this technique can be used to accelerate classical algorithms of Xiao and Nagamochi for graphs of degree at most 4~\cite{xiao2016degree3,xiao2016degree4}. In Section \ref{sec:higher-bound}, we extend this approach to graphs of degree at most 6. \section{Backtracking algorithms for the TSP} \label{sec:backtrack} Many of the most efficient classical algorithms known for the TSP are based around a technique known as backtracking. Backtracking is a general process for solving constraint satisfaction problems, where we have $v$ variables and we need to find an assignment to these variables such that they satisfy a number of constraints. A na\"{i}ve search across all possible assignments will be inefficient, but if we have some local heuristics then we can achieve better performance by skipping assignments that will definitely fail. Suppose each variable can be assigned one value from $[d] := \{0, \dots,d-1\}$. We define the set of partial assignments for $v$ variables as $\mathcal{D} := (\{1,\dots,v\}, [d])^j$, where $j \leq v$, with the first term denoting the variable to assign and the second denoting the value it is assigned. Using this definition for partial assignments, backtracking algorithms have two components. The first is a predicate, $P:\mathcal{D} \rightarrow \{\text{true}, \text{false}, \text{indeterminate}\}$, which takes a partial assignment and returns true if this assignment will definitely result in the constraints being satisfied regardless of how everything else is assigned, false if the assignment will definitely result in the constraints being unsatisfied, and indeterminate if we do not yet know. The second is a heuristic, $h:\mathcal{D} \rightarrow \{1,\dots,v\}$, which takes a partial assignment and returns the next variable to assign. The following simple recursive classical algorithm takes advantage of $P$ and $h$ to solve a constraint satisfaction problem. We take as input a partial assignment (initially, the empty assignment). We run $P$ on this partial assignment; if the result is true then we return the partial assignment, and if it is false then we report that no solutions were found in this recursive call. We then call $h$ on this partial assignment and find out what the next variable to assign is. For every value in $i \in [d]$ we can assign that variable, we recursively call the backtracking algorithm with $i$ assigned to that variable. If one of the recursive calls returns a partial assignment then we return that assignment, otherwise we report that no solutions were found in this call. We can view this algorithm as exploring a tree whose vertices are labelled with partial assignments. The size of the tree determines the worst-case runtime of the algorithm, assuming that there is no assignment that satisfies all the constraints. It is known that this backtracking algorithm can be accelerated using quantum techniques: \begin{theorem}[Montanaro \cite{montanaro2015}] \label{thm:backtrack} Let $\mathcal{A}$ be a backtracking algorithm with predicate $P$ and heuristic $h$ that finds a solution to a constraint satisfaction problem on $v$ variables by exploring a tree of at most $T$ vertices. There is a quantum algorithm which finds a solution to the same problem with failure probability $\delta$ with $O(\sqrt{T}v^{3/2}\log v\log(1/\delta))$ uses of $P$ and $h$. \end{theorem} Montanaro's result is based on a previous algorithm by Belovs \cite{belovs2013,belovs13a}, and works by performing a quantum walk on the backtracking tree to find vertices corresponding to assignments which satisfy the constraints. The reader familiar with \cite{montanaro2015} may note that the definition of the set of partial assignments $\mathcal{D}$ is different to that given there, in that it incorporates information about the ordering of assignments to variables. However, it is easy to see from inspection of the algorithm of \cite{montanaro2015} that this change does not affect the stated complexity of the algorithm. It is worth noting that more standard quantum approaches such as amplitude amplification~\cite{brassard1997} will not necessarily achieve a quadratic speedup over the classical backtracking algorithm. Amplitude amplification requires access to a function $f:\{0,1\}^k \rightarrow \{\text{true}, \text{false}\}$ and a guessing function $\mathcal{G}$. If the probability of $\mathcal{G}$ finding a result $x \in \{0,1\}^k$ such that $f(x) = \text{true}$ is $p$, then amplitude amplification will succeed after $O(1/\sqrt{p})$ applications of $f$ and $\mathcal{G}$~\cite{brassard1997}. \begin{figure*} \subfloat[A perfectly balanced backtracking tree.\label{fig:balanced-tree}]{ \begin{tikzpicture} \tikzstyle{vertex}=[draw,shape=circle] \path (0,0) node[vertex](b0){} (-4,-1) node[vertex](b1){} (4,-1) node[vertex](b2){} (-6,-2) node[vertex](b3){} (-2,-2) node[vertex](b4){} (2,-2) node[vertex](b5){} (6,-2) node[vertex](b6){} (-7,-3) node[vertex](b7){$l_0$} (-5,-3) node[vertex](b8){$l_1$} (-3,-3) node[vertex](b9){$l_2$} (-1,-3) node[vertex](b10){$l_3$} (1,-3) node[vertex](b11){$l_4$} (3,-3) node[vertex, accepting](b12){$l_5$} (5,-3) node[vertex](b13){$l_6$} (7,-3) node[vertex](b14){$l_7$}; \draw (b7) -- (b3) -- (b1) -- (b0) -- (b2) -- (b6) -- (b14); \draw (b8) -- (b3); \draw (b9) -- (b4) -- (b1); \draw (b10) -- (b4); \draw (b11) -- (b5) -- (b2); \draw (b12) -- (b5); \draw (b13) -- (b6); \end{tikzpicture} } \hfill \subfloat[An unbalanced backtracking tree.\label{fig:unbalanced-tree}]{ \begin{tikzpicture} \tikzstyle{vertex}=[draw,shape=circle] \path (0,0) node[vertex](b0){} (-4,-1) node[vertex](b1){} (4,-1) node[vertex](b2){} (-6,-2) node[vertex](b3){$l_0$} (-2,-2) node[vertex](b4){$l_1$} (2,-2) node[vertex](b5){$l_2$} (6,-2) node[vertex](b6){} (5,-3) node[vertex](b7){$l_3$} (7,-3) node[vertex](b8){} (6,-4) node[vertex](b9){$l_4$} (8,-4) node[vertex](b10){} (7,-5) node[vertex, accepting](b11){$l_5$} (9,-5) node[vertex](b12){} (8,-6) node[vertex](b13){$l_6$} (10,-6) node[vertex](b14){$l_7$}; \draw (b3) -- (b1) -- (b0) -- (b2) -- (b6) -- (b8) -- (b10) -- (b12) -- (b14); \draw (b4) -- (b1); \draw (b5) -- (b2); \draw (b7) -- (b6); \draw (b9) -- (b8); \draw (b11) -- (b10); \draw (b13) -- (b12); \end{tikzpicture} } \caption{Example backtracking trees, where $l_5$ is a leaf corresponding to a solution to a constraint satisfaction problem. In the perfectly balanced case of Fig.\ \ref{fig:balanced-tree}, each leaf can be associated with a 3-bit string corresponding to a path to that leaf. But in the unbalanced case of Fig.\ \ref{fig:unbalanced-tree}, specifying a path to a leaf requires 6 bits.} \label{fig:tree} \end{figure*} To apply amplitude amplification, we would need to access the leaves of the tree, as these are the points where the backtracking algorithm is certain whether or not a solution will be found. Thus, for each integer $i$, we would need to find a way of determining the $i$'th leaf $l_i$ in the backtracking tree. In the case of a perfectly balanced tree, such as Fig.\ \ref{fig:balanced-tree}, where every vertex in the tree is either a leaf or has exactly $d$ branches descending from it, such a problem is easy: write $i$ in base $d$ and use each digit of $i$ to decide which branch to explore. But not all backtracking trees are perfectly balanced, such as in Fig.\ \ref{fig:unbalanced-tree}. In these cases, finding leaf $l_i$ is hard as we cannot be certain which branch leads to that leaf. Some heuristic approaches, by performing amplitude amplification on part of the tree, can produce better speedups for certain trees, but do not provide a general speedup on the same level as the quantum backtracking algorithm~\cite{montanaro2015}. It is also worth understanding the limitations of the quantum backtracking algorithm, and why it cannot necessarily speed up all algorithms termed ``backtracking algorithms''~\cite{montanaro2015}. First, a requirement for the quantum algorithm is that decisions made in one part of the backtracking tree are independent of results in another part of the tree, which is not true of all classical algorithms, such as constraint recording algorithms \cite{dechter1990}. Second, the runtime of the quantum algorithm depends on the size of the entire tree. Thus, to achieve a quadratic speedup over a classical algorithm, the algorithm must explore the whole backtracking tree, instead of stopping after finding the first solution or intelligently skipping branches such as in backjumping \cite{dechter1990}. Therefore, it is important to check on a case-by-case basis whether classical backtracking algorithms can actually be accelerated using Theorem \ref{thm:backtrack}. Another limitation of the quantum backtracking algorithm is that often there will be a metric $M:\mathcal{D} \rightarrow \mathbb{N}$ we want the backtracking algorithm to minimise while satisfying the other constraints. This is particularly relevant for the TSP, where the aim is to return the shortest Hamiltonian cycle. Classical backtracking algorithms can achieve this by recursively travelling down each branch of the tree to find results $D_1,\dots,D_d \in \mathcal{D}$ and returning the result that minimises $M$. The quantum backtracking algorithm cannot perform this; it instead returns a solution selected randomly from the tree that satisfies the constraints. In order to achieve a quantum speedup when finding the result that minimises $M$, we can modify the original predicate to prune results which are greater than or equal to a given bound. We then repeat the algorithm in a binary search fashion, updating our bound based on whether or not a solution was found. This will find the minimum after repeating the quantum algorithm at most $O(\log M_{max})$ times, where \[M_{max} = \max\{M(D):D\in \mathcal{D}, P(D) = \text{true}\}.\] We describe this binary search approach in more detail in Sec.\ \ref{sec:deg3speedup}. The intuition behind why backtracking is a useful technique for solving the TSP is that we can attempt to build up a Hamiltonian cycle by determining for each edge in the graph whether it should be included in the cycle (``forced''), or deleted from the graph. As we add more edges to the cycle, we may either find a contradiction (e.g.\ produce a non-Hamiltonian cycle) or reduce the graph to a special case that can be handled efficiently (e.g.\ a collection of disjoint cycles of four unforced edges). This can sometimes allow us to prune the backtracking tree substantially. To analyse the performance of backtracking algorithms for the TSP, a problem size measure is often defined that is at least 0 and at most $n$ (e.g.\ the number of vertices minus the number of forced edges). Note that if there are more than $n$ forced edges then it is impossible to form a Hamiltonian cycle that includes every forced edge, so the number of forced edges is at most $n$. At the start of the backtracking algorithm, there are no forced edges so the problem size is $n$. Each step of the backtracking algorithm reduces the problem size until the size is $0$, at which point either the $n$ forced edges form a Hamiltonian cycle or a Hamiltonian cycle that includes every forced edge cannot be found. A quasiconvex program can be developed based on how the backtracking algorithm reduces the problem size. Solving this quasiconvex problem produces a runtime in terms of the problem size, which can be re-written in terms of $n$ due to the problem size being at most $n$. It was proposed by D\"orn \cite{dorn2007} that amplitude amplification could be applied to speed up the runtime of Eppstein's algorithm for the TSP on degree-3 graphs~\cite{eppstein2007} from $O^*(2^{n/3})$ to $O^*(2^{n/6})$. Amplitude amplification can be used in this setting by associating a bit-string with each sequence of choices of whether to force or delete an edge, and searching over bit-strings to find the shortest valid Hamiltonian cycle. However, as suggested by the general discussion above, a difficulty with this approach is that some branches of the recursion, as shown in Fig.~\ref{fig:size-decrease-by-two}, only reduce the problem size by 2 (as measured by the number of vertices $n$, minus the number of forced edges). The longest branch of the recursion can, as a result, be more than $n/3$ levels deep. In the worst case, this depth could be as large as $n/2$ levels. Specifying the input to the checking function $f$ could then require up to $n/2$ bits, giving a search space of size $O(2^{n/2})$. Under these conditions, searching for the solution via amplitude amplification could require up to $O^*(2^{n/4})$ time in the worst case. To yield a better runtime, we must take more of an advantage of the structure of our search space to avoid instances which will definitely not succeed. The same issue with amplitude amplification applies to other classical algorithms for the TSP which are based on backtracking~\cite{xiao2016degree3,xiao2016degree4}. In the case of the Xiao-Nagamochi algorithm for degree-3 graphs, although the overall runtime bound proven for the problem means that the number of vertices in the tree is $O(2^{3n/10})$, several of the branching vectors used in their analysis have branches that reduce the problem size by less than $10/3$, leading to a branch in the tree that could be more than $3n/10$ levels deep. \begin{figure*} \begin{center} \begin{tikzpicture}[scale=0.8] \tikzstyle{vertex}=[draw,shape=circle,inner sep=0pt,minimum size=15pt] \path (0,0) node[vertex](f0){$a$}; \path (-2,-1) node[vertex](x0){$i$} (-1,-2) node[vertex](x1){$c$} (0,-1) node[vertex](f1){$b$} (1,-2) node[vertex](y1){$d$} (2,-1) node[vertex](y0){$g$}; \path (-1,-3) node[vertex](x2){$e$} (1,-3) node[vertex](y2){$f$}; \path (-3,0) node[vertex](x3){$h$} (-1,0) node[vertex](x4){$j$}; \draw[line width=1.5pt] (f0) -- (f1); \draw[line width=1.5pt] (x2) -- (x1); \draw (x0) -- (x1) -- (f1) -- (y1) -- (y2); \draw (x3) -- (x0) -- (x4); \draw (y0) -- (y1); \path (-5,-5) node[vertex](f0){$a$}; \path (-7,-6) node[vertex](x0){$i$} (-6,-7) node[vertex](x1){$c$} (-5,-6) node[vertex](f1){$b$} (-4,-7) node[vertex](y1){$d$} (-3,-6) node[vertex](y0){$g$}; \path (-6,-8) node[vertex](x2){$e$} (-4,-8) node[vertex](y2){$f$}; \path (-8,-5) node[vertex](x3){$h$} (-6,-5) node[vertex](x4){$j$}; \draw[line width=1.5pt] (f0) -- (f1) -- (x1) -- (x2); \draw[line width=1.5pt] (x3) -- (x0) -- (x4); \draw[line width=1.5pt] (y2) -- (y1) -- (y0); \path (5,-5) node[vertex](f0){$a$}; \path (3,-6) node[vertex](x0){$i$} (4,-7) node[vertex](x1){$c$} (5,-6) node[vertex](f1){$b$} (6,-7) node[vertex](y1){$d$} (7,-6) node[vertex](y0){$g$}; \path (4,-8) node[vertex](x2){$e$} (6,-8) node[vertex](y2){$f$}; \path (2,-5) node[vertex](x3){$h$} (4,-5) node[vertex](x4){$j$}; \draw[line width=1.5pt] (f0) -- (f1) -- (y1); \draw[line width=1.5pt] (x0) -- (x1) -- (x2); \draw (x3) -- (x0) -- (x4); \draw (y0) -- (y1) -- (y2); \draw[thick, ->] (-2,-3) -- (-3,-4); \draw[thick, ->] (2,-3) -- (3,-4); \end{tikzpicture} \end{center} \caption{An instance of the recursive step in Eppstein's backtracking algorithm for the TSP~\cite{eppstein2007} for a subgraph of a larger graph $G$, with forced edges displayed in bold and branching on edge $bc$. If we force $bc$, then $b$ and $c$ are both incident to two forced edges, so $bd$ and $ci$ cannot be part of the Hamiltonian cycle and can be removed from the graph. After these edges are removed, vertices $i$ and $d$ are both of degree $2$, so in order to reach those vertices the edges $hi$, $ij$, $df$ and $dg$ must also be included in the Hamiltonian cycle. So forcing $bc$ has overall added five edges to the Hamiltonian cycle. On the other hand, if we remove edge $bc$, we find that $b$ and $c$ are vertices of degree $2$, so edges $bd$ and $ci$ must be part of the Hamiltonian cycle. Thus we have only added two more edges to the Hamiltonian cycle. \label{fig:size-decrease-by-two}} \end{figure*} \section{Quantum speedups for the Travelling Salesman Problem on bounded-degree graphs \label{sec:bd}} \label{sec:deg3} Our algorithms are based on applying the quantum algorithm for backtracking (Theorem \ref{thm:backtrack}) to Xiao and Nagamochi's algorithm~\cite{xiao2016degree3}. Before describing our algorithms, we need to introduce some terminology from~\cite{xiao2016degree3} and describe their original algorithm. The algorithm, and its analysis, are somewhat involved, so we omit details wherever possible. \subsection{The algorithm of Xiao and Nagamochi} \label{sec:xndeg3} A graph $G$ is $k$-edge connected if there are $k$ edge-disjoint paths between every pair of vertices. An edge in $G$ is said to be forced if it must be included in the final tour, and unforced otherwise. The set of forced edges is denoted $F$, and the set of unforced edges is denoted $U$. An induced subgraph of unforced edges which is maximal and connected is called a $U$-component. If a $U$-component is just a single vertex, then that $U$-component is trivial. A maximal sequence $\mathcal{C}$ of edges in a $U$-component $H$ is called a circuit if either: \begin{itemize} \item $\mathcal{C} = \{xy\}$ and there are three edge-disjoint paths from $x$ to $y$, \item or $\mathcal{C} = \{c_0, c_1,\dots,c_{m-1}\}$ such that for $0 \leq i < m-1$, there is a subgraph $B_i$ of $H$ such that the only two unforced edges incident to $B_i$ are $c_i$ and $c_{i+1}$. \end{itemize} A circuit is reducible if subgraph $B_i$ for some $i$ is incident to only two edges. In order for $B_i$ to be reached, both edges incident to $B_i$ need to be forced. Forcing one edge in the circuit then means that the other edges can be either forced or removed. The polynomial time and space process by Xiao and Nagamochi to reduce circuits, by forcing and removing alternating edges in the circuit, is known as the {\em circuit procedure} \cite{xiao2016degree3}. Note that each edge can be in at most one circuit. If two distinct circuits $\mathcal{C}, \mathcal{C}'$ shared an edge $e_i$, then there are two possibilities. The first is that there is a subgraph $B_i$ incident to unforced edges $e_i \in \mathcal{C} \cap \mathcal{C}', e_{i+1} \in \mathcal{C} - \mathcal{C}', e_j \in \mathcal{C}' - \mathcal{C}$. In this case, $B_i$ is incident to more than two unforced edges, so neither $\mathcal{C}$ nor $\mathcal{C}'$ are circuits, which is a contradiction. The second is that there is some edge $e_i$ which is incident to distinct subgraphs $B_i, B_i'$ related to $\mathcal{C}, \mathcal{C}'$, respectively. Circuits are maximal sequences, so it cannot be the case that $B_i$ is a subgraph of $B_i'$, otherwise $\mathcal{C}' \subseteq \mathcal{C}$. Now we consider the subgraphs $B_i \cap B_i'$ and $B_i - B_i'$, which must be connected by unforced edges as they are both subgraphs of $B_i$. These unforced edges are incident to $B_i'$, which is a contradiction as they are not part of $\mathcal{C}'$. Let $X$ be a subgraph. We define $\text{cut}(X)$ to be the set of edges that connect $X$ to the rest of the graph. If $|\text{cut}(X)| = 3$, then we say that $X$ is $3$-cut reducible. It was shown by Xiao and Nagamochi~\cite{xiao2016degree3} that, if $X$ is 3-cut reducible, $X$ can be replaced with a single vertex of degree $3$ with outgoing edges weighted such that the length of the shortest Hamiltonian cycle is preserved. The definition of $4$-cut reducible is more complex. Let $X$ be a subgraph such that $\text{cut}(X) \subseteq F$ and $|\text{cut}(X)| = 4$. A solution to the TSP would have to partition $X$ into two disjoint paths such that every vertex in $X$ is in one of the two paths. If $x_1, x_2, x_3$ and $x_4$ are the four vertices in $X$ incident to the four edges in $\text{cut}(X)$, then there are three ways these paths could start and end: \begin{itemize} \item $x_1 \leftrightarrow x_2$ and $x_3 \leftrightarrow x_4$, \item $x_1 \leftrightarrow x_3$ and $x_2 \leftrightarrow x_4$, \item or $x_1 \leftrightarrow x_4$ and $x_2 \leftrightarrow x_3$. \end{itemize} We say that $X$ is $4$-cut reducible if for at least one of the above cases it is impossible to create two disjoint paths in $X$ that include all vertices in $X$. Xiao and Nagamochi defined a polynomial time and space process for applying the above reductions, known as {\em $3/4$-cut reduction}~\cite{xiao2016degree3}. A set of edges $\{e_i\}$ are {\em parallel} if they are incident to the same vertices (note that here we implicitly let $G$ be a multigraph; these may be produced in intermediate steps of the algorithm). If there are only two vertices in the graph, then the TSP can be solved directly by forcing the shortest two edges. Otherwise if at least one of the edges is not forced, then we can reduce the problem by removing the longer unforced edges until the vertices are only adjacent via one edge. This is the process Xiao and Nagamochi refer to as {\em eliminating parallel edges} \cite{xiao2016degree3}. Finally, a graph is said to satisfy the parity condition if every $U$-component is incident to an even number of forced edges and for every circuit $\mathcal{C}$, an even number of the corresponding subgraphs $B_i$ satisfy that $|\text{cut}(B_i) \cap F|$ is odd. We are now ready to describe Xiao and Nagamochi's algorithm. The algorithm takes as input a graph $G = (V, E)$ and a set of forced edges $F \subseteq E$ and returns the length of the shortest Hamiltonian cycle in $G$ containing all the edges in $F$, if one exists. The algorithm is based on four subroutines: {\em eliminating parallel edges}, the {\em 3/4-cut reduction}, {\em selecting a good circuit} and the {\em circuit procedure}, as well as the following lemma: \begin{lemma}[Eppstein~\cite{eppstein2007}] \label{lem:trivial} If every $U$-component in a graph $G$ is trivial or a component of a 4-cycle, then a minimum cost tour can be found in polynomial time. \end{lemma} We will not define the subroutines here in any detail; for our purposes, it is sufficient to assume that they all run in polynomial time and space. The circuit procedure for a circuit $\mathcal{C}$ begins by either adding an edge $e \in \mathcal{C}$ to $F$ or deleting it from the graph, then performing some other operations. ``Branching on a circuit $\mathcal{C}$ at edge $e \in \mathcal{C}$'' means generating two new instances from the current instance by applying each of these two variants of the circuit procedure starting with $e$. The Xiao-Nagamochi algorithm, named $\text{TSP3}$, proceeds as follows, reproduced from~\cite{xiao2016degree3}: \begin{enumerate} \item {\bf If} $G$ is not $2$-edge-connected or the instance violates the parity condition, then return $\infty$; \item {\bf Elseif} there is a reducible circuit $\mathcal{C}$, then return $\text{TSP3}(G', F')$ for an instance $(G',F')$ obtained by applying the circuit procedure on $\mathcal{C}$ started by adding a reducible edge in $\mathcal{C}$ to $F$; \item {\bf Elseif} there is a pair of parallel edges, then return $\text{TSP3}(G',F')$ for an instance $(G',F')$ obtained by applying the reduction rule of eliminating parallel edges; \item {\bf Elseif} there is a $3/4$-cut reducible subgraph $X$ containing at most eight vertices, then return $\text{TSP3}(G',F')$ for an instance $(G',F')$ obtained by applying the $3/4$-cut reduction on $X$; \item {\bf Elseif} there is a $U$-component $H$ that is neither trivial nor a $4$-cycle, then select a good circuit $\mathcal{C}$ in $H$ and return $\min\{\text{TSP3}(G_1,F_1), \text{TSP3}(G_2,F_2)\}$, where $(G_1,F_1)$ and $(G_2,F_2)$ are the two resulting instances after branching on $\mathcal{C}$; \item {\bf Else} [each $U$-component of the graph is trivial or a $4$-cycle], solve the problem directly in polynomial time by Lemma \ref{lem:trivial} and return the cost of an optimal tour. \end{enumerate} Step $1$ of the algorithm checks that the existence of a Hamiltonian cycle is not ruled out, by ensuring that that there are at least two disjoint paths between any pair of vertices and that the graph satisfies the parity condition. Step 2 reduces any reducible circuit by initially forcing one edge and then alternately removing and forcing edges. Step $3$ removes any parallel edges from the graph, and step $4$ removes any circuits of three edges as well as setting up circuits of four edges so that all edges incident to them are forced. Step 5 is the recursive step, branching on a good circuit by either forcing or removing an edge in the circuit and then applying the circuit procedure. The algorithm continues these recursive calls until it either finds a Hamiltonian cycle or $G \setminus F$ is a collection of single vertices and cycles of length $4$, all of which are disjoint from one another, at which point the problem can be solved in polynomial time via step $6$. Xiao and Nagamochi looked at how the steps of the algorithm, and step $5$ in particular as the branching step, reduced the size of the problem for different graph structures. From this they derived a quasiconvex program corresponding to $19$ branching vectors, each describing how the problem size is reduced at the branching step in different circumstances. Analysis of this quasiconvex program showed that the algorithm runs in $O^*(2^{3n/10})$ time and polynomial space \cite{xiao2016degree3}. \subsection{Quantum speedup of the Xiao-Nagamochi algorithm} \label{sec:deg3speedup} Here we describe how we apply the quantum backtracking algorithm to the Xiao-Nagamochi algorithm. It is worth noting that the quantum backtracking algorithm will not necessarily return the shortest Hamiltonian cycle, but instead returns a randomly selected Hamiltonian cycle that it found. Adding constraints on the length of the Hamiltonian cycles to our predicate and running the quantum backtracking algorithm multiple times will allow us to find a solution to the TSP. The first step towards applying the quantum backtracking algorithm is to define the set of partial assignments. A partial assignment will be a list of edges in $G$ ordered by when they are assigned in the backtracking algorithm and paired with whether the assignment was to force or remove the edge. The assignment is denoted $A \in (\{1,\dots,m\}, \{\text{force}, \text{remove}\})^j$, where $j \leq m$. We have $m \le 3n/2$ as $G$ is degree-3. The quantum approach to backtracking requires us to define a predicate $P$ and heuristic $h$, each taking as input a partial assignment. Our predicate and heuristic make use of a reduction function, introduced in \cite{xiao2016degree3}, as a subroutine; this function is described in the next subsection. However it may be worth noting that the algorithm uses the original graph $G$, and partial assignments of it at each stage. Firstly, we describe the $P$ function, which takes a partial assignment $A = ((e_1, A_1),\dots,(e_j, A_j))$ as input: \begin{enumerate} \item Using the partial assignment $A$, apply the reduction function to $(G, F)$ to get $(G', F')$. \item If $G'$ is not $2$-edge-connected or fails the parity condition, then return false. \item If every $U$-component in $G'$ is either trivial or a $4$-cycle, then return true. \item Return indeterminate. \end{enumerate} Step $2$ matches step $1$ of Xiao and Nagamochi's algorithm. Step $3$ is where the same conditions are met as in step $6$ of Xiao and Nagamochi's algorithm, where a shortest length Hamiltonian cycle is guaranteed to exist and can be found in polynomial time classically via Lemma \ref{lem:trivial}. Step $4$ continues the branching process, which together with how the circuit is picked by $h$ and step $2$(c) of the reduction function (qv) matches step $5$ of Xiao and Nagamochi. The $h$ function is described as follows, taking as input a partial assignment $A = ((e_1, A_1),\dots,(e_j, A_j))$ of the edges of $G$: \begin{enumerate} \item Using the partial assignment $A$, apply the reduction function to $(G, F)$ to get $(G', F')$. \item Select a $U$-component in $G'$ that is neither trivial nor a cycle of length $4$. Select a circuit $\mathcal{C}$ in that component that fits the criteria of a ``good'' circuit~\cite{xiao2016degree3}, then select an edge $e_i' \in \mathcal{C}$. \item Return an edge in $G$ corresponding to $e_i'$ (if there is more than one, choosing one arbitrarily). \end{enumerate} Step $2$ applies step $5$ of Xiao and Nagamochi's algorithm, by selecting the next circuit to branch on and picking an edge in that circuit. If the reduced version of the graph results in $h$ picking an edge corresponding to multiple edges in the original graph, step $3$ ensures that we only return one of these edges to the backtracking algorithm, as step $2$(b) of the reduction function will ensure that every edge in the original graph corresponding to an edge in the reduced graph will be consistently forced or removed. The rest of the circuit will be forced or removed by step $2$(c) of the reduction function. We can now apply the backtracking algorithm (Theorem \ref{thm:backtrack}) to $P$ and $h$ to find a Hamiltonian cycle. We will later choose its failure probability $\delta$ to be sufficiently small that we can assume that it always succeeds, i.e.\ finds a Hamiltonian cycle if one exists, and otherwise reports that one does not exist. At the end of the algorithm, we will receive either the information that no assignment was found, or a partial assignment. By applying the reduction steps and the partial assignments, we can reconstruct the graph at the moment our quantum algorithm terminated, which will give a graph such that every $U$-component is either trivial or a 4-cycle. We then construct and return the full Hamiltonian cycle in polynomial time using step $6$ of Xiao and Nagamochi's algorithm~\cite{xiao2016degree3}. To solve the TSP, we need to find the shortest Hamiltonian cycle. This can be done as follows. First, we run the backtracking algorithm. If the backtracking algorithm does not return a Hamiltonian cycle then we report that no Hamiltonian cycle was found. Otherwise after receiving Hamiltonian cycle $\Gamma$ with length $L_\Gamma$, we create variables $\ell \leftarrow 0$ \& $u \leftarrow L_\Gamma$ and modify $P$ to return false if \[\sum_{e_{i,j}\in F}c_{ij} \geq \lceil(\ell + u)/2\rceil.\] If no cycle is found after running the algorithm again, we set $\ell \leftarrow \lceil(\ell + u)/2\rceil$ and repeat. Otherwise, upon receiving Hamiltonian cycle $\Gamma'$ with total cost $L_{\Gamma'}$, we set $u \leftarrow L_{\Gamma'}$ and repeat. We continue repeating until $\ell$ and $u$ converge, at which point we return the Hamiltonian cycle found by the algorithm. In the worst case scenario, where the shortest cycle is found during the first run of the backtracking algorithm, this algorithm matches a binary search. So the number of repetitions of the backtracking algorithm required to return the shortest Hamiltonian cycle is at most $O(\log L')$, where \begin{align} L' = \sum_{i = 1}^{n}\max \{c_{ij} : j \in \{1,\dots,n\} \} \label{eqn:l} \end{align} is an upper bound on the total cost of any Hamiltonian cycle in the graph. \subsection{The reduction function} \label{sec:reduction} Finally, we describe the reduction function, which takes the original graph $G$ and partial assignment $A$, and applies the partial assignment to this graph in order to reduce it to a smaller graph $G'$ with forced edges $F'$. This reduction might mean that forcing or removing a single edge in $G'$ would be akin to forcing several edges in $G$. For example, let $X$ be a $3$-reducible subgraph of at most $8$ vertices with $\text{cut}(X) = \{ax_1, bx_2, cx_3\}$ for vertices $x_1, x_2, x_3 \in V(X)$. The $3/4$-cut reduction reduces $X$ to a single vertex $x \in G'$ with edges $ax, bx, cx$. If the edges $ax$ and $bx$ are forced, this is equivalent to forcing every edge in $\Pi \cup \{ax_1, bx_2\}$, where $\Pi$ is the shortest path that starts at $x_1$, visits every vertex in $X$ exactly once, and ends at $x_2$. As we need to solve the problem in terms of the overall graph $G$ and not the reduced graph $G'$, our assigned variables need to correspond to edges in $G$. To do this, our heuristic includes a step where if the edge selected in $G'$ corresponds to multiple edges in $G$, we simply select one of the corresponding edges in $G$ to return. Likewise, if the next edge in our partial assignment is one of several edges in $G$ corresponding to a single edge in $G'$, we apply the same assignment to all of the other corresponding edges in $G$. The reduction function works as follows, using reductions and procedures from Xiao and Nagamochi \cite{xiao2016degree3}: \begin{enumerate} \item Create a copy of the graph $G' \leftarrow G$ and set of forced edges $F' \leftarrow \emptyset$. \item For each $i=1,\dots,j$ \begin{enumerate} \item Repeat until none of the cases apply: \begin{enumerate} \item If $G'$ contains a reducible circuit $\mathcal{C}$, then apply the circuit procedure to $\mathcal{C}$. \item If $G'$ contains parallel edges, then apply the reduction rule of eliminating parallel edges. \item If $G'$ contains a subgraph $X$ of at most $8$ vertices such that $X$ is $3/4$-cut reducible, then apply the $3$/$4$-cut reduction to $X$. \end{enumerate} \item Apply assignment $(e_i, a_i)$ to $(G', F')$ by adding edge $e_i$ to $F'$ if $a_i = \text{force}$, or deleting edge $e_i$ from $G'$ if $A_i = \text{remove}$. If edge $e_i$ is part of a set of edges corresponding to a single edge in $G'$, apply the same assignment to all edges in $G$ which correspond to the same edge in $G'$ by adding them all to $F'$ if $a_i = \text{force}$, or deleting them all from $G'$ if $a_i = \text{remove}$. \item Apply the circuit procedure to the rest of the circuit containing edge $e_i$. \end{enumerate} \item Run step 2(a) again. \item Return $(G', F')$. \end{enumerate} Step $2$(a)i recreates step $2$ from Xiao and Nagamochi's original algorithm by applying the circuit procedure where possible. Step $2$(a)ii recreates step $3$ of the original algorithm by applying the reduction of parallel edges. Step $2$(a)iii recreates step $4$ of the original algorithm via the $3/4$-cut reduction. Step $2$(b) applies the next step of the branching that has been performed so far, to ensure that the order in which the edges are forced is the same as in the classical algorithm. Step $2$(c) corresponds to branching on a circuit at edge $e_i$. Finally, step $3$ checks whether or not the graph can be reduced further by running the reduction steps again. One might ask if an edge could be part of two circuits, in which case our algorithm would fail as it would not be able to reduce the circuit. However, as discussed in Sec.\ \ref{sec:xndeg3}, any edge can only be part of at most one circuit. \subsection{Analysis} Steps $2$(a)i-iii of the reduction algorithm can be completed in polynomial time~\cite{xiao2016degree3}. All of these steps also reduce the size of a problem by at least a constant amount, so only a polynomial number of these steps are needed. Step 2(b) is constant time and step 2(c) can be run in polynomial time as the circuit is now reducible. All steps are only repeated $O(m)$ times, so the whole reduction algorithm runs in polynomial time in terms of $m$. Steps $2$ and $3$ of the $h$ subroutine run in polynomial time as searching for a good circuit in a component can be done in polynomial time \cite{xiao2016degree3}. Likewise, steps 2 and 3 of the $P$ function involve looking for certain structures in the graph that can be found in polynomial time. As a result, the runtimes for the $P$ and $h$ functions are both polynomial in $m$. By Theorem \ref{thm:backtrack}, the number of calls to $P$ and $h$ we make in order to find a Hamiltonian cycle with failure probability $\delta$ is $O(\sqrt{T}\poly(m)\log (1/\delta))$, where $T$ is the size of the backtracking tree, which in our case is equal to the number of times the Xiao-Nagamochi algorithm branches on a circuit. $P$ and $h$ both run in polynomial time and as a result can be included in the $\poly(m)$ term of the runtime. Because $m \leq 3n/2$, the polynomial term in this bound is also polynomial in terms of $n$. The behaviour of the $P$ and $h$ subroutines is designed to reproduce the behaviour of Xiao and Nagamochi's TSP3 algorithm~\cite{xiao2016degree3}. It is shown in~\cite[Theorem 1]{xiao2016degree3} that this algorithm is correct, runs in time $O^*(2^{3n/10})$ and uses polynomial space. As the runtime of the TSP3 algorithm is an upper bound on the number of branching steps it makes, the algorithm branches on a circuit $O^*(2^{3n/10})$ times. Therefore, the quantum backtracking algorithm finds a Hamiltonian cycle, if one exists, with failure probability at most $\delta$ in time $O^*(2^{3n/20} \log(1/\delta)) \approx O^*(1.110^n \log(1/\delta))$ and polynomial space. Finding the shortest Hamiltonian cycle requires repeating the algorithm $O(\log L')$ times, where $L'$ is given in Equation \ref{eqn:l}. By using a union bound over all the runs of the algorithm, to ensure that all runs succeed with high probability it is sufficient for the failure probability $\delta$ of each run to be at most $O(1/(\log L'))$. From this we obtain the following result, proving the first part of Theorem \ref{thm:deg34}: \begin{theorem} There is a bounded-error quantum algorithm which solves the TSP on degree-3 graphs in time $O^*(1.110^n \log L \log \log L)$, where $L$ is the maximum edge cost. The algorithm uses $\poly(n)$ space. \end{theorem} Note that we have used the bound $L' \le n L$, where the extra factor of $n$ is simply absorbed into the hidden $\poly(n)$ term. \section{Extending to higher-degree graphs \label{sec:higher-bound}} We next consider degree-$k$ graphs for $k \ge 4$. We start with degree-4 graphs by applying the quantum backtracking algorithm to another algorithm by Xiao and Nagamochi~\cite{xiao2016degree4}. We then extend this approach to graphs of higher degree by reducing the problem to degree-4 graphs. \subsection{Degree-4 graphs} Here we will show the following, which is the second part of Theorem \ref{thm:deg34}: \begin{theorem} There is a bounded-error quantum algorithm which solves the TSP for degree-4 graphs in time $O^*(1.301^n\log L \log \log L)$, where $L$ is the maximum edge cost. The algorithm uses $\poly(n)$ space. \end{theorem} As the argument is very similar to the degree-3 case, we only sketch the proof. \begin{proof}[Proof sketch] Xiao and Nagamochi's algorithm for degree-4 graphs works in a similar way to their algorithm for degree-3 graphs: The graph is reduced in polynomial time by looking for specific structures in the graph and then picking an edge in the graph to branch on. We apply the quantum backtracking algorithm as before, finding a Hamiltonian cycle with failure probability $\delta$ in $O^*(1.301^n\log(1/\delta))$ time. We then use binary search to find the shortest Hamiltonian cycle after $O(\log L)$ repetitions of the algorithm, rejecting if the total length of the forced edges is above a given threshold. To achieve overall failure probability $1/3$, the algorithm runs in $O^*(1.301^n\log L\log \log L)$ time. \end{proof} \subsection{Degree-5 and degree-6 graphs} \begin{figure*} \begin{center} \begin{tikzpicture}[scale=0.9] \tikzstyle{vertex}=[draw,shape=circle] \path (0,0) node[vertex](x1){} (1,0) node[vertex](y1){}; \draw[line width=1.5pt] (x1) -- (y1); \draw[] (-1,1) -- node[above] {$a$} (x1); \draw[] (-1,0) -- node[above] {$b$} (x1); \draw[] (-1,-1) -- node[above] {$c$} (x1); \draw[] (y1) -- node[above] {$d$} (2,1); \draw[] (y1) -- node[above] {$e$} (2,0); \draw[dashed] (y1) -- node[above] {$f$} (2,-1); \path (4,0) node[vertex](x1){} (5,0) node[vertex](y1){}; \draw[line width=1.5pt] (x1) -- (y1); \draw[] (3,1) -- node[above] {$a$} (x1); \draw[] (3,0) -- node[above] {$b$} (x1); \draw[] (3,-1) -- node[above] {$d$} (x1); \draw[] (y1) -- node[above] {$c$} (6,1); \draw[] (y1) -- node[above] {$e$} (6,0); \draw[dashed] (y1) -- node[above] {$f$} (6,-1); \path (8,0) node[vertex](x1){} (9,0) node[vertex](y1){}; \draw[line width=1.5pt] (x1) -- (y1); \draw[] (7,1) -- node[above] {$a$} (x1); \draw[] (7,0) -- node[above] {$b$} (x1); \draw[] (7,-1) -- node[above] {$e$} (x1); \draw[] (y1) -- node[above] {$c$} (10,1); \draw[] (y1) -- node[above] {$d$} (10,0); \draw[dashed] (y1) -- node[above] {$f$} (10,-1); \path (12,0) node[vertex](x1){} (13,0) node[vertex](y1){}; \draw[line width=1.5pt] (x1) -- (y1); \draw[] (11,1) -- node[above] {$a$} (x1); \draw[] (11,0) -- node[above] {$b$} (x1); \draw[dashed] (11,-1) -- node[above] {$f$} (x1); \draw[] (y1) -- node[above] {$c$} (14,1); \draw[] (y1) -- node[above] {$d$} (14,0); \draw[] (y1) -- node[above] {$e$} (14,-1); \path (16,0) node[vertex](x1){} (17,0) node[vertex](y1){}; \draw[line width=1.5pt] (x1) -- (y1); \draw[] (15,1) -- node[above] {$a$} (x1); \draw[] (15,0) -- node[above] {$c$} (x1); \draw[] (15,-1) -- node[above] {$d$} (x1); \draw[] (y1) -- node[above] {$b$} (18,1); \draw[] (y1) -- node[above] {$e$} (18,0); \draw[dashed] (y1) -- node[above] {$f$} (18,-1); \path (0,-3) node[vertex](x1){} (1,-3) node[vertex](y1){}; \draw[line width=1.5pt] (x1) -- (y1); \draw[] (-1,-2) -- node[above] {$a$} (x1); \draw[] (-1,-3) -- node[above] {$c$} (x1); \draw[] (-1,-4) -- node[above] {$e$} (x1); \draw[] (y1) -- node[above] {$b$} (2,-2); \draw[] (y1) -- node[above] {$d$} (2,-3); \draw[dashed] (y1) -- node[above] {$f$} (2,-4); \path (4,-3) node[vertex](x1){} (5,-3) node[vertex](y1){}; \draw[line width=1.5pt] (x1) -- (y1); \draw[] (3,-2) -- node[above] {$a$} (x1); \draw[] (3,-3) -- node[above] {$c$} (x1); \draw[dashed] (3,-4) -- node[above] {$f$} (x1); \draw[] (y1) -- node[above] {$b$} (6,-2); \draw[] (y1) -- node[above] {$d$} (6,-3); \draw[] (y1) -- node[above] {$e$} (6,-4); \path (8,-3) node[vertex](x1){} (9,-3) node[vertex](y1){}; \draw[line width=1.5pt] (x1) -- (y1); \draw[] (7,-2) -- node[above] {$a$} (x1); \draw[] (7,-3) -- node[above] {$d$} (x1); \draw[] (7,-4) -- node[above] {$e$} (x1); \draw[] (y1) -- node[above] {$b$} (10,-2); \draw[] (y1) -- node[above] {$c$} (10,-3); \draw[dashed] (y1) -- node[above] {$f$} (10,-4); \path (12,-3) node[vertex](x1){} (13,-3) node[vertex](y1){}; \draw[line width=1.5pt] (x1) -- (y1); \draw[] (11,-2) -- node[above] {$a$} (x1); \draw[] (11,-3) -- node[above] {$d$} (x1); \draw[dashed] (11,-4) -- node[above] {$f$} (x1); \draw[] (y1) -- node[above] {$b$} (14,-2); \draw[] (y1) -- node[above] {$c$} (14,-3); \draw[] (y1) -- node[above] {$e$} (14,-4); \path (16,-3) node[vertex](x1){} (17,-3) node[vertex](y1){}; \draw[line width=1.5pt] (x1) -- (y1); \draw[] (15,-2) -- node[above] {$a$} (x1); \draw[] (15,-3) -- node[above] {$e$} (x1); \draw[dashed] (15,-4) -- node[above] {$f$} (x1); \draw[] (y1) -- node[above] {$b$} (18,-2); \draw[] (y1) -- node[above] {$c$} (18,-3); \draw[] (y1) -- node[above] {$d$} (18,-4); \end{tikzpicture} \end{center} \caption{Breaking a vertex of degree 5 or 6 into two lower-degree vertices. In the degree-5 case, dashed edge $f$ is not present and the vertex is split into one vertex of degree $3$ and another of degree $4$ connected by a forced edge in bold. In the degree-6 case, dashed edge $f$ is present and the vertex is split into two vertices of degree $4$ connected by a forced edge. If edges $a$ and $b$ are included in the original graph's shortest Hamiltonian cycle, then they must not be adjacent to one another in the final graph. This holds in six of the ten ways of splitting the vertex. \label{fig:degree-5}} \end{figure*} To deal with degree-5 and degree-6 graphs, we reduce them to the degree-4 case. The complexity of the two cases turns out to be the same; however, for clarity we consider each case separately. \begin{theorem} \label{thm:deg5} There is a bounded-error quantum algorithm which solves the TSP for degree-5 graphs in time $O^*(1.680^n\log L\log \log L)$. \end{theorem} \begin{proof} Our algorithm works by splitting each vertex of degree 5 into one vertex of degree $3$ and another of degree $4$ connected by a forced edge. The forced edges can be included in our quantum algorithm by modifying step 1 of the reduction function so that $F'$ contains all the forced edges created by splitting a vertex of degree-$5$ into two vertices connected by a forced edge. Once all degree-$5$ vertices are split this way, we run the degree-$4$ algorithm. It is intuitive to think that this splitting of the vertices could increase the runtime complexity of the degree-$4$ algorithm, due to $n$ being larger. However, the addition of a forced edge incident to every new vertex means that we do not need to create more branches in the backtracking tree in order to include the new vertex in the Hamiltonian cycle. As a result, the time complexity of the degree-$4$ algorithm will remain the same. There are $10$ unique ways of splitting a vertex of degree $5$ into one vertex of degree $3$ and another of degree $4$ connected by a forced edge. These ten ways of splitting the vertex are shown in Fig.\ \ref{fig:degree-5} for a vertex incident to edges $a,b,c,d,e$. Without loss of generality, let $a$ and $b$ be the two edges which are part of the Hamiltonian cycle. In order for $a$ and $b$ to also be part of the Hamiltonian cycle in the degree-4 graph produced, $a$ and $b$ cannot be adjacent to one another. Looking at Fig.\ \ref{fig:degree-5}, the split is successful in six of the ten ways of splitting the vertex. If there are $f$ vertices of degree $5$, then there are $10^f$ possible ways of splitting all such vertices, of which $6^f$ will give the correct solution to the TSP. We can apply D\"urr and H\o yer's quantum algorithm for finding the minimum~\cite{durr1996} to find a splitting that leads to a shortest Hamiltonian cycle, or reporting if no cycle exists, after $O((10/6)^{f/2})$ repeated calls to the degree-4 algorithm. To ensure that the failure probability of the whole algorithm is at most $1/3$, we need to reduce the failure probability of the degree-4 algorithm to $O((10/6)^{-f/2})$, which can be achieved by repeating it $O(f)$ times and returning the minimum-length tour found. The overall runtime is thus \begin{align*} &O^*\left(\left(\frac{10}{6}\right)^{\frac{f}{2}}1.301^n\log L \log \log L\right)\\ = &O^*(1.680^n\log L \log \log L). \end{align*} \end{proof} It is also possible to split a vertex of degree $5$ into three vertices of degree $3$ connected by two forced edges. There are $15$ ways of performing this splitting, of which $6$ will succeed. Applying the degree-$3$ algorithm to these reduced graphs finds a runtime of \begin{align*} &O^*\left(\left(\frac{15}{6}\right)^{\frac{f}{2}}1.110^n\log L \log \log L\right)\\ = &O^*(1.754^n\log L \log \log L) \end{align*} \noindent which performs worse than Theorem \ref{thm:deg5}. We next turn to degree-6 graphs, for which the argument is very similar. \begin{theorem} There is a quantum algorithm which solves the TSP for degree-$6$ graphs with failure probability $1/3$ in time $O^*(1.680^n\log L \log \log L)$. \end{theorem} \begin{proof} We can extend the idea of Theorem \ref{thm:deg5} to degree-6 graphs by splitting vertices of degree $6$ into two vertices of degree $4$ connected by a forced edge. Because the degree of both new vertices is $4$, there are $\binom{6}{3}/2 = 10$ unique ways of partitioning the edges, of which 4 will fail. We show this in Fig.\ \ref{fig:degree-5} by including the dashed edge $f$ as the sixth edge. The overall runtime is the same as the degree-$5$ case. \end{proof} \subsection{Degree-7 graphs} We finally considered extending the algorithm to degree-7 graphs by partitioning degree-7 vertices into one of degree $5$ and another of degree $4$, connected by a forced edge. We can split a vertex of degree $7$ into a vertex of degree $4$ and another of degree $5$ in $\binom{7}{4} = 35$ ways, of which $\binom{7-2}{4-2} + \binom{7-2}{3-2} = 15$ will not preserve the shortest Hamiltonian cycle. We then use the same process as for the degree-5 and degree-6 case, halting after $O((35/20)^{k/2})$ iterations and returning either the shortest Hamiltonian cycle found or reporting if no Hamiltonian cycle exists. From this, our overall runtime is \begin{align*} &O^*\left(\left(\frac{35}{20}\right)^{k/2}1.680^n\log L \log \log L\right)\\ =&O^*(2.222^n\log L \log \log L). \end{align*} This is the point where we no longer see a quantum speedup over the fastest classical algorithms using this approach, as classical algorithms such as those of Held-Karp~\cite{held1962} and Bj{\"o}rklund et al.~\cite{bjorklund2008} run in $O^*(2^n)$ and $O^*(1.984^n)$ time, respectively. \section*{Note added} Following the completion of this work, Andris Ambainis informed us of two new related results in this area. First, a quantum backtracking algorithm whose runtime depends only on the number of tree vertices visited by the classical backtracking algorithm, rather than the whole tree \cite{ambainis2016a}. This alleviates one, though not all, of the limitations of the backtracking algorithm discussed in Section II. Second, a new quantum algorithm for the general TSP based on accelerating the Held-Karp dynamic programming algorithm \cite{ambainis2016b}. The algorithm's runtime is somewhat worse than ours for graphs of degree at most 6, and it uses exponential space; but it works for any graph, rather than the special case of bounded-degree graphs considered here. \begin{acknowledgments} DJM was supported by the Bristol Quantum Engineering Centre for Doctoral Training, EPSRC grant EP/L015730/1. AM was supported by EPSRC Early Career Fellowship EP/L021005/1. We would like to thank Andris Ambainis for bringing refs.~\cite{ambainis2016a, ambainis2016b} to our attention. \end{acknowledgments}
{ "attr-fineweb-edu": 2.919922, "attr-cc_en_topic": 0, "domain": "arxiv" }
BkiUbNfxK02iP1lCWnfk
\section{Introduction} \begin{table}[t] \resizebox{\linewidth}{!}{% \begin{tabular}{p{10cm}} \toprule \textbf{Article Sentences:} \\ 1. {\color[HTML]{00659a}The town is home to the prestigious Leander Club, which has trained more than 100 Olympic medal-winning rowers}. \\ \textit{- 2 sentences are abbreviated here.} \\ 4. {\color[HTML]{9a0018}The Royal Mail has painted more than 50 postboxes gold following Team GB's gold medal haul at London 2012}. \\ 5. Originally it said it was only painting them in winners home towns, or towns with which they are closely associated. \\ 6. Town mayor Elizabeth Hodgkin said: `` {\color[HTML]{00659a}We are the home of rowing} ... I feel very excited about it." \\ \textit{- 5 sentences are abbreviated here.} \\ 12. The {\color[HTML]{006601}Henley-on-Thames postbox} was painted on Friday. \\ \textit{- one sentence is abbreviated here.} \\ \midrule \textbf{Reference Summary:} {\color[HTML]{9a0018}The Royal Mail has painted a postbox gold} in the {\color[HTML]{006601}Oxford-shire town of Henley-on-Thames} - {\color[HTML]{00659a}in recognition of its medal} {\color[HTML]{00659a}winning rowing club}. \\ \midrule \textbf{BART's Summary:} {\color[HTML]{006601}A postbox in Henley-on-Thames} has been {\color[HTML]{9a0018}painted gold as part of the Royal Mail's `` Olympic gold '' campaign}. \\ \midrule \textbf{Our HierGNN's Summary:} {\color[HTML]{9a0018}A Royal Mail postbox} in {\color[HTML]{006601}Henley-on-Thames} has been {\color[HTML]{9a0018}painted gold} in {\color[HTML]{00659a}honour of the town 's Olympic rowing success}. \\ \bottomrule \end{tabular}% } \caption{Example of an article from XSum with summaries given by human-written reference, BART \cite{lewis2020bart} and our HierGNN equipped with BART. BART's summary fails to capture all information pieces as the reference (as highlighted in various colors), while HierGNN has advantages in combining the information from multiple locations in the source side.} \label{tab:summarization_illustration} \end{table} Sequential neural network architectures in their various forms have become the mainstay in abstractive summarization \cite{see-etal-2017-getTothePoint,lewis2020bart}. However, the quality of machine-produced summaries still lags far behind the quality of human summaries \cite{huang2020whatwehaveachievedinSummarization,xie-etal-2021-factual-consistency,cao-etal-2022-hallucinated-factuality,lebanoff2019scoringSentenceSingletons}. Due to their sequential nature, a challenge with neural summarizers is to capture hierarchical and inter-sentential dependencies in the summmarized document. Progress in cognitive science suggests that humans construct and reason over a latent hierarchical structure of a document when reading the text in it \cite{graesser1994constructingInferenceDuringNarrativeTextComprehension,goldman1999narrative}. Such \textit{reasoning behavior} includes uncovering the salient contents and effectively aggregating all related clues spreading across the documents to understand the document. \citet{lebanoff2019scoringSentenceSingletons} found that human editors usually prefer writing a summary by fusing information from multiple article sentences and reorganizing the information in summaries (sentence fusion), rather than dropping non-essential elements in an original sentence such as prepositional phrases and adjectives (sentence compression). Different summarization benchmarks show there are between 60-85\% summary sentences that are generated by sentence fusing. These recent findings support our motivation to make use of hierarchical document structure when summarizing a document. We present a document hierarchy-aware graph neural network (HierGNN), a neural encoder with a reasoning functionality that can be effectively incorporated into any sequence-to-sequence (seq2seq) neural summarizer. Our HierGNN first learns a latent hierarchical graph via a sparse variant of the matrix-tree computation \cite{koo2007structuredpredictionMatrixTreeTheorm,liu-etal-2019-SummarizationasTreeInduction}. It then formulates sentence-level reasoning as a graph propagation problem via a novel message passing mechanism. During decoding, a graph-selection attention mechanism serves as a source sentence selector, hierarchically indicating the attention module which tokens in the input sentences to focus on. Our experiments with HierGNN, incorporated into both pointer-generator networks \cite{see-etal-2017-getTothePoint} and BART \cite{lewis2020bart}, confirm that HierGNN substantially improves both the non-pretrained and pretrained seq2seq baselines in producing high-quality summaries. Specifically, our best HierGNN-BART achieves an average improvement of 0.55 and 0.75 points in ROUGE-1/2/L on CNN/DM and XSum. Compared with a plain seq2seq model, HierGNN encourages the summarizers to favor sentence fusion more than sentence compression when generating summaries. Modeling the hierarchical document structure via our sparse matrix-tree computation also enables HierGNN to treat long sequences more effectively. In addition, our sparse adaptive variant of the matrix-tree computation demonstrates a more powerful expressive ability over the original one \cite{koo2007structuredpredictionMatrixTreeTheorm,liu-etal-2019-SummarizationasTreeInduction}. We summarize our contributions as follows, \begin{itemizesquish}{-0.3em}{0.5em} \item We present a novel encoder architecture for improving seq2seq summarizers. This architecture captures the hierarchical document structure via an adaptive sparse matrix-tree computation, with a new propagation rule for achieving inter-sentence reasoning. \item We design a graph-selection attention mechanism to fully leverage the learned structural information during decoding in advantages over only using it in encoding. \item Results on CNN/DM and XSum demonstrates the effectiveness of HierGNN in improving the quality of summaries for both non-pretrained and pretrained baselines. An in-depth analysis confirms our module improves the integration of information from multiple sites in the input article and that it is more effective in processing long sequence inputs. \end{itemizesquish} \section{Related Work} \textbf{Neural Abstractive Summarization} \newcite{rush-etal-2015-neuralSummarization} first proposed to use a sequence-to-sequence model with an attention mechanism to perform sentence compression. \newcite{mendes-etal-2019-jointly} demonstrated the advantages and limitations of neural methods based on sentence compression. The pointer-generator networks (PGN; \citealt{see-etal-2017-getTothePoint}) enhances the attention model with a copying functionality. PGN has also been further extended to create summarization systems by incorporating the topic information \cite{topicAwarePGN}, document structural information \cite{song2018structureInfusedCopy}, semantic information \cite{hardy-vlachos-2018-AMRguidedSummarization}, and was improved by replacing the plain LSTM module with the more advanced Transformer model to overcome the difficulty in modeling long sequence input \cite{pilault-etal-2020-extractiveAbsTransformer,wang2021exploringExplainableSelectionControlAbsSummarization,fonseca2022}. For the pretrained models, BERTSum \cite{liu2019BERTSum} adopted the BERT encoder for the summarizer, with a randomly initialized decoder. \newcite{lewis2020bart} presented BART which pre-trains both the underlying encoder and decoder. \newcite{dou-etal-2021-gsum} investigated ``guidance signals'' (e.g., keywords, salient sentences) for further boosting the performances. \noindent \textbf{Graph Neural Approach for Summarization} Graph neural networks have demonstrated their ability to capture rich dependencies in documents to be summarized. \newcite{wang2020heterogeneousGraphSum} use a ``heterogeneous graph'' with sentence nodes and co-occurring word nodes to capture the sentence dependencies. \newcite{jin2020semsum} use two separate encoders to encode the input sequence with a parsed dependency graph. \newcite{cui-etal-2020-enhancing-extsum-with-topic-gnn} use a bipartite graph with a topic model to better capture the inter-sentence relationships. \newcite{kwon-etal-2021-considering-tree-structure-in-sent-extsummarization} capture both intra- and inter-sentence relationships via a nested tree structure. \newcite{zhu2021enhancingFactualbyKG} use entity-relation information from the knowledge graph to increase the factual consistency in summaries. Our approach is related to the structural attention model \cite{balachandran-etal-2021-structsum,liu-etal-2019-SummarizationasTreeInduction}, but differs in two major ways: (i) we introduce an adaptive sparse matrix-tree construction to learn a latent hierarchical graph and a novel propagation rule; (ii) we investigate to use the structure information both with the encoder and the decoder for abstractive summarization, and not just the encoder. These shows to be more effective for unsupervised learning of the latent hierarchical structure while can defeat the approach that leverages external graph constructor \cite{balachandran-etal-2021-structsum}. \iffalse \begin{equation}\label{eqt:maximum_likelihood} \mathcal{L}_{sum} = -\frac{1}{|C|}\sum_{(X,Y)\in C} log P(Y|X;\Theta). \end{equation} where $\Theta$ includes all trainable parameters. \fi \section{Hierarchy-aware Graph Neural Encoder}\label{sec:graph_construct} HierGNN learns the document structure in an end-to-end fashion without any direct structure supervision, and does not need an external parser to construct the structure, unlike previous work \citep{balachandran-etal-2021-structsum,huang-etal-2020-knowledgegraphaugmentedsummarization,wang2020heterogeneousGraphSum,cardenas2022trade}. In addition, it empirically improves over supervised graph construction, which has been a challenge \cite{balachandran-etal-2021-structsum}. Sequential summarizers encode an $N$-token article, $X = (x_1, \cdots, x_N)$ as $d$-dimensional latent vectors using an encoding function $\mathbf{h}_{enc}(x_t) \in \mathbb{R}^{d}$ and then decodes them into the target summary $Y$. (We denote by $\mathbf{h}_{enc}(X)$ the sequence of $x_t$ encodings for $t \le N$.) Our model includes four modules in addition to this architecture: 1) a sparse matrix-tree computation for inferring the document hierarchical structure, ii) a novel message-passing layer to identify inter-sentence dependencies, iii) a reasoning fusion layer aggregating the outputs of the message-passing module; and vi) a graph-selection attention module to leverage the encoded structural information. \subsection{Learning the Latent Hierarchical Structure} We first introduce our latent structure learning algorithm that makes use of a sparse variant of the matrix-tree theorem \cite{1986GraphTB,koo2007structuredpredictionMatrixTreeTheorm}. \noindent \textbf{Latent Document Hierarchical Graph.} We represent the document as a complete weighted graph, with each node representing a sentence. The edge weights are defined as the marginal probability of a directional dependency between two sentences. In addition, each sentence node has an extra probability value, the ``root probability'' which indicates the \textit{hierarchical role} of the sentence, such as the roles of \emph{the lead}, \emph{most important facts}, or \emph{other information} defined based on the inverted pyramid model for news articles \cite{po2003news,ytreberg2001moving}. Intuitively, a sentence with a high root probability (high hierarchical position) conveys more general information; namely, it is a \textit{connector}, while a sentence with a lower root probability (\textit{information node}) carries details supporting its higher connectors. The underlying graph structure is latent and not fixed, summed out in our overall probability model using the matrix-tree theorem. \begin{figure*}[] \centering \includegraphics[width=0.75\textwidth]{figures/arch.pdf} \caption{Architecture for the sequence-to-sequence model with HierGNN reasoning encoder.} \label{fig:hiergnn_layer} \end{figure*} \noindent \textbf{Sparse Matrix-Tree Computation.} For an article with $M$ sentences, we start from the sentence embeddings as the node initialization $H^{(0)} = [\mathbf{s}_1, ..., \mathbf{s}_i, ..., \mathbf{s}_M]$. We then use two independent non-linear transformations to obtain a pair of \textit{parent} and \textit{child} representation for each sentence, \begin{align} \mathbf{s}_i^{(p)} &= \sigma(W_p\mathbf{{s}}_i+b_p), \\ \mathbf{s}_i^{(c)} &= \sigma(W_c\mathbf{{s}}_i+b_c), \end{align} \noindent where $W_p, W_c, b_p, b_c$ are parameters, $\sigma$ is the ReLU activation function \cite{dahl2013ReLU}. The standard use of the matrix-tree theorem \cite{1986GraphTB} computation (MTC; \citealt{smith2007probabilistic,koo2007structuredpredictionMatrixTreeTheorm,mcdonald2007complexity}) includes the exponential function to calculate a matrix $F\in \mathbb{R}^{M\times M}$ with positive values with each element $f_{ij}$ representing the weight of the directional edge from a node $s_i$ to $s_j$; and a positive vector of root scores $\mathbf{f}^{(root)} \in \mathbb{R}^M$. However, having a dense matrix degrades our graph reasoning module by including irrelevant information from redundant $M$ sentence nodes. Inspired by the work about sparse self-attention \cite{zhang2021sparseReluAttention,correia2019adaptivelySparseTransformer}, we introduce an adaptive solution to inject sparsity into MTC. We replace the exponential scoring function with the ReLU function ($\mathrm{ReLU}(x \in \mathbb{R}) = \max \{ x, 0 \}$ and similarly coordinate-wise when $x$ is a vector) and calculate the root $f_i^{(root)}$ and edge scores $f_{ij}$ by a fully-connected layer and a bi-linear attention layer, respectively, \begin{align} f_i^{(root)} &= \textsc{ReLU}(W_r \mathbf{s}_i^{(p)} + b_r) + \varepsilon, \\ f_{ij} &= \textsc{ReLU}({{\mathbf{s}_i^{(p)}}^\top W_{bi}{\mathbf{s}_j^{(c)}} }) + \varepsilon, \end{align} \noindent where $W_{bi}, W_r, b_r$ are learnable. (We use $\varepsilon=10^{-6}$ to avoid matrix non-invertibility issues.) Compared to the exponential function, ReLU relaxes $F$ and $\mathbf{f}^{(root)}$ to be non-negative, thus being capable of assigning zero probability and pruning dependency edges and roots. We finally plug in these quantities to the standard MTC \cite{1986GraphTB} and marginalize the edge and root probabilities as the adjacency matrix $A(i,j)=P(z_{ij}=1)$ and root probability $p^{r}_{i}$ representing the hierarchical role (i.e., the likelihood to be a connector) of each sentence. \subsection{Reasoning by Hierarchy-aware Message Passing} We present a novel message-passing mechanism over the learned hierarchical graph. This mechanism realizes the inter-sentence reasoning where connectors can aggregate information from their related information nodes while propagating the information to others. For the $i$-th sentence node, the edge marginal controls the aggregation from its $K$ information nodes; and the root probability controls the neighbouring information is combined as $i$-th node's update $\mathbf{u}^{(l)}$ in the $l$-th reasoning layer, \begin{equation} \mathbf{u}^{(l)}_i = (1-p^{r}_{i}) \mathcal{F}_r(\mathbf{s}_i^{(l)}) + (p^{r}_{i}) \sum_{k=1}^K A_{ik} \mathcal{F}_n(\mathbf{s}_k^{(l)}), \end{equation} where $\mathcal{F}_r$ and $\mathcal{F}_n$ are parametric functions. Intuitively, if a sentence is a \textit{connector}, it should have strong connectivity with the related \textit{information nodes}, and aggregate more details. Each information node learns to either keep the uniqueness of its information or fuse the information from the connectors. To filter out the unnecessary information, we adopt a gated mechanism as the information gatekeeper in the node update, \begin{align} \mathbf{g}_i^{(l)} &= \sigma (\mathcal{F}_g([\mathbf{u}_i^{(l)}; \mathbf{h}_i^{(l)}])), \\ \mathbf{h}_i^{(l+1)} &= \text{LN}(\mathbf{g}_i^{(l)} \odot \mathcal{\phi}(\mathbf{u}_i^{(l)}) + (\mathbf{1}-\mathbf{g}_i^{(l)}) \odot \mathbf{h}_i^{(l)}), \end{align} where $\mathcal{F}_g$ is a parametric function and $\odot$ is the element-wise dot product. We use layer normalization (\textsc{LN}) to stabilize the output for the update function. The function $\sigma$ is the sigmoid function, and $\phi$ can be any non-linear function. \subsection{Reasoning Fusion Layer} We construct \emph{reasoning chains} that consist of $L$ hops by stacking $L$ HierGNN blocks together. To handle cases where fewer than $L$ hops are needed, we add a fusion layer to aggregate the output from each reasoning hop to produce the final output of HierGNN. A residual connection is also introduced to pass the node initialization directly to the output, \begin{equation} \mathbf{h}^{(G)}_i = (W_g[\mathbf{h}^{(1)}_i, ..., \mathbf{h}^{(L)}_i] + b_g) + \mathbf{h}^{(0)}_i, \end{equation} \noindent where $W_g, b_g$ are learnabale parameters. We use two approaches for layer use: (a) \textit{Layer-Shared Reasoning (LSR)}: we construct a shared reasoning graph first, followed by $L$ message passing layers for reasoning; (b) \textit{Layer-Independent Reasoning (LIR)}: we learn the layer-wise latent hierarchical graphs independently, where each message passing layer uses its own graph. \subsection{Graph-selection Attention Mechanism} In addition to token-level decoding attention, we propose a \textit{graph-selection attention mechanism} (GSA) to inform the decoder with learned hierarchical information, while realizing the sentence-level content selection. In each decoding step $t$, our decoder first obtains a graph context vector, $\mathbf{c}_G^t$, which entails the global information of the latent hierarchical graph. We first compute the graph-level attention distribution $\mathbf{a}_G^t$ by, \begin{align} e^t_{v_i} &= \textsc{Attn}^{(G)}(\mathbf{h}^{(L)},\mathbf{z}_t), \\ \mathbf{a}_G^t &= \textsc{Softmax}(\mathbf{e}^t), \end{align} where $\textsc{Attn}^{(G)}$ is a graph attention function. The vectors $\mathbf{h}_i^{(L)} \in \mathbb{R}^d, \mathbf{z}_t \in \mathbb{R}^d$ are the $L$-th layer node embeddings for sentence $i$ and decoding state at time $t$, respectively. The graph context vector $\mathbf{c}_G^t \in \mathbb{R}^d$ is finally obtained by summing all $\mathbf{h}_i^{(L)}$ weighted by $\mathbf{a}_G^t$. The value of $\mathbf{c}_G^t$ is used as an additional input for computing token-level attention, \begin{align} e_{i}^t &= \textsc{Attn}^{(T)}(\mathbf{h}_{enc}(X), \mathbf{z}_t,\mathbf{c}_G^t), \\ \mathbf{a}_T^t &= \textsc{Softmax}(\mathbf{e}^t), \end{align} where $\textsc{Attn}^{(T)}$ is a token-level attention function \cite{luong-etal-2015-effective,vaswani2017attention}. Again, the token-attentional context vector $\mathbf{c}_{f}^t$ is computed by summing the encoder outputs weighted by $\mathbf{a}_T^t$. The final context vector $\mathbf{c}_{f}^t$ is fused from the graph $\mathbf{c}_G^t$ and token context vectors $\mathbf{c}_T^t$ with a parametric function $g_{f}$, $\mathbf{c}_{f}^t = g_{f}(\mathbf{c}_G^t, \mathbf{c}_T^t)$. \section{Experimental Setting} \noindent\textbf{Benchmarks.} We evaluate our model on two common document summarization benchmarks. The first is the CNN/Daily Mail dataset \cite{hermann2015CNNDMdataset} in the news domain, with an average input of 45.7 sentences and 766.1 words, and a reference with an average length of 3.59 sentences and 58.2 words. We use the non-anonymized version of \newcite{see-etal-2017-getTothePoint}, which has 287,084/13,367/11,490 instances for training, validation and testing. The second dataset we use is XSum \cite{narayan2018donXSum}, a more abstractive benchmark consisting of one-sentence human-written summaries for BBC news. The average lengths for input and reference are 23.26 sentences with 430.2 words and 1 sentence with 23.3 words, respectively. We follow the standard split of \newcite{narayan2018donXSum} for training, validation and testing (203,028/11,273/11,332). \noindent \textbf{Implementations.} We experiment with the non-pretrained PGN of \newcite{see-etal-2017-getTothePoint} and the pretrained BART model \cite{lewis2020bart}. The implementation details are in Appendix~\ref{sec:model_implementations}. \begin{table}[t] \centering \scalebox{0.75}{ \begin{tabular}{lcccccc} \toprule \multicolumn{1}{l}{\textbf{Non-pretrained}} & \textbf{R-1} & \textbf{R-2} & \textbf{R-L} & \textbf{BS} \\ \toprule LEAD-3 & 40.34 & 17.70 & 36.57 & - \\ PGN & 39.53 & 17.28 & 36.38 & - \\ StructSum ES & 39.63 & 16.98 & 36.72 & - \\ StructSum LS & 39.52 & 16.94 & 36.71 & - \\ StructSum (LS + ES) & 39.62 & 17.00 & \textbf{36.95} & 21.70 \\ \midrule PGN - Ours & 39.07 & 16.97 & 35.87 & 23.74 \\ HierGNN-PGN (LSR) & \textbf{39.87} & \textbf{17.77} & 36.85 & \textbf{25.64}\\ HierGNN-PGN (LIR) & 39.34 & 17.39 & 36.44 & 25.26 \\ \toprule \multicolumn{1}{l}{\textbf{Pretrained}} & \textbf{R-1} & \textbf{R-2} & \textbf{R-L} & \textbf{BS} \\ \toprule BERTSUMABS & 41.72 & 19.39 & 38.76 & 29.05 \\ BERTSUMEXTABS & 42.13 & 19.60 & 39.18 & 28.72 \\ T5-Large & 42.50 & 20.68 & 39.75 & - \\ BART & 44.16 & 21.28 & 40.90 & - \\ Hie-BART & 44.35 & 21.37 & 41.05 & - \\ HAT-BART & 44.48 & 21.31 & 41.52 & - \\\midrule BART - Ours & 44.62 & 21.49 & 41.34 & 33.98 \\ BART + SentTrans. & 44.44 & 21.44 & 41.27 & 33.90 \\ HierGNN-BART (LSR) & 44.93 & 21.7 & 41.71 & 34.43 \\ HierGNN-BART (LIR) & \textbf{45.04} & \textbf{21.82} & \textbf{41.82} & \textbf{34.59} \\ \bottomrule \end{tabular}} \caption{Automatic evaluation results in ROUGE scores, BERTScore (BS) on CNN/D . The top and bottom blocks show the comparison for non-pre-training and pre-training models separately. We use \textbf{bold} to mark the best abstractive model. } \label{tab:cnndm_rouge} \end{table} \begin{table}[t] \centering \scalebox{0.78}{ \begin{tabular}{lcccc} \toprule \textbf{Non-pretrained} & \textbf{R-1} & \textbf{R-2} & \textbf{R-L} & \textbf{BS} \\ \midrule LEAD-3 & 16.30 & 1.60 & 11.95 & - \\ Seq2Seq (LSTM) & 28.42 & 8.77 & 22.48 & - \\ Pointer-Generator & 29.70 & 9.21 & 23.24 & 23.16 \\ PGN + Coverage & 28.10 & 8.02 & 21.72 & - \\ \midrule HierGNN-PGN (LSR) & 30.14 & 10.21 & \textbf{24.32} & 27.24 \\ HierGNN-PGN (LIR) & \textbf{30.24} & \textbf{10.43} & {24.20} & \textbf{27.36} \\ \toprule \textbf{Pretrained} & \textbf{R-1} & \textbf{R-2} & \textbf{R-L} & \textbf{BS} \\ \midrule BERTSUMABS & 38.76 & 16.33 & 31.15 & 37.60 \\ BERTSUMEXTABS & 38.81 & 16.50 & 31.27 & 38.14 \\ T5 (Large) & 40.9 & 17.3 & 33.0 & - \\ BART & 45.14 & {22.27} & {37.25} & - \\ HAT-BART & \textbf{45.92} & \textbf{22.79} & \textbf{37.84} & - \\ \midrule BART - Ours & 44.97 & 21.68 & 36.47 & 52.89 \\ BART + SentTrans. & 45.12 & 21.62 & 36.46 & 52.95 \\ HierGNN-BART (LSR) & 45.19 & 21.71 & 36.59 & 52.94 \\ HierGNN-BART (LIR) & {45.39} & {21.89} & {36.81} & \textbf{53.15} \\ \bottomrule \end{tabular}} \caption{Automatic evaluation results in ROUGE scores, BERTScore (BS) on XSum. All of our HierGNN-PGN models are trained without a coverage mechanism. We use \textbf{bold} for the best model. } \label{tab:xsum_result} \end{table} \noindent \textbf{Baselines.} We compare HierGNN with three types of baselines: 1) the base models for developing HierGNN; and 2) several strong non-pretrained and pretrained baselines; 3) abstractive summarizers boosted with the hierarchical information. We compare HierGNN-PGN with the non-pretrained baselines. We first include the \textbf{LEAD-3} \cite{nallapati2017summarunner} that simply selects the top three sentences in the article as the summary. \textbf{StructSum} \cite{balachandran-etal-2021-structsum} is a PGN-based model, which incorporates structure information by an explicit attention mechanism (ES Attn) on a coreference graph and implicit attention mechanism (IS Attn) on an end-to-end learned document structure. StructSum ES+IS Attn uses both implicit and explicit structures. We compare HierGNN-PGN with the pretrained baselines. \textbf{BERTSumAbs} and \textbf{BERTSumExtAbs} are two abstractive models by \newcite{liu2019BERTSum} based on the BERT encoder. We also incorporate a strong multitask sequence generation model, \textbf{T5-Large}. \textbf{Hie-BART} \cite{akiyama-etal-2021-hieBart} enhances BART by jointly modeling the sentence and token-level information in the self-attention layer. \textbf{HAT-BART} \cite{rohde2021hierarchicalBART} appends a sentential Transformer block on top of BART's encoder to model the sentence-level dependencies. We also develop a baseline, \textbf{BART+SentTrans.}, replacing our MTC block with a Transformer block. This baseline uses a comparable number of parameters to our HierGNN. We aim to verify the advantage of modeling the document's hierarchical information by MTC over just increasing the model size. \section{Results} \begin{table}[] \centering \scalebox{0.8}{ \begin{tabular}{ccccc} \toprule \textbf{Model} & \textbf{Rel.} & \textbf{Inf.} & \textbf{Red.} & \textbf{Overall} \\ \midrule BERTSUMABS & *-0.43 & *-0.33 & -0.11 & *-0.29 \\ T5 & 0.08 & -0.09 & 0.05 & 0.01 \\ BART & 0.15 & \textbf{0.24} & -0.04 & 0.12 \\ HierGNN-BART & \textbf{0.20} & 0.19 & \textbf{0.09} & \textbf{0.16} \\ \bottomrule \end{tabular}} \caption{Results for the human evaluation based on i) Relevance (Rel.), ii) Informativeness (Inf.), and iii) Redundancy (Red.). * indicates statistically significant improvements over the baselines with our model (*: by pair-wise t-test with $p < 0.05$, corrected using Benjamini–Hochberg method to control the False Discovery Rate \cite{benjamini1995controllingfalsedicoveryrate(fdr)} for multiple comparison). We \textbf{bold} the best results in each criteria and the overall evaluation. Detailed results are given in Appendix~\ref{sec:human_eval_appendix}.} \label{tab:human_eval} \end{table} \noindent \textbf{Automatic Evaluation.} We evaluate the quality of summaries through ROUGE F-1 scores \cite{lin-och-2004-rougeL} by counting the unigram (R-1), bigram (R-2) and longest common subsequence (R-L) overlaps. To avoid the use of pure lexical overlap evaluation \cite{huang2020whatwehaveachievedinSummarization}, we also use BERTScore \cite{zhang2019bertscore}. \begin{table}[] \centering \scalebox{0.8}{ \begin{tabular}{lcccc} \toprule \multicolumn{1}{c}{} & \textbf{R-1} & \textbf{R-2} & \textbf{R-L} & \textbf{BS} \\ \midrule \textbf{Full Model} & 30.24 & 10.43 & 24.20 & 27.36 \\ \midrule w/o HierGNN Module & -0.54 & -1.22 & -0.96 & -4.20 \\ w/o Graph-select (GSA) & -0.41 & -0.41 & -0.17 & -0.27 \\ w/o Sparse MTC & -0.14 & -0.25 & +0.05 & -0.41 \\ w/o Graph Fusion & -0.94 & -0.81 & -0.77 & -1.39 \\ \bottomrule \end{tabular}} \caption{Ablation study of each modules in our HierGNN-PGN (LIR) model on XSum.} \label{tab:ablation} \end{table} We summarize the results for non-pretrained and pretrained models on CNN/DM and XSum in the upper and bottom block of Table~\ref{tab:cnndm_rouge} and Table~\ref{tab:xsum_result}, respectively. Our HierGNN module improves the performance over the PGN and BART for both CNN/DM and XSum, demonstrating the effectiveness of our reasoning encoder for the non-pretrained and pretrained summarizers. Secondly, the best model of HierGNN-PGN achieves higher scores than StructSum ES and ES+IS that explicitly construct the document-level graph representation using an external parser in pre-processing. This indicates our learned hierarchical structure can be effective and beneficial for downstream summarization without any supervision. HierGNN-BART also outperforms Hie-BART, HAT-BART and BART+SentTrans., which indicates that the MTC encoder's inductive bias is effective in modeling useful structure. \begin{table}[t] \centering \scalebox{0.70}{ \begin{tabular}{lrr} \toprule \textbf{Model} & \textbf{Coverage} ($\nearrow$) & \textbf{Copy Length} ($\searrow$) \\ \midrule Reference & \textbf{20.27 $\%$} & \textbf{5.10 } \\ \midrule Pointer-Generator & 11.78 $\%$ & 18.82 \\ Ours $w/o$ Graph Select Attn. & 13.74 $\%$ & 18.88 \\ Ours $w/$ Graph Select Attn. & \textbf{15.22} $\%$ & \textbf{16.80} \\ \bottomrule \end{tabular}} \caption{Results of average copying length of sequences and coverage of the source sentences for the CNN/DM datasets. Arrows ($\nearrow$ or $\searrow$) indicate that larger or lower scores are better, respectively. } \label{tab:copy_coverage_analysis} \end{table} \noindent \textbf{Human Evaluations.} We also invited human referees from Amazon Mechanical Turk to assess our model and additional three pure abstractive baselines including BERTSUMABS, T5-Large, BART on CNN/DM testing set. Our assessment focuses on three criteria: i) Relevance (\textit{Whether the conveyed information in the candidate summary is relevant to the article}?), ii) Informativeness (\textit{How accurate and faithful information does the candidate summary convey}?), and iii) Redundancy (\textit{Whether the sentences in each candidate summary are non-redundant with each other}?). The detailed settings for human evaluation are presented in Appendix~\ref{sec:human_eval_details}. We ask the referees to choose the best and worst summaries from the four candidates for each criterion. The overall scores in Table~\ref{tab:human_eval} are computed as the fraction of times a summary was chosen as best minus the fraction it was selected as worst. The results show that our HierGNN-BART achieves the overall best performance. Moreover, while BART has a slightly better informativeness score, HierGNN-BART produces better summaries in terms of Relevance and Redundancy. \noindent \textbf{Ablations.} We conduct an ablation study (in Table \ref{tab:ablation}) of the HierGNN encoder, graph-selection attention, sparse MTC and graph fusion layer. The ablation is done on our HierGNN-PGN LIR model trained on XSum. The ablation in HierGNN reasoning module significantly degrades the model, which suggests the positive contribution of the functionality in across-sentence reasoning. The scores without GSA also confirm the guidance of graph-level information is beneficial. By removing the graph fusion layer, we again observe the performance decreases, which proves the benefits of fusing the neighbor feature from multiple hopping distances. Finally, the results also confirm the superiority of the sparse MTC over the dense MTC for learning effective hierarchical structure for summarization. \begin{table}[t] \centering \scalebox{0.9}{ \begin{tabular}{lccc} \toprule & \textbf{R-1} & \textbf{R-2} & \textbf{BS} \\ \midrule BART & 49.41 & 21.70 & 19.12 \\ HierGNN-BART & \textbf{49.62} & \textbf{21.74} & \textbf{20.32} \\ \bottomrule \end{tabular}} \caption{Summarization performance on PubMed. We test BART and HierGNN-BART with the same hyperparameters settings.} \label{tab:pubmed-result} \end{table} \begin{figure}[t] \centering \includegraphics[width=0.9\linewidth]{figures/pubmed-residual.pdf} \caption{Performance gap on PubMed between HierGNN-BART with BART when summarizing articles truncated at different lengths. The gap between HierGNN and BART consistently increases with input length.} \label{fig:pubmed-res} \end{figure} \section{Discussion} \noindent \textbf{Coverage and Copy Length.} We report two metrics introduced by \newcite{see-etal-2017-getTothePoint} in Table~\ref{tab:copy_coverage_analysis}. The coverage rate measures how much information in the source article is covered by the summary, while the average copy length indicates to what extent that summarizer directly copies tokens from the source article as its output. The higher coverage rate achieved by our HierGNN indicates that it can produce summaries with much richer information in the source article. \citeauthor{balachandran-etal-2021-structsum} find that PGN tends to over-copy content from the source article thus degenerating into an extractive model, particularly with more extractive datasets such as CNN/DM. We find that the graph-selection attention significantly reduces the average copy length, indicating that it informs the decoder to stop copying by leveraging the learned structural information in the encoder and that it reduces the reliance on PGN's copying functionality \cite{see-etal-2017-getTothePoint}. We show a qualitative example for the graph-selection attention outcome in Appendix \ref{sec:gsa_analysis}. \begin{table}[] \scalebox{0.79}{ \begin{tabular}{lcccc} \toprule \textbf{CNN/DM} & \textbf{Comp.} & \textbf{2-hop} & \textbf{3-hop} & \textbf{4-hop} \\ \midrule Reference & 63.03 & 32.08 & 4.59 & 0.31 \\ \midrule BART & 79.52 & 17.81 & {2.43} & {0.24} \\ HierGNN-BART & {78.13}($\downarrow$) & {19.29}($\uparrow$) & 2.36($\downarrow$) & 0.21($\downarrow$) \\ \midrule \midrule \textbf{XSum} & \textbf{Comp.} & \textbf{2-hop} & \textbf{3-hop} & \textbf{4-hop} \\ \midrule Reference & 34.87 & 42.50 & 18.79 & 3.83 \\ \midrule BART & 28.47 & 42.51 & 23.05 & {5.98} \\ HierGNN-BART & {27.27}($\downarrow$) & {42.53}($\uparrow$) & {24.31}($\uparrow$) & 5.89($\downarrow$) \\ \bottomrule \end{tabular}} \caption{Percentages of summary sentences are synthesized by compression (information is extracted from a single source sentence) and fusion (information is combined from two or more source sentences). We use $\downarrow$ and $\uparrow$ to mark the changes between BART and HierGNN. } \label{tab:fusion-analysis} \end{table} \noindent \textbf{Layer-shared or Layer-independent Reasoning?} In Tables \ref{tab:cnndm_rouge} and \ref{tab:xsum_result}, we observe that the layer-shared reasoning (LSR) architecture for HierGNN-PGN on CNN/DM outperforms the layer-independent reasoning (LIR) architecture, with the opposite being true for XSum. We attribute this difference to the inductive bias of the base model and the essential difference between the CNN/DM and XSum datasets. PGN-based models tend to copy and degenerate the model into an extractive summarizer \cite{balachandran-etal-2021-structsum}. With a more extractive dataset like CNN/DM, a complex reasoning procedure for the PGN-based model may not be necessary; instead, learning a single hierarchical structure and selecting the sentences to be copied accordingly is sufficient. However, XSum summaries are abstractive, and the dataset emphasizes combining information from multiple document sites (see discussion by \citealt{narayan2019article}). LIR then shows its advantage by learning separate hierarchical structure in each layer. For an abstractive base model (BART), LIR consistently outperforms LSR on both CNN/DM and XSum. \noindent \textbf{Compression or Fusion?} To assess whether sentence fusion happens often, we quantify the ratio of sentence compression and sentence fusion that the model uses to generate summaries in Table~\ref{tab:fusion-analysis} \cite{lebanoff2019scoringSentenceSingletons}. In comparison to BART, HierGNN reduces the proportion of sentence compression in both CNN/DM and XSum. Furthermore, the summarization models tend to adopt sentence compression more than exists in human-written references for CNN/DM, while more sentence fusion is used for XSum. This observation reveals that mechanism learned by end-to-end for neural summarizers to produce summaries is different than that humans use. Human editors can flexibly switch between compression and fusion; the summarization models tend to adopt one of them to produce the output. \begin{figure} \captionsetup[subfigure]{labelformat=empty} \centering \begin{subfigure}{} \begin{minipage}[]{\linewidth} \includegraphics[width=1\linewidth]{figures/Intra-similarity.pdf} \\ \vspace{0.3cm} \includegraphics[width=1\linewidth]{figures/Inter-similarity.pdf} \end{minipage} \end{subfigure}% \vspace{-0.4cm} \caption{Layer-wise intra-layer diversity (top) and inter-layer diversity (bottom) for BART with 2-layer HierGNN equipped with Sparse and Dense MTC.} \label{fig:mtc-similarity-measure} \end{figure} \noindent \textbf{Effectiveness for Longer Sequence.} The performance of sequence-to-sequence models decays as the length of the input sequence increases \cite{j.2018generating-wikipedia-by-summarizaing-long-sequences} because they do not capture long-range dependencies. We hypothesize that HierGNN has a better capability in capturing such dependencies via its learned document hierarchical structure, thus enhancing the performance for long-sequence inputs. To verify this, we further conduct experiments on PubMed \cite{cohan2018ArxivPubMedBenchmark}, a long-document summarization dataset with scientific articles in the medical domain. We summarize the performance in Table \ref{tab:pubmed-result}. We notice that HierGNN improves BART by a large margin. We further evaluate the advantages of HierGNN over vanilla BART with respect to inputs of various lengths. As shown in Figure~\ref{fig:pubmed-res}, when the input is longer than 1.6K tokens, HierGNN has a positive advantage over BART. As the input length increases, the advantage of HierGNN consistently becomes larger. \begin{figure}[t] \centering \includegraphics[width=\linewidth]{figures/reasoning_chain.pdf} \caption{Top: the top-3 sentences with highest/lowest root probabilities, reference and summaries in article 23 in CNN/DM testing split. We underline the relevant contents; Bottom: visualizations for our sparse (Left) and the dense (Right) MTC layer for HierGNN-BART.} \label{fig:reasoning_case} \end{figure} \noindent \textbf{Sparse MTC or Dense MTC?} We also study the expressive ability of our adaptive sparse variant of the matrix tree computation. We design two quantitative metrics: 1) \textit{Intra-layer diversity} measures the diversity for the marginal distributions of roots and edges in each MTC layer, which is calculated by the range of the probability distribution; 2) \textit{Inter-layer diversity} measures the diversity for the marginal distributions of roots and edges between MTC layers, which is calculated by the average Jensen-Shannon (JS) Divergence between the marginal distributions of roots and edges in different layers \cite{zhang2021sparseReluAttention,correia2019adaptivelySparseTransformer}. We compare both intra-layer and inter-layer diversity for our adaptively sparse MTC and the original dense MTC \cite{koo2007structuredpredictionMatrixTreeTheorm,liu-etal-2019-SummarizationasTreeInduction,balachandran-etal-2021-structsum}. Figure~\ref{fig:mtc-similarity-measure} shows that our sparse variant of MTC has a higher diversity in both intra- (Top) and inter-layer (Bottom) metrics for CNN/DM and XSum, indicating that our sparse MTC has a more powerful expressive ability than dense MTC. We find that the sparsity of HierGNN is different across layers and datasets: 1) 99.66\% of HierGNN's predictions for XSum instances have at least one element that is sparsified to zero, while this proportion is 24.22\% for CNN/DM; 2) Almost all the sparsified elements in HierGNN's predictions for XSum are edges, while roots for CNN/DM; 3) 90.32\% of the elements of the edge distribution in the second MTC layer are sparsified in XSum, but no any sparsified element in the first layer. In CNN/DM, the proportion of sparsified elements in the first and second layer are almost identical. These observations reveal that sparse MTC can adaptively choose whether sparse out elements in root or edge distributions, thus boosting the richness of the structural information represented by MTC. We finally show a qualitative case with three sentences per article, having the highest or lowest root probabilities (see Figure~\ref{fig:reasoning_case}), and the heatmap visualization of the learned hierarchical structures from sparse and dense MTC. We observe that the highest-probability root sentences tend to be summary-worthy while also scattering in different positions of the article, and the lowest probability is irrelevant. The structure learned by Sparse MTC tends to be more diverse and can successfully sparsify out the sentence nodes with irrelevant contents, e.g., 18th and 20th sentence. \section{Conclusion} We propose HierGNN that can be used in tandem with existing generation models. The module learns the document hierarchical structure while being able to integrate information from different parts of the text as a form of reasoning. Our experiments verify that HierGNN is effective in improving the plain sequential summarization models. \section*{Limitations} The inductive bias of our HierGNN model has an assumption that the source article follows an ``inverted pyramid'' style of writing. This may pose limitations in the generalization of our model to other categories of input documents with no or a weak hierarchical structure. Future work includes understanding the limitations of HierGNN in different input domains (e.g., conversation summarization). Additionally, as other large-scale pretrained neural summarizers, our approach with an additional HierGNN encoder increases model complexity. To train our BART-based system, GPUs with at least 32GB of memory are required. Future work may focus on distilling the large HierGNN model into a much smaller size while retaining its original performance. \section*{Ethical and Other Considerations} \paragraph{Human evaluations.} Human workers were informed of the intended use of the provided assessments of summary quality and complied with the terms and conditions of the experiment, as specified by Amazon Mechanical Turk.\footnote{\url{https://www.mturk.com}} In regards to payment, workers were compensated fairly with the wage of \pounds9 hourly (higher than the maximum minimum wage in the United Kingdom) i.e.\ \pounds4.50 \, per HIT at 2 HITs per hour.\footnote{\url{https://www.gov.uk/national-minimum-wage-rates}} \paragraph{Computing time.} We first report the computing time for our most computationally intense HierGNN-BART (471 million parameters) using NVIDIA Tesla A100 with 40G RAM: with CNN/DM, the training takes around 81 GPU hours, and the inference takes 9.39 GPU hours. With XSum, the training takes around 32 GPU hours, and the inference takes 4.41 GPU hours. Additionally, training of HierGNN-PGN (32 million parameters) on CNN/DM takes 0.79 seconds per iteration using 1 NVIDIA V100 GPU card with 16GB. We estimate the inference time is 4.02 documents per second. \section{Introduction} \begin{table}[t] \resizebox{\linewidth}{!}{% \begin{tabular}{p{10cm}} \toprule \textbf{Article Sentences:} \\ 1. {\color[HTML]{00659a}The town is home to the prestigious Leander Club, which has trained more than 100 Olympic medal-winning rowers}. \\ \textit{- 2 sentences are abbreviated here.} \\ 4. {\color[HTML]{9a0018}The Royal Mail has painted more than 50 postboxes gold following Team GB's gold medal haul at London 2012}. \\ 5. Originally it said it was only painting them in winners home towns, or towns with which they are closely associated. \\ 6. Town mayor Elizabeth Hodgkin said: `` {\color[HTML]{00659a}We are the home of rowing} ... I feel very excited about it." \\ \textit{- 5 sentences are abbreviated here.} \\ 12. The {\color[HTML]{006601}Henley-on-Thames postbox} was painted on Friday. \\ \textit{- one sentence is abbreviated here.} \\ \midrule \textbf{Reference Summary:} {\color[HTML]{9a0018}The Royal Mail has painted a postbox gold} in the {\color[HTML]{006601}Oxford-shire town of Henley-on-Thames} - {\color[HTML]{00659a}in recognition of its medal} {\color[HTML]{00659a}winning rowing club}. \\ \midrule \textbf{BART's Summary:} {\color[HTML]{006601}A postbox in Henley-on-Thames} has been {\color[HTML]{9a0018}painted gold as part of the Royal Mail's `` Olympic gold '' campaign}. \\ \midrule \textbf{Our HierGNN's Summary:} {\color[HTML]{9a0018}A Royal Mail postbox} in {\color[HTML]{006601}Henley-on-Thames} has been {\color[HTML]{9a0018}painted gold} in {\color[HTML]{00659a}honour of the town 's Olympic rowing success}. \\ \bottomrule \end{tabular}% } \caption{Example of an article from XSum with summaries given by human-written reference, BART \cite{lewis2020bart} and our HierGNN equipped with BART. BART's summary fails to capture all information pieces as the reference (as highlighted in various colors), while HierGNN has advantages in combining the information from multiple locations in the source side.} \label{tab:summarization_illustration} \end{table} Sequential neural network architectures in their various forms have become the mainstay in abstractive summarization \cite{see-etal-2017-getTothePoint,lewis2020bart}. However, the quality of machine-produced summaries still lags far behind the quality of human summaries \cite{huang2020whatwehaveachievedinSummarization,xie-etal-2021-factual-consistency,cao-etal-2022-hallucinated-factuality,lebanoff2019scoringSentenceSingletons}. Due to their sequential nature, a challenge with neural summarizers is to capture hierarchical and inter-sentential dependencies in the summmarized document. Progress in cognitive science suggests that humans construct and reason over a latent hierarchical structure of a document when reading the text in it \cite{graesser1994constructingInferenceDuringNarrativeTextComprehension,goldman1999narrative}. Such \textit{reasoning behavior} includes uncovering the salient contents and effectively aggregating all related clues spreading across the documents to understand the document. \citet{lebanoff2019scoringSentenceSingletons} found that human editors usually prefer writing a summary by fusing information from multiple article sentences and reorganizing the information in summaries (sentence fusion), rather than dropping non-essential elements in an original sentence such as prepositional phrases and adjectives (sentence compression). Different summarization benchmarks show there are between 60-85\% summary sentences that are generated by sentence fusing. These recent findings support our motivation to make use of hierarchical document structure when summarizing a document. We present a document hierarchy-aware graph neural network (HierGNN), a neural encoder with a reasoning functionality that can be effectively incorporated into any sequence-to-sequence (seq2seq) neural summarizer. Our HierGNN first learns a latent hierarchical graph via a sparse variant of the matrix-tree computation \cite{koo2007structuredpredictionMatrixTreeTheorm,liu-etal-2019-SummarizationasTreeInduction}. It then formulates sentence-level reasoning as a graph propagation problem via a novel message passing mechanism. During decoding, a graph-selection attention mechanism serves as a source sentence selector, hierarchically indicating the attention module which tokens in the input sentences to focus on. Our experiments with HierGNN, incorporated into both pointer-generator networks \cite{see-etal-2017-getTothePoint} and BART \cite{lewis2020bart}, confirm that HierGNN substantially improves both the non-pretrained and pretrained seq2seq baselines in producing high-quality summaries. Specifically, our best HierGNN-BART achieves an average improvement of 0.55 and 0.75 points in ROUGE-1/2/L on CNN/DM and XSum. Compared with a plain seq2seq model, HierGNN encourages the summarizers to favor sentence fusion more than sentence compression when generating summaries. Modeling the hierarchical document structure via our sparse matrix-tree computation also enables HierGNN to treat long sequences more effectively. In addition, our sparse adaptive variant of the matrix-tree computation demonstrates a more powerful expressive ability over the original one \cite{koo2007structuredpredictionMatrixTreeTheorm,liu-etal-2019-SummarizationasTreeInduction}. We summarize our contributions as follows, \begin{itemizesquish}{-0.3em}{0.5em} \item We present a novel encoder architecture for improving seq2seq summarizers. This architecture captures the hierarchical document structure via an adaptive sparse matrix-tree computation, with a new propagation rule for achieving inter-sentence reasoning. \item We design a graph-selection attention mechanism to fully leverage the learned structural information during decoding in advantages over only using it in encoding. \item Results on CNN/DM and XSum demonstrates the effectiveness of HierGNN in improving the quality of summaries for both non-pretrained and pretrained baselines. An in-depth analysis confirms our module improves the integration of information from multiple sites in the input article and that it is more effective in processing long sequence inputs. \end{itemizesquish} \section{Related Work} \textbf{Neural Abstractive Summarization} \newcite{rush-etal-2015-neuralSummarization} first proposed to use a sequence-to-sequence model with an attention mechanism to perform sentence compression. \newcite{mendes-etal-2019-jointly} demonstrated the advantages and limitations of neural methods based on sentence compression. The pointer-generator networks (PGN; \citealt{see-etal-2017-getTothePoint}) enhances the attention model with a copying functionality. PGN has also been further extended to create summarization systems by incorporating the topic information \cite{topicAwarePGN}, document structural information \cite{song2018structureInfusedCopy}, semantic information \cite{hardy-vlachos-2018-AMRguidedSummarization}, and was improved by replacing the plain LSTM module with the more advanced Transformer model to overcome the difficulty in modeling long sequence input \cite{pilault-etal-2020-extractiveAbsTransformer,wang2021exploringExplainableSelectionControlAbsSummarization,fonseca2022}. For the pretrained models, BERTSum \cite{liu2019BERTSum} adopted the BERT encoder for the summarizer, with a randomly initialized decoder. \newcite{lewis2020bart} presented BART which pre-trains both the underlying encoder and decoder. \newcite{dou-etal-2021-gsum} investigated ``guidance signals'' (e.g., keywords, salient sentences) for further boosting the performances. \noindent \textbf{Graph Neural Approach for Summarization} Graph neural networks have demonstrated their ability to capture rich dependencies in documents to be summarized. \newcite{wang2020heterogeneousGraphSum} use a ``heterogeneous graph'' with sentence nodes and co-occurring word nodes to capture the sentence dependencies. \newcite{jin2020semsum} use two separate encoders to encode the input sequence with a parsed dependency graph. \newcite{cui-etal-2020-enhancing-extsum-with-topic-gnn} use a bipartite graph with a topic model to better capture the inter-sentence relationships. \newcite{kwon-etal-2021-considering-tree-structure-in-sent-extsummarization} capture both intra- and inter-sentence relationships via a nested tree structure. \newcite{zhu2021enhancingFactualbyKG} use entity-relation information from the knowledge graph to increase the factual consistency in summaries. Our approach is related to the structural attention model \cite{balachandran-etal-2021-structsum,liu-etal-2019-SummarizationasTreeInduction}, but differs in two major ways: (i) we introduce an adaptive sparse matrix-tree construction to learn a latent hierarchical graph and a novel propagation rule; (ii) we investigate to use the structure information both with the encoder and the decoder for abstractive summarization, and not just the encoder. These shows to be more effective for unsupervised learning of the latent hierarchical structure while can defeat the approach that leverages external graph constructor \cite{balachandran-etal-2021-structsum}. \iffalse \begin{equation}\label{eqt:maximum_likelihood} \mathcal{L}_{sum} = -\frac{1}{|C|}\sum_{(X,Y)\in C} log P(Y|X;\Theta). \end{equation} where $\Theta$ includes all trainable parameters. \fi \section{Hierarchy-aware Graph Neural Encoder}\label{sec:graph_construct} HierGNN learns the document structure in an end-to-end fashion without any direct structure supervision, and does not need an external parser to construct the structure, unlike previous work \citep{balachandran-etal-2021-structsum,huang-etal-2020-knowledgegraphaugmentedsummarization,wang2020heterogeneousGraphSum,cardenas2022trade}. In addition, it empirically improves over supervised graph construction, which has been a challenge \cite{balachandran-etal-2021-structsum}. Sequential summarizers encode an $N$-token article, $X = (x_1, \cdots, x_N)$ as $d$-dimensional latent vectors using an encoding function $\mathbf{h}_{enc}(x_t) \in \mathbb{R}^{d}$ and then decodes them into the target summary $Y$. (We denote by $\mathbf{h}_{enc}(X)$ the sequence of $x_t$ encodings for $t \le N$.) Our model includes four modules in addition to this architecture: 1) a sparse matrix-tree computation for inferring the document hierarchical structure, ii) a novel message-passing layer to identify inter-sentence dependencies, iii) a reasoning fusion layer aggregating the outputs of the message-passing module; and vi) a graph-selection attention module to leverage the encoded structural information. \subsection{Learning the Latent Hierarchical Structure} We first introduce our latent structure learning algorithm that makes use of a sparse variant of the matrix-tree theorem \cite{1986GraphTB,koo2007structuredpredictionMatrixTreeTheorm}. \noindent \textbf{Latent Document Hierarchical Graph.} We represent the document as a complete weighted graph, with each node representing a sentence. The edge weights are defined as the marginal probability of a directional dependency between two sentences. In addition, each sentence node has an extra probability value, the ``root probability'' which indicates the \textit{hierarchical role} of the sentence, such as the roles of \emph{the lead}, \emph{most important facts}, or \emph{other information} defined based on the inverted pyramid model for news articles \cite{po2003news,ytreberg2001moving}. Intuitively, a sentence with a high root probability (high hierarchical position) conveys more general information; namely, it is a \textit{connector}, while a sentence with a lower root probability (\textit{information node}) carries details supporting its higher connectors. The underlying graph structure is latent and not fixed, summed out in our overall probability model using the matrix-tree theorem. \begin{figure*}[] \centering \includegraphics[width=0.75\textwidth]{figures/arch.pdf} \caption{Architecture for the sequence-to-sequence model with HierGNN reasoning encoder.} \label{fig:hiergnn_layer} \end{figure*} \noindent \textbf{Sparse Matrix-Tree Computation.} For an article with $M$ sentences, we start from the sentence embeddings as the node initialization $H^{(0)} = [\mathbf{s}_1, ..., \mathbf{s}_i, ..., \mathbf{s}_M]$. We then use two independent non-linear transformations to obtain a pair of \textit{parent} and \textit{child} representation for each sentence, \begin{align} \mathbf{s}_i^{(p)} &= \sigma(W_p\mathbf{{s}}_i+b_p), \\ \mathbf{s}_i^{(c)} &= \sigma(W_c\mathbf{{s}}_i+b_c), \end{align} \noindent where $W_p, W_c, b_p, b_c$ are parameters, $\sigma$ is the ReLU activation function \cite{dahl2013ReLU}. The standard use of the matrix-tree theorem \cite{1986GraphTB} computation (MTC; \citealt{smith2007probabilistic,koo2007structuredpredictionMatrixTreeTheorm,mcdonald2007complexity}) includes the exponential function to calculate a matrix $F\in \mathbb{R}^{M\times M}$ with positive values with each element $f_{ij}$ representing the weight of the directional edge from a node $s_i$ to $s_j$; and a positive vector of root scores $\mathbf{f}^{(root)} \in \mathbb{R}^M$. However, having a dense matrix degrades our graph reasoning module by including irrelevant information from redundant $M$ sentence nodes. Inspired by the work about sparse self-attention \cite{zhang2021sparseReluAttention,correia2019adaptivelySparseTransformer}, we introduce an adaptive solution to inject sparsity into MTC. We replace the exponential scoring function with the ReLU function ($\mathrm{ReLU}(x \in \mathbb{R}) = \max \{ x, 0 \}$ and similarly coordinate-wise when $x$ is a vector) and calculate the root $f_i^{(root)}$ and edge scores $f_{ij}$ by a fully-connected layer and a bi-linear attention layer, respectively, \begin{align} f_i^{(root)} &= \textsc{ReLU}(W_r \mathbf{s}_i^{(p)} + b_r) + \varepsilon, \\ f_{ij} &= \textsc{ReLU}({{\mathbf{s}_i^{(p)}}^\top W_{bi}{\mathbf{s}_j^{(c)}} }) + \varepsilon, \end{align} \noindent where $W_{bi}, W_r, b_r$ are learnable. (We use $\varepsilon=10^{-6}$ to avoid matrix non-invertibility issues.) Compared to the exponential function, ReLU relaxes $F$ and $\mathbf{f}^{(root)}$ to be non-negative, thus being capable of assigning zero probability and pruning dependency edges and roots. We finally plug in these quantities to the standard MTC \cite{1986GraphTB} and marginalize the edge and root probabilities as the adjacency matrix $A(i,j)=P(z_{ij}=1)$ and root probability $p^{r}_{i}$ representing the hierarchical role (i.e., the likelihood to be a connector) of each sentence. \subsection{Reasoning by Hierarchy-aware Message Passing} We present a novel message-passing mechanism over the learned hierarchical graph. This mechanism realizes the inter-sentence reasoning where connectors can aggregate information from their related information nodes while propagating the information to others. For the $i$-th sentence node, the edge marginal controls the aggregation from its $K$ information nodes; and the root probability controls the neighbouring information is combined as $i$-th node's update $\mathbf{u}^{(l)}$ in the $l$-th reasoning layer, \begin{equation} \mathbf{u}^{(l)}_i = (1-p^{r}_{i}) \mathcal{F}_r(\mathbf{s}_i^{(l)}) + (p^{r}_{i}) \sum_{k=1}^K A_{ik} \mathcal{F}_n(\mathbf{s}_k^{(l)}), \end{equation} where $\mathcal{F}_r$ and $\mathcal{F}_n$ are parametric functions. Intuitively, if a sentence is a \textit{connector}, it should have strong connectivity with the related \textit{information nodes}, and aggregate more details. Each information node learns to either keep the uniqueness of its information or fuse the information from the connectors. To filter out the unnecessary information, we adopt a gated mechanism as the information gatekeeper in the node update, \begin{align} \mathbf{g}_i^{(l)} &= \sigma (\mathcal{F}_g([\mathbf{u}_i^{(l)}; \mathbf{h}_i^{(l)}])), \\ \mathbf{h}_i^{(l+1)} &= \text{LN}(\mathbf{g}_i^{(l)} \odot \mathcal{\phi}(\mathbf{u}_i^{(l)}) + (\mathbf{1}-\mathbf{g}_i^{(l)}) \odot \mathbf{h}_i^{(l)}), \end{align} where $\mathcal{F}_g$ is a parametric function and $\odot$ is the element-wise dot product. We use layer normalization (\textsc{LN}) to stabilize the output for the update function. The function $\sigma$ is the sigmoid function, and $\phi$ can be any non-linear function. \subsection{Reasoning Fusion Layer} We construct \emph{reasoning chains} that consist of $L$ hops by stacking $L$ HierGNN blocks together. To handle cases where fewer than $L$ hops are needed, we add a fusion layer to aggregate the output from each reasoning hop to produce the final output of HierGNN. A residual connection is also introduced to pass the node initialization directly to the output, \begin{equation} \mathbf{h}^{(G)}_i = (W_g[\mathbf{h}^{(1)}_i, ..., \mathbf{h}^{(L)}_i] + b_g) + \mathbf{h}^{(0)}_i, \end{equation} \noindent where $W_g, b_g$ are learnabale parameters. We use two approaches for layer use: (a) \textit{Layer-Shared Reasoning (LSR)}: we construct a shared reasoning graph first, followed by $L$ message passing layers for reasoning; (b) \textit{Layer-Independent Reasoning (LIR)}: we learn the layer-wise latent hierarchical graphs independently, where each message passing layer uses its own graph. \subsection{Graph-selection Attention Mechanism} In addition to token-level decoding attention, we propose a \textit{graph-selection attention mechanism} (GSA) to inform the decoder with learned hierarchical information, while realizing the sentence-level content selection. In each decoding step $t$, our decoder first obtains a graph context vector, $\mathbf{c}_G^t$, which entails the global information of the latent hierarchical graph. We first compute the graph-level attention distribution $\mathbf{a}_G^t$ by, \begin{align} e^t_{v_i} &= \textsc{Attn}^{(G)}(\mathbf{h}^{(L)},\mathbf{z}_t), \\ \mathbf{a}_G^t &= \textsc{Softmax}(\mathbf{e}^t), \end{align} where $\textsc{Attn}^{(G)}$ is a graph attention function. The vectors $\mathbf{h}_i^{(L)} \in \mathbb{R}^d, \mathbf{z}_t \in \mathbb{R}^d$ are the $L$-th layer node embeddings for sentence $i$ and decoding state at time $t$, respectively. The graph context vector $\mathbf{c}_G^t \in \mathbb{R}^d$ is finally obtained by summing all $\mathbf{h}_i^{(L)}$ weighted by $\mathbf{a}_G^t$. The value of $\mathbf{c}_G^t$ is used as an additional input for computing token-level attention, \begin{align} e_{i}^t &= \textsc{Attn}^{(T)}(\mathbf{h}_{enc}(X), \mathbf{z}_t,\mathbf{c}_G^t), \\ \mathbf{a}_T^t &= \textsc{Softmax}(\mathbf{e}^t), \end{align} where $\textsc{Attn}^{(T)}$ is a token-level attention function \cite{luong-etal-2015-effective,vaswani2017attention}. Again, the token-attentional context vector $\mathbf{c}_{f}^t$ is computed by summing the encoder outputs weighted by $\mathbf{a}_T^t$. The final context vector $\mathbf{c}_{f}^t$ is fused from the graph $\mathbf{c}_G^t$ and token context vectors $\mathbf{c}_T^t$ with a parametric function $g_{f}$, $\mathbf{c}_{f}^t = g_{f}(\mathbf{c}_G^t, \mathbf{c}_T^t)$. \section{Experimental Setting} \noindent\textbf{Benchmarks.} We evaluate our model on two common document summarization benchmarks. The first is the CNN/Daily Mail dataset \cite{hermann2015CNNDMdataset} in the news domain, with an average input of 45.7 sentences and 766.1 words, and a reference with an average length of 3.59 sentences and 58.2 words. We use the non-anonymized version of \newcite{see-etal-2017-getTothePoint}, which has 287,084/13,367/11,490 instances for training, validation and testing. The second dataset we use is XSum \cite{narayan2018donXSum}, a more abstractive benchmark consisting of one-sentence human-written summaries for BBC news. The average lengths for input and reference are 23.26 sentences with 430.2 words and 1 sentence with 23.3 words, respectively. We follow the standard split of \newcite{narayan2018donXSum} for training, validation and testing (203,028/11,273/11,332). \noindent \textbf{Implementations.} We experiment with the non-pretrained PGN of \newcite{see-etal-2017-getTothePoint} and the pretrained BART model \cite{lewis2020bart}. The implementation details are in Appendix~\ref{sec:model_implementations}. \begin{table}[t] \centering \scalebox{0.75}{ \begin{tabular}{lcccccc} \toprule \multicolumn{1}{l}{\textbf{Non-pretrained}} & \textbf{R-1} & \textbf{R-2} & \textbf{R-L} & \textbf{BS} \\ \toprule LEAD-3 & 40.34 & 17.70 & 36.57 & - \\ PGN & 39.53 & 17.28 & 36.38 & - \\ StructSum ES & 39.63 & 16.98 & 36.72 & - \\ StructSum LS & 39.52 & 16.94 & 36.71 & - \\ StructSum (LS + ES) & 39.62 & 17.00 & \textbf{36.95} & 21.70 \\ \midrule PGN - Ours & 39.07 & 16.97 & 35.87 & 23.74 \\ HierGNN-PGN (LSR) & \textbf{39.87} & \textbf{17.77} & 36.85 & \textbf{25.64}\\ HierGNN-PGN (LIR) & 39.34 & 17.39 & 36.44 & 25.26 \\ \toprule \multicolumn{1}{l}{\textbf{Pretrained}} & \textbf{R-1} & \textbf{R-2} & \textbf{R-L} & \textbf{BS} \\ \toprule BERTSUMABS & 41.72 & 19.39 & 38.76 & 29.05 \\ BERTSUMEXTABS & 42.13 & 19.60 & 39.18 & 28.72 \\ T5-Large & 42.50 & 20.68 & 39.75 & - \\ BART & 44.16 & 21.28 & 40.90 & - \\ Hie-BART & 44.35 & 21.37 & 41.05 & - \\ HAT-BART & 44.48 & 21.31 & 41.52 & - \\\midrule BART - Ours & 44.62 & 21.49 & 41.34 & 33.98 \\ BART + SentTrans. & 44.44 & 21.44 & 41.27 & 33.90 \\ HierGNN-BART (LSR) & 44.93 & 21.7 & 41.71 & 34.43 \\ HierGNN-BART (LIR) & \textbf{45.04} & \textbf{21.82} & \textbf{41.82} & \textbf{34.59} \\ \bottomrule \end{tabular}} \caption{Automatic evaluation results in ROUGE scores, BERTScore (BS) on CNN/D . The top and bottom blocks show the comparison for non-pre-training and pre-training models separately. We use \textbf{bold} to mark the best abstractive model. } \label{tab:cnndm_rouge} \end{table} \begin{table}[t] \centering \scalebox{0.78}{ \begin{tabular}{lcccc} \toprule \textbf{Non-pretrained} & \textbf{R-1} & \textbf{R-2} & \textbf{R-L} & \textbf{BS} \\ \midrule LEAD-3 & 16.30 & 1.60 & 11.95 & - \\ Seq2Seq (LSTM) & 28.42 & 8.77 & 22.48 & - \\ Pointer-Generator & 29.70 & 9.21 & 23.24 & 23.16 \\ PGN + Coverage & 28.10 & 8.02 & 21.72 & - \\ \midrule HierGNN-PGN (LSR) & 30.14 & 10.21 & \textbf{24.32} & 27.24 \\ HierGNN-PGN (LIR) & \textbf{30.24} & \textbf{10.43} & {24.20} & \textbf{27.36} \\ \toprule \textbf{Pretrained} & \textbf{R-1} & \textbf{R-2} & \textbf{R-L} & \textbf{BS} \\ \midrule BERTSUMABS & 38.76 & 16.33 & 31.15 & 37.60 \\ BERTSUMEXTABS & 38.81 & 16.50 & 31.27 & 38.14 \\ T5 (Large) & 40.9 & 17.3 & 33.0 & - \\ BART & 45.14 & {22.27} & {37.25} & - \\ HAT-BART & \textbf{45.92} & \textbf{22.79} & \textbf{37.84} & - \\ \midrule BART - Ours & 44.97 & 21.68 & 36.47 & 52.89 \\ BART + SentTrans. & 45.12 & 21.62 & 36.46 & 52.95 \\ HierGNN-BART (LSR) & 45.19 & 21.71 & 36.59 & 52.94 \\ HierGNN-BART (LIR) & {45.39} & {21.89} & {36.81} & \textbf{53.15} \\ \bottomrule \end{tabular}} \caption{Automatic evaluation results in ROUGE scores, BERTScore (BS) on XSum. All of our HierGNN-PGN models are trained without a coverage mechanism. We use \textbf{bold} for the best model. } \label{tab:xsum_result} \end{table} \noindent \textbf{Baselines.} We compare HierGNN with three types of baselines: 1) the base models for developing HierGNN; and 2) several strong non-pretrained and pretrained baselines; 3) abstractive summarizers boosted with the hierarchical information. We compare HierGNN-PGN with the non-pretrained baselines. We first include the \textbf{LEAD-3} \cite{nallapati2017summarunner} that simply selects the top three sentences in the article as the summary. \textbf{StructSum} \cite{balachandran-etal-2021-structsum} is a PGN-based model, which incorporates structure information by an explicit attention mechanism (ES Attn) on a coreference graph and implicit attention mechanism (IS Attn) on an end-to-end learned document structure. StructSum ES+IS Attn uses both implicit and explicit structures. We compare HierGNN-PGN with the pretrained baselines. \textbf{BERTSumAbs} and \textbf{BERTSumExtAbs} are two abstractive models by \newcite{liu2019BERTSum} based on the BERT encoder. We also incorporate a strong multitask sequence generation model, \textbf{T5-Large}. \textbf{Hie-BART} \cite{akiyama-etal-2021-hieBart} enhances BART by jointly modeling the sentence and token-level information in the self-attention layer. \textbf{HAT-BART} \cite{rohde2021hierarchicalBART} appends a sentential Transformer block on top of BART's encoder to model the sentence-level dependencies. We also develop a baseline, \textbf{BART+SentTrans.}, replacing our MTC block with a Transformer block. This baseline uses a comparable number of parameters to our HierGNN. We aim to verify the advantage of modeling the document's hierarchical information by MTC over just increasing the model size. \section{Results} \begin{table}[] \centering \scalebox{0.8}{ \begin{tabular}{ccccc} \toprule \textbf{Model} & \textbf{Rel.} & \textbf{Inf.} & \textbf{Red.} & \textbf{Overall} \\ \midrule BERTSUMABS & *-0.43 & *-0.33 & -0.11 & *-0.29 \\ T5 & 0.08 & -0.09 & 0.05 & 0.01 \\ BART & 0.15 & \textbf{0.24} & -0.04 & 0.12 \\ HierGNN-BART & \textbf{0.20} & 0.19 & \textbf{0.09} & \textbf{0.16} \\ \bottomrule \end{tabular}} \caption{Results for the human evaluation based on i) Relevance (Rel.), ii) Informativeness (Inf.), and iii) Redundancy (Red.). * indicates statistically significant improvements over the baselines with our model (*: by pair-wise t-test with $p < 0.05$, corrected using Benjamini–Hochberg method to control the False Discovery Rate \cite{benjamini1995controllingfalsedicoveryrate(fdr)} for multiple comparison). We \textbf{bold} the best results in each criteria and the overall evaluation. Detailed results are given in Appendix~\ref{sec:human_eval_appendix}.} \label{tab:human_eval} \end{table} \noindent \textbf{Automatic Evaluation.} We evaluate the quality of summaries through ROUGE F-1 scores \cite{lin-och-2004-rougeL} by counting the unigram (R-1), bigram (R-2) and longest common subsequence (R-L) overlaps. To avoid the use of pure lexical overlap evaluation \cite{huang2020whatwehaveachievedinSummarization}, we also use BERTScore \cite{zhang2019bertscore}. \begin{table}[] \centering \scalebox{0.8}{ \begin{tabular}{lcccc} \toprule \multicolumn{1}{c}{} & \textbf{R-1} & \textbf{R-2} & \textbf{R-L} & \textbf{BS} \\ \midrule \textbf{Full Model} & 30.24 & 10.43 & 24.20 & 27.36 \\ \midrule w/o HierGNN Module & -0.54 & -1.22 & -0.96 & -4.20 \\ w/o Graph-select (GSA) & -0.41 & -0.41 & -0.17 & -0.27 \\ w/o Sparse MTC & -0.14 & -0.25 & +0.05 & -0.41 \\ w/o Graph Fusion & -0.94 & -0.81 & -0.77 & -1.39 \\ \bottomrule \end{tabular}} \caption{Ablation study of each modules in our HierGNN-PGN (LIR) model on XSum.} \label{tab:ablation} \end{table} We summarize the results for non-pretrained and pretrained models on CNN/DM and XSum in the upper and bottom block of Table~\ref{tab:cnndm_rouge} and Table~\ref{tab:xsum_result}, respectively. Our HierGNN module improves the performance over the PGN and BART for both CNN/DM and XSum, demonstrating the effectiveness of our reasoning encoder for the non-pretrained and pretrained summarizers. Secondly, the best model of HierGNN-PGN achieves higher scores than StructSum ES and ES+IS that explicitly construct the document-level graph representation using an external parser in pre-processing. This indicates our learned hierarchical structure can be effective and beneficial for downstream summarization without any supervision. HierGNN-BART also outperforms Hie-BART, HAT-BART and BART+SentTrans., which indicates that the MTC encoder's inductive bias is effective in modeling useful structure. \begin{table}[t] \centering \scalebox{0.70}{ \begin{tabular}{lrr} \toprule \textbf{Model} & \textbf{Coverage} ($\nearrow$) & \textbf{Copy Length} ($\searrow$) \\ \midrule Reference & \textbf{20.27 $\%$} & \textbf{5.10 } \\ \midrule Pointer-Generator & 11.78 $\%$ & 18.82 \\ Ours $w/o$ Graph Select Attn. & 13.74 $\%$ & 18.88 \\ Ours $w/$ Graph Select Attn. & \textbf{15.22} $\%$ & \textbf{16.80} \\ \bottomrule \end{tabular}} \caption{Results of average copying length of sequences and coverage of the source sentences for the CNN/DM datasets. Arrows ($\nearrow$ or $\searrow$) indicate that larger or lower scores are better, respectively. } \label{tab:copy_coverage_analysis} \end{table} \noindent \textbf{Human Evaluations.} We also invited human referees from Amazon Mechanical Turk to assess our model and additional three pure abstractive baselines including BERTSUMABS, T5-Large, BART on CNN/DM testing set. Our assessment focuses on three criteria: i) Relevance (\textit{Whether the conveyed information in the candidate summary is relevant to the article}?), ii) Informativeness (\textit{How accurate and faithful information does the candidate summary convey}?), and iii) Redundancy (\textit{Whether the sentences in each candidate summary are non-redundant with each other}?). The detailed settings for human evaluation are presented in Appendix~\ref{sec:human_eval_details}. We ask the referees to choose the best and worst summaries from the four candidates for each criterion. The overall scores in Table~\ref{tab:human_eval} are computed as the fraction of times a summary was chosen as best minus the fraction it was selected as worst. The results show that our HierGNN-BART achieves the overall best performance. Moreover, while BART has a slightly better informativeness score, HierGNN-BART produces better summaries in terms of Relevance and Redundancy. \noindent \textbf{Ablations.} We conduct an ablation study (in Table \ref{tab:ablation}) of the HierGNN encoder, graph-selection attention, sparse MTC and graph fusion layer. The ablation is done on our HierGNN-PGN LIR model trained on XSum. The ablation in HierGNN reasoning module significantly degrades the model, which suggests the positive contribution of the functionality in across-sentence reasoning. The scores without GSA also confirm the guidance of graph-level information is beneficial. By removing the graph fusion layer, we again observe the performance decreases, which proves the benefits of fusing the neighbor feature from multiple hopping distances. Finally, the results also confirm the superiority of the sparse MTC over the dense MTC for learning effective hierarchical structure for summarization. \begin{table}[t] \centering \scalebox{0.9}{ \begin{tabular}{lccc} \toprule & \textbf{R-1} & \textbf{R-2} & \textbf{BS} \\ \midrule BART & 49.41 & 21.70 & 19.12 \\ HierGNN-BART & \textbf{49.62} & \textbf{21.74} & \textbf{20.32} \\ \bottomrule \end{tabular}} \caption{Summarization performance on PubMed. We test BART and HierGNN-BART with the same hyperparameters settings.} \label{tab:pubmed-result} \end{table} \begin{figure}[t] \centering \includegraphics[width=0.9\linewidth]{figures/pubmed-residual.pdf} \caption{Performance gap on PubMed between HierGNN-BART with BART when summarizing articles truncated at different lengths. The gap between HierGNN and BART consistently increases with input length.} \label{fig:pubmed-res} \end{figure} \section{Discussion} \noindent \textbf{Coverage and Copy Length.} We report two metrics introduced by \newcite{see-etal-2017-getTothePoint} in Table~\ref{tab:copy_coverage_analysis}. The coverage rate measures how much information in the source article is covered by the summary, while the average copy length indicates to what extent that summarizer directly copies tokens from the source article as its output. The higher coverage rate achieved by our HierGNN indicates that it can produce summaries with much richer information in the source article. \citeauthor{balachandran-etal-2021-structsum} find that PGN tends to over-copy content from the source article thus degenerating into an extractive model, particularly with more extractive datasets such as CNN/DM. We find that the graph-selection attention significantly reduces the average copy length, indicating that it informs the decoder to stop copying by leveraging the learned structural information in the encoder and that it reduces the reliance on PGN's copying functionality \cite{see-etal-2017-getTothePoint}. We show a qualitative example for the graph-selection attention outcome in Appendix \ref{sec:gsa_analysis}. \begin{table}[] \scalebox{0.79}{ \begin{tabular}{lcccc} \toprule \textbf{CNN/DM} & \textbf{Comp.} & \textbf{2-hop} & \textbf{3-hop} & \textbf{4-hop} \\ \midrule Reference & 63.03 & 32.08 & 4.59 & 0.31 \\ \midrule BART & 79.52 & 17.81 & {2.43} & {0.24} \\ HierGNN-BART & {78.13}($\downarrow$) & {19.29}($\uparrow$) & 2.36($\downarrow$) & 0.21($\downarrow$) \\ \midrule \midrule \textbf{XSum} & \textbf{Comp.} & \textbf{2-hop} & \textbf{3-hop} & \textbf{4-hop} \\ \midrule Reference & 34.87 & 42.50 & 18.79 & 3.83 \\ \midrule BART & 28.47 & 42.51 & 23.05 & {5.98} \\ HierGNN-BART & {27.27}($\downarrow$) & {42.53}($\uparrow$) & {24.31}($\uparrow$) & 5.89($\downarrow$) \\ \bottomrule \end{tabular}} \caption{Percentages of summary sentences are synthesized by compression (information is extracted from a single source sentence) and fusion (information is combined from two or more source sentences). We use $\downarrow$ and $\uparrow$ to mark the changes between BART and HierGNN. } \label{tab:fusion-analysis} \end{table} \noindent \textbf{Layer-shared or Layer-independent Reasoning?} In Tables \ref{tab:cnndm_rouge} and \ref{tab:xsum_result}, we observe that the layer-shared reasoning (LSR) architecture for HierGNN-PGN on CNN/DM outperforms the layer-independent reasoning (LIR) architecture, with the opposite being true for XSum. We attribute this difference to the inductive bias of the base model and the essential difference between the CNN/DM and XSum datasets. PGN-based models tend to copy and degenerate the model into an extractive summarizer \cite{balachandran-etal-2021-structsum}. With a more extractive dataset like CNN/DM, a complex reasoning procedure for the PGN-based model may not be necessary; instead, learning a single hierarchical structure and selecting the sentences to be copied accordingly is sufficient. However, XSum summaries are abstractive, and the dataset emphasizes combining information from multiple document sites (see discussion by \citealt{narayan2019article}). LIR then shows its advantage by learning separate hierarchical structure in each layer. For an abstractive base model (BART), LIR consistently outperforms LSR on both CNN/DM and XSum. \noindent \textbf{Compression or Fusion?} To assess whether sentence fusion happens often, we quantify the ratio of sentence compression and sentence fusion that the model uses to generate summaries in Table~\ref{tab:fusion-analysis} \cite{lebanoff2019scoringSentenceSingletons}. In comparison to BART, HierGNN reduces the proportion of sentence compression in both CNN/DM and XSum. Furthermore, the summarization models tend to adopt sentence compression more than exists in human-written references for CNN/DM, while more sentence fusion is used for XSum. This observation reveals that mechanism learned by end-to-end for neural summarizers to produce summaries is different than that humans use. Human editors can flexibly switch between compression and fusion; the summarization models tend to adopt one of them to produce the output. \begin{figure} \captionsetup[subfigure]{labelformat=empty} \centering \begin{subfigure}{} \begin{minipage}[]{\linewidth} \includegraphics[width=1\linewidth]{figures/Intra-similarity.pdf} \\ \vspace{0.3cm} \includegraphics[width=1\linewidth]{figures/Inter-similarity.pdf} \end{minipage} \end{subfigure}% \vspace{-0.4cm} \caption{Layer-wise intra-layer diversity (top) and inter-layer diversity (bottom) for BART with 2-layer HierGNN equipped with Sparse and Dense MTC.} \label{fig:mtc-similarity-measure} \end{figure} \noindent \textbf{Effectiveness for Longer Sequence.} The performance of sequence-to-sequence models decays as the length of the input sequence increases \cite{j.2018generating-wikipedia-by-summarizaing-long-sequences} because they do not capture long-range dependencies. We hypothesize that HierGNN has a better capability in capturing such dependencies via its learned document hierarchical structure, thus enhancing the performance for long-sequence inputs. To verify this, we further conduct experiments on PubMed \cite{cohan2018ArxivPubMedBenchmark}, a long-document summarization dataset with scientific articles in the medical domain. We summarize the performance in Table \ref{tab:pubmed-result}. We notice that HierGNN improves BART by a large margin. We further evaluate the advantages of HierGNN over vanilla BART with respect to inputs of various lengths. As shown in Figure~\ref{fig:pubmed-res}, when the input is longer than 1.6K tokens, HierGNN has a positive advantage over BART. As the input length increases, the advantage of HierGNN consistently becomes larger. \begin{figure}[t] \centering \includegraphics[width=\linewidth]{figures/reasoning_chain.pdf} \caption{Top: the top-3 sentences with highest/lowest root probabilities, reference and summaries in article 23 in CNN/DM testing split. We underline the relevant contents; Bottom: visualizations for our sparse (Left) and the dense (Right) MTC layer for HierGNN-BART.} \label{fig:reasoning_case} \end{figure} \noindent \textbf{Sparse MTC or Dense MTC?} We also study the expressive ability of our adaptive sparse variant of the matrix tree computation. We design two quantitative metrics: 1) \textit{Intra-layer diversity} measures the diversity for the marginal distributions of roots and edges in each MTC layer, which is calculated by the range of the probability distribution; 2) \textit{Inter-layer diversity} measures the diversity for the marginal distributions of roots and edges between MTC layers, which is calculated by the average Jensen-Shannon (JS) Divergence between the marginal distributions of roots and edges in different layers \cite{zhang2021sparseReluAttention,correia2019adaptivelySparseTransformer}. We compare both intra-layer and inter-layer diversity for our adaptively sparse MTC and the original dense MTC \cite{koo2007structuredpredictionMatrixTreeTheorm,liu-etal-2019-SummarizationasTreeInduction,balachandran-etal-2021-structsum}. Figure~\ref{fig:mtc-similarity-measure} shows that our sparse variant of MTC has a higher diversity in both intra- (Top) and inter-layer (Bottom) metrics for CNN/DM and XSum, indicating that our sparse MTC has a more powerful expressive ability than dense MTC. We find that the sparsity of HierGNN is different across layers and datasets: 1) 99.66\% of HierGNN's predictions for XSum instances have at least one element that is sparsified to zero, while this proportion is 24.22\% for CNN/DM; 2) Almost all the sparsified elements in HierGNN's predictions for XSum are edges, while roots for CNN/DM; 3) 90.32\% of the elements of the edge distribution in the second MTC layer are sparsified in XSum, but no any sparsified element in the first layer. In CNN/DM, the proportion of sparsified elements in the first and second layer are almost identical. These observations reveal that sparse MTC can adaptively choose whether sparse out elements in root or edge distributions, thus boosting the richness of the structural information represented by MTC. We finally show a qualitative case with three sentences per article, having the highest or lowest root probabilities (see Figure~\ref{fig:reasoning_case}), and the heatmap visualization of the learned hierarchical structures from sparse and dense MTC. We observe that the highest-probability root sentences tend to be summary-worthy while also scattering in different positions of the article, and the lowest probability is irrelevant. The structure learned by Sparse MTC tends to be more diverse and can successfully sparsify out the sentence nodes with irrelevant contents, e.g., 18th and 20th sentence. \section{Conclusion} We propose HierGNN that can be used in tandem with existing generation models. The module learns the document hierarchical structure while being able to integrate information from different parts of the text as a form of reasoning. Our experiments verify that HierGNN is effective in improving the plain sequential summarization models. \section*{Limitations} The inductive bias of our HierGNN model has an assumption that the source article follows an ``inverted pyramid'' style of writing. This may pose limitations in the generalization of our model to other categories of input documents with no or a weak hierarchical structure. Future work includes understanding the limitations of HierGNN in different input domains (e.g., conversation summarization). Additionally, as other large-scale pretrained neural summarizers, our approach with an additional HierGNN encoder increases model complexity. To train our BART-based system, GPUs with at least 32GB of memory are required. Future work may focus on distilling the large HierGNN model into a much smaller size while retaining its original performance. \section*{Ethical and Other Considerations} \paragraph{Human evaluations.} Human workers were informed of the intended use of the provided assessments of summary quality and complied with the terms and conditions of the experiment, as specified by Amazon Mechanical Turk.\footnote{\url{https://www.mturk.com}} In regards to payment, workers were compensated fairly with the wage of \pounds9 hourly (higher than the maximum minimum wage in the United Kingdom) i.e.\ \pounds4.50 \, per HIT at 2 HITs per hour.\footnote{\url{https://www.gov.uk/national-minimum-wage-rates}} \paragraph{Computing time.} We first report the computing time for our most computationally intense HierGNN-BART (471 million parameters) using NVIDIA Tesla A100 with 40G RAM: with CNN/DM, the training takes around 81 GPU hours, and the inference takes 9.39 GPU hours. With XSum, the training takes around 32 GPU hours, and the inference takes 4.41 GPU hours. Additionally, training of HierGNN-PGN (32 million parameters) on CNN/DM takes 0.79 seconds per iteration using 1 NVIDIA V100 GPU card with 16GB. We estimate the inference time is 4.02 documents per second.
{ "attr-fineweb-edu": 2.070312, "attr-cc_en_topic": 0, "domain": "arxiv" }
BkiUdMw5qX_BukO5g9C0
\section{\label{sec:intro}Introduction} Records are ubiquitous in nature. We keep hearing about record breaking events in sports, in stock prices, in the summer temperature in a given city, in the amount of rainfall in a given place or in the magnitude of earthquakes in a certain geographical zone. The studies on the theory of records were initiated in the statistics literature almost 70 years back~\cite{Chandler52,FS54,Neuts67,Scho72,ABN98,Nevzorov} and since then have found numerous applications across disciplines: in sports~\cite{Gembris2002,Gembris2007,BRV2007}, in the analysis of climate data~\cite{Hoyt81,Basset92,SZ1999,RP06,Meehl2009,WK,AK,MBK19}, in fitness models of evolutionary biology~\cite{KJ,KRUG07,PSNK15,PNK16,PK16}, in condensed matter systems such as spin glasses and high temperature superconductors~\cite{Jensen06,Oliveira2005,Sibani2006} and also in models of growing networks~\cite{GL1}. Record statistics have also been studied extensively in various random walk models~\cite{MAJ08,SMreview,SS,MSW12,EKMB13,GMS15,GMS16,MOUNAIX20} with applications to avalanches and depinning of elastic lines in disordered medium~\cite{LDW09}, to the analysis of financial data~\cite{WBK11,WMS12,SS14,Chalet17} and more recently to active particles with run and tumble dynamics~\cite{MLMS20,LM20,MLMS21}. For reviews on record statistics in the physics literature, see Refs.~\cite{Wergen13,GMS17}. In its most general setting, the record problem can be formulated as follows. Consider an infinite sequence of continuous random variables $\{x_1,\,x_2,\,x_3,\ldots\}$ representing the entries of a discrete-time series--they may be the stock prices on successive days or the daily average temperature in a given city. The random variables are distributed via a joint distribution $P(x_1,\,x_2,\,x_3,\ldots)$. A record (upper) occurs at step $k$ if the entry $x_k$ exceeds all previous entries, i.e., if \begin{equation} x_k> x_i, \quad {\rm for}\,\, {\rm all}\quad i=1,2,\cdots, k-1 \, . \label{def_record} \end{equation} By convention, the first entry is always a record. The successive record values are denoted by $\{R_1,\, R_2,\, R_3,\, \cdots\}$ and is called the associated {\em record-series} (see Fig. (\ref{fig:sequence1})). Furthermore, let $\{t_1,\,t_2,\, t_3,\cdots\}$ denote the times at which the records occur--we will call it the associated {\em record-time series}. Since the first entry is always a record by convention, we have $R_1=x_1$ and $t_1=1$. \begin{figure} \centering \includegraphics[width=\linewidth]{Figures/figure1.png} \caption{An infinite discrete-time series with entries $\{x_1,x_2,\cdots\}$. A record happens at step $k$ if $x_k>x_i$ for all $i=1,2,\cdots, k-1$. The successive record values (shown by (red) filled circles) $\{R_1,\,R_2,\, R_3,\,\cdots\}$ form the \emph{record-series}. The times at which the records occur form a \emph{record-time} series $\{t_1,\,t_2,\,t_3,\, \cdots\}$. The time gap $n_k= t_{k+1}-t_k$ between the $k$-th record and the $(k+1)$-th record is called the \emph{age} of the $k$-th record. By convention, the first entry is a record, hence $R_1=x_1$ and $t_1=1$.} \label{fig:sequence1} \end{figure} Given this time-series $\{x_1,\, x_2,\, x_3,\cdots\}$ and its underlying probability distribution $P(x_1,\, x_2,\, x_3,\cdots)$, one can investigate various observables associated to the occurrences of records. Three rather natural questions are as follows: \begin{itemize} \item How many records $M_N$ occur in the first $N$ steps? For example, given the joint distribution $P(x_1,\, x_2,\, x_3,\cdots)$, what is the average number of records $\langle M_N\rangle$ within the first $N$ steps? \item How are the record values distributed? For example, let \begin{equation} q_k(R)= {\rm Prob.}[R_k=R] \label{def_qkr} \end{equation} denote the probability density that the $k$-th record (in the infinite sequence) takes value in $[R, R+dR]$. What can we say about $q_k(R)$, given the underlying joint distribution $P(x_1,\, x_2,\, x_3,\cdots)$ of the original entries? \item Suppose that a record occurs at time $t_k$. How long does it take to break this record? Let $n_k= t_{k+1}-t_{k}$ denote the time gap between the $(k+1)$-th record and the $k$-th record--this is the {\em age} of the $k$-th record. Let \begin{equation} \pi_k(n)= {\rm Prob.}[n_k=n] \label{def_pikn} \end{equation} denote the distribution of the age of the $k$-th record. Given the joint distribution of entries $P(x_1,\, x_2,\, x_3,\cdots)$, can one compute $\pi_k(n)$? \end{itemize} \vskip 0.4cm \noindent {\bf I.I.D model.} The simplest model where one can compute exactly all three observables correspond to the case when the entries $x_i$'s are {\em uncorrelated} and each is drawn independently from a continuous distribution $f(x)$. In other words, the joint distribution factorizes \begin{equation} P(x_1,\, x_2\, x_3,\cdots)= f(x_1)\, f(x_2)\, f(x_3)\cdots \label{factor.1} \end{equation} This is usually referred to as the independent and identically distributed (I.I.D) model~\cite{Chandler52,FS54,Neuts67,Scho72,ABN98,Nevzorov}. The probability density function (PDF) $f(x)$ is normalized to unity and its cumulative distribution is defined as $F(x)=\int_{-\infty}^x f(y)\, dy$. We summarize these classical results here, and for a derivation see, e.g., the review~\cite{GMS17} with citations to the original literature~\cite{Chandler52,FS54,Neuts67,Scho72,ABN98,Nevzorov}. \vskip 0.3cm \noindent (i) It turns out that the average number of records up to first $N$ steps is universal, i.e., independent of $f(x)$ and is given by the simple formula \begin{equation} \langle M_N\rangle= \sum_{k=1}^N \frac{1}{k} \xrightarrow[N\to \infty]{} \ln N\, . \label{avg_rec.1} \end{equation} Thus the mean number of records grows very slowly (logarithmically) with increasing $N$ indicating that records get increasingly harder to break. Moreover, the full distribution of $M_N$, i.e., $P(M,N)={\rm Prob.}(M_N=M)$ is also known and is universal. In the limit $N\to \infty$, the distribution $P(M,N)$ approaches a Gaussian form with mean $\ln N$ and variance $\ln N$. \vskip 0.3cm \noindent (ii) The distribution $q_k(R)$ of the value of the $k$-th record is also known explicitly \begin{equation} q_k(R)= f(R)\, \frac{\left[-\ln (1- F(R))\right]^{k-1}}{(k-1)!}\, , \quad k=1,2,\ldots\, . \label{qkr_iid.1} \end{equation} where $F(R)$ is the cumulative distribution. Unlike the distribution of $M_N$, the record value distribution $q_k(R)$ is not universal, as it depends explicitly on $f(R)$. For example, for exponentially distributed positive entries with $f(x)= e^{-x}\,\theta(x)$ (where the Heaviside step function $\theta(x)=1$ if $x>0$ and $\theta(x)=0$ if $x\le 0$), Eq. (\ref{qkr_iid.1}) gives \begin{equation} q_k(R)= e^{-R}\, \frac{R^{k-1}}{(k-1)!}\, \quad k=1,2\cdots \label{qkr_exp_iid.1} \end{equation} In this case the average record value $\langle R_k\rangle= \int_0^{\infty} R\, q_k(R)\, dR= k$ increases linearly with $k$. Similarly, the variance of $R_k$ also grows linearly with $k$. In fact, for generic $f(x)$ in Eq. (\ref{qkr_iid.1}), one can show that $q_k(R)$ does not have a limiting distribution as $k\to \infty$. \vskip 0.3cm \noindent (iii) Finally, the age distribution $\pi_k(n)$ of the $k$-th record is also known explicitly and turns out to be universal, i.e., independent of $f(x)$. For any $k\ge 1$, it reads~\cite{GMS17} \begin{equation} \label{pikn_iid.1} \pi_k(n)=\sum_{m=0}^{n-1}\binom{n-1}{m} \frac{(-1)^m}{(2+m)^k} \underset{n \to \infty}{\simeq} \frac{1}{n^2}\, \frac{\left[\ln n\right]^{(k-1)}}{(k-1)!}\, . \end{equation} Two important points to note for the I.I.D model that will be important in this paper: (a) the distribution $q_k(R)$ depends on $k$ even when $k\to \infty$, i.e., there is no limiting {\em stationary} record value distribution as $k\to \infty$, simply because the record values grow with $k$ for generic $f(x)$ and (b) similarly, the age distribution $\pi_k(n)$ in Eq. (\ref{pikn_iid.1}) also does not have a limiting stationary distribution when $k\to \infty$. \vskip 0.3cm After introducing this basic background, we now turn to the main topic of this paper. Here our goal is to use the tools of record statistics to provide a simple model for the peculiar jerky motion observed in many disordered systems, as a response to an external driving force. In such systems the dynamics of the relevant degree of freedom typically alternates between static immobile states and periods of rapid motion called avalanches. Examples of such a behaviour are quite ubiquitous, ranging from the crack propagation in solids \cite{BB11,Bonamy17,BBR19}, the earthquake of seismic faults \cite{AGGL16} or the Barkhausen noise \cite{ABBM90,ZVS97,SDM01} appearing in the magnetisation curve as a function of the applied magnetic field--see \cite{FC2008} for a review. Avalanches are well studied in the context of the depinning of an elastic interface pulled through a disordered medium by an external force~\cite{LDW09,ABBM90,Kardar98,Fisher98}. In the absence of an external force the line is pinned by the disorder. Upon increasing the force beyond a local threshold, a portion of the interface gets depinned and its center of mass moves forward, thus creating an avalanche. Interestingly, a simple one dimensional lattice model for depinning can be mapped exactly to the record model discussed above, as we show below. In this lattice model, the elastic line is replaced by a single particle (representing its center of mass) that moves on an infinite $1$-d lattice. Under this mapping, the time series $\{x_i\}$ in Fig. (\ref{fig:sequence1}) gets mapped on to the quenched pinning forces with the horizontal axis $i$ labelling the sites of a $1$-d lattice. This defines the disorder landscape $\{x_1,x_2,\dots \}$ on which the particle moves under the effect of the applied force, $f_a(i)$ (namely the force applied at the site $i$). The particle leaves the site $i$ if $f_a(i) > x_i$. Hence, the minimal force profile allowing motion alternate between plateaus and vertical jumps (see Fig.2 ). The plateaus coincide exactly with the record series $\{R_1,\,R_2,\,R_3,\dots\}$ and the vertical jumps occur exactly at the sites $\{t_1,\,t_2,\, t_3 \dots\}$ where the record occurs in Fig.1. The age of the $k$-th record $n_k$ in Fig. (\ref{fig:sequence1}) maps on to the size of the $k$-th avalanche in the depinning model. The three observables $\langle M_N\rangle$, $q_k(R)$ and $\pi_k(n)$ have precise physical meaning in the context of depinning. For example, $\langle M_N\rangle$ is the number of jumps of the applied force profile in order to displace the position of the particle by $N$ sites, given that it started at $i=1$. Similarly, $\pi_k(n)$ represents the size distribution of the $k$-th avalanche in the depinning model. In the simple I.I.D setting, the pinning forces $\{x_i\}$ in the disordered landscape are independent and identically distributed, each drawn from a continuous PDF $f(x)$. However, as we discuss in detail in Section (\ref{sec:depinning}), this simple I.I.D model, while analytically tractable, fails to reproduce the behaviors of the three observables as seen in real systems. In addition, the spatio-temporal correlations in the applied force profile $f_a(i)$ (record values) as well as in the avalanches seen in realistic systems are also not reproduced by the I.I.D model. This calls for some amendments in this basic I.I.D model. The idea is to introduce minimal changes in the model such that it retains its analytical tractability, and yet reproduces the features observed in real systems. After discussing briefly the previous attempts of modifying the simple I.I.D model in Section \ref{sec:depinning}, we introduce a new model in this paper which we call the $c$-record model. This model has the I.I.D landscape with input $f(x)$, and one single additional parameter $c>0$ associated with the applied force profile $f_a(i)$. The model is precisely defined in Section \ref{sec:record_process}. We demonstrate in this paper that the the $c$-record model is (a) exactly solvable with a rich analytical structure and (b) it reproduces qualitatively similar features for all the three observables, as well as the spatio-temporal correlations, that one observes in realistic system of depinning. Quite remarkably, it turns out that this $c$-record model was already introduced in a different context in the statistics literature by Balakrishnan et. al., where is was called the $\delta$-{\it exceedence} record model~\cite{Bala96}. The parameter $\delta=-c<0$ is negative in our context. In addition, this $\delta$-{\em exceedence} model with a negative $\delta=-c<0$ also appeared in the {\em random adaptive walk} (RAW) model to describe biological evolution on a random fitness landscape~\cite{PSNK15,PNK16,PK16}. In fact, we use the notation $c$ for $-\delta$ in our model following Ref.~\cite{PSNK15}. In the context of the RAW model, the two observables $\langle M_N\rangle$ and $q_k(R)$, but not $\pi_k(n)$, were already studied analytically in Ref.~\cite{PSNK15}. In the notation of Ref.~\cite{PSNK15} for RAW, our $\langle M_N\rangle$ corresponds to the mean walk length of an adaptive walker (for genome size $L$) $D_{\rm RAW}(L)$ in the RAW model, with $L\sim N$ for large $L$. In particular, for exponentially distributed positive fitness landscapes $f(x)= e^{-x}\theta(x)$ where $\theta(x)$ is the Heaviside step function, Ref.~\cite{PSNK15} uncovered a striking phase transition at the critical value $c=1$, across which the asymptotic growth of $\langle M_N\rangle$, with increasing $N$, changes drastically. In this paper, we re-visit this $c$-record model in the context of depinning and avalanches. We provide a thorough analytical and numerical study of all three observables $\langle M_N\rangle$, $q_k(R)$ and $\pi_k(n)$ and also the underlying correlation structure of the record values and avalanches for a general $f(x)$, including in particular the interesting case $f(x)=e^{-x}\theta(x)$. For this particular case $f(x)=e^{-x}\theta(x)$, while our results fully agree with Ref.~\cite{PSNK15} for $c\le 1$, we show that for $c>1$ the model has a much richer structure than was reported in Ref.~\cite{PSNK15}. In particular, we show that for $c>1$ and $f(x)=e^{-x}\theta(x)$, the average number of records $\langle M_N\rangle$ grows for large $N$ as a power law \begin{equation} \langle M_N\rangle \sim N^{\lambda(c)}\, , \label{avg_rec.intro} \end{equation} with an exponent $\lambda(c)$ that depends continuously on $c$ (for $c>1$) and is given by the unique positive root of the transcendental equation \begin{equation} c= -\frac{\ln (1-\lambda)}{\lambda}\, . \label{lambda_intro} \end{equation} Thus our prediction for the asymptotic growth of $\langle M_N\rangle$ in Eq. (\ref{avg_rec.intro}) for $c>1$ differs from Ref.~\cite{PSNK15} where $\langle M_N\rangle \sim O(N)$ was reported. We also show that for $c>1$, the record value distribution $q_k(R)$ approaches a stationary distribution as $k\to \infty$, which is given by a pure exponential behaviour for all $R\ge 0$ \begin{equation} q_{k\to \infty}(R)= \lambda(c)\, e^{-\lambda(c)\, R}\, . \label{qkr_intro} \end{equation} In addition, the avalanche size distribution $\pi_k(n)$ also approaches a stationary distribution as $k\to \infty$ (for $c>1$) with a power-law tail \begin{equation} \pi_{k\to \infty}(n)\sim n^{-\lambda(c)}\, \quad {\rm as}\quad n\to \infty \label{pikn_intro} \end{equation} where the exponent is the same $\lambda(c)$ as in Eq. (\ref{lambda_intro}). The rest of our paper is organised as follows. In Section \ref{sec:depinning}, we recall the mapping between the $1$-d lattice model of depinning and the record model and also discuss previously studied models that go beyond the simple I.I.D record model. In Section \ref{sec:record_process}, we define the $c$-record model precisely, and provide a detailed summary of our results. For the particular case $f(x)= e^{-x}\theta(x)$, we also provide a detailed comparison of our results to that of Ref.~\cite{PSNK15}. In Section \ref{recursion} we set up the exact recursion relations and derive the non-local differential equations for the three main observables, respectively in Subsections \ref{sec:recursive_rels_MN}, \ref{sec:recursive_rels_qR}, and \ref{sec:recursive_rels_pi}. In Section \ref{sec:exp_case}, we provide the full exact solution of all three observables in the $c$-record model and demonstrate the phase transition at $c=1$. In Section \ref{stretched} we discuss in detail the criterion for stationarity of the record value distribution $q_k(R)$ as $k\to \infty$ for a stretched exponential family of $f(x)$. Section \ref{other_dist} considers other fanilies of $f(x)$, including an exact solution for the uniform distribution over $[0,1]$ and numerical results for the Weibull class of $f(x)$. In Section \ref{sec:generalization} we show some possible generalizations of the $c$-record process. Section \ref{sec:conclusion} is dedicated to the conclusion. Finally, the derivation of the asymptotic results, rather long and tedious, are relegated to Appendices (A-G). \section{\label{sec:depinning} Depinning, avalanches and record statistics} To understand how record statistics of a discrete-time series, discussed in the introduction, can be used to study the avalanches associated with the depinning of an elastic interface, we consider a very simple one dimensional model where one replaces the extended interface by a point representing its center of mass~\cite{LDW09}. The model is defined on an infinite one dimensional lattice where the lattice sites are labelled by $i=1,\,2,\,3,\cdots$. At each site $i$, we assign a positive random variable $x_i$, drawn independently from a continuous $f(x)$, representing the local pinning force at site $i$. The time-series $\{x_i\}$ in Fig. (\ref{fig:sequence1}) then defines the quenched random landscape, with the horizontal axis $i$ labelling the lattice sites. The associated record-series $\{R_1,\,R_2,\, R_3,\cdots\}$ in Fig. (\ref{fig:sequence1}) now defines the record values of this pinning force landscape. We then launch a single particle on this quenched landscape at site $i=1$ and apply an external force $f_a$ at site $i=1$ that tries to drag the particle from $i=1$ to the neighbouring site $i=2$. The force $f_a$ is increased continuously with time at a constant rate. As long as the value of $f_a$ is less than the local pinning force $x_1$, the particle does not move from site $1$. Upon increasing the force $f_a$, when it just exceeds $x_1=R_1$, the particle suddenly jumps to site $2$. Let $f_a(1)$ denote the applied force just when the particle leaves the site $i=1$. The value of the applied force remains the same, i.e., $f_a=f_a(1)=x_1=R_1$ when the particle is moving. When the particle arrives at site $i=2$, if the current force $f_a(1)$ is less than $x_2$, i.e., $x_2$ is a record, the particle gets stuck again at site $2$ and the applied force needs to increase to exceed the pinning force $x_2$. However, if $x_2$ is not a record, the current force $f_a(1)$ is bigger than $x_2$ and the particle hops from $i=2$ to $i=3$. Essentially the particle keeps moving forward to the right till it encounters the next record value of the landscape. We then have to increase the force $f_a$ to exceed the current record value and the process continues. The number of sites the particle moves forward following a depinning (till it gets pinned again) is precisely the size of an avalanche. Let $f_a(i)$ denote the value of the applied force at site $i$ just when the particle leaves the site $i$. In this simple model, we thus see that the applied force profile $f_a(i)$ essentially has a staircase structure alternating between plateaus and vertical jumps (see Fig.~(\ref{fig:sequence2})). The plateau values of the force $f_a(i)$ are precisely the record values $\{R_1,\, R_2,\, R_3,\cdots\}$ of the underlying landscape and the jumps of $f_a(i)$ occur exactly at the sites where records occur in the quenched landscape, i.e, they coincide with record-time series $\{t_1,\, t_2,\, t_3\, \cdots\}$ in Fig. (\ref{fig:sequence1}). Consequently, the ages $n_k$ of the records coincide exactly with the sizes of successive avalanches. Thus the three observables introduced before in Fig. (\ref{fig:sequence1}) are also very relevant in the context of depinning: (i) $\langle M_N\rangle$ measures the average number of jumps the external force has to undergo in order to displace the particle from site $i=1$ to site $i=N$, (ii) $q_k(R)$ represents the distribution of the height of the $k$-th plateau of the applied force and (iii) $\pi_k(n)$ is precisely the distribution of the size of the $k$-th avalanche. \begin{figure} \centering \includegraphics[width=\linewidth]{Figures/figure2.png} \caption{The applied force profile $f_a(i)$ in the $1$-d depinning model has a staircase structure (shown by the solid black line), alternating between plateaus and vertical jumps. The plateau values coincide exactly with the record-series $\{R_1,\,R_2,\,R_3\,\cdots\}$, while the jumps occur precisely at the sites where the records occur, i.e., at sites $\{t_1,\,t_2,\,t_3,\cdots\}$ of figure (\ref{fig:sequence1}). The age of the $k$-the record $n_k$ coincides exactly with the size of the $k$-th avalanche in the depinning model.} \label{fig:sequence2} \end{figure} How realistic is this simple depinning model with an I.I.D landscape? It turns out that there are three important features in real systems that the I.I.D model fails to reproduce faithfully: \vskip 0.3cm \noindent (1) In real systems, the distribution $q_k(R)$ of the height of the $k$-th plateau in the force profile typically approaches a stationary distribution as $k\to \infty$. In contrast, in the I.I.D model, as seen from Eq. (\ref{qkr_iid.1}), there is no limiting stationary distribution for $q_k(R)$ as $k\to \infty$. \vskip 0.3cm \noindent (2) In real systems, the avalanche size distribution $\pi_k(n)$ not only approaches a stationary distribution $\pi(n)$ as $k\to \infty$, but the stationary distribution also has a pure power-law tail, $\pi(n)\sim n^{-\tau}$ as $n\to \infty$~\cite{ZVS97,BCDS02,BDHDB18} (e.g., the celebrated Gutenberg-Richter law for earthquake magnitude). In the I.I.D model, the result in Eq. (\ref{pikn_iid.1}) indicates that $\pi_k(n)$ neither approaches a stationary distribution as $k\to \infty$, nor does it have a pure power-law tail. \vskip 0.3cm \noindent (3) Real systems exhibit very interesting correlations between the record ages, $n_k$, as well as between the record values, $R_k$. For example, after a large earthquake occurs, one observes a cascade of large aftershocks followed by long periods of quiescent activity characterized by events of small size. It turns out that in the I.I.D model the record values $R_k$'s increase monotonically with $k$ and a similar trend is observed for the ages, leaving no scope for observing cascade of events of large sizes, followed by quiescent activity. \vskip 0.3cm An important improvement over the I.I.D model is represented by the well known Alessandro-Beatrice-Bertotti-Montorsi (ABBM) model introduced to study the avalanches in Barkhausen noise~\cite{ABBM90}. In the ABBM model, the I.I.D landscape is replaced by a correlated one where $x_i$ represents the position of a one dimensional random walk~\cite{ABBM90,FC2008}. In this case, the avalanche size distribution $\pi_k(n)$ coincides with the return time distribution of a Brownian motion in $1$-d, and hence $\pi_k(n)$ is stationary (independent of $k$) and does have a pure power law tail, $\pi_k(n)\sim n^{-\tau}$ with $\tau=3/2$. A similar analysis also follows from the study of record statistics for a random walk sequence~\cite{MAJ08}. However, in this model, $q_k(R)$ does not approach a stationary distribution as $k\to \infty$ (the point (1) above) and the sequence of record ages is uncorrelated at variance with what it is seen in real systems (the point (3) above). Another modification of the simple I.I.D record model is the so called linear-trend model~\cite{LDW09}. In this model, the landscape of pinning forces $\{x_i\}$ remains I.I.D, but the applied force profile $f_a(i)$ changes from Fig. (\ref{fig:sequence2}) in a simple way (see Fig. (\ref{fig:schema}) upper panel). Just after the particle is deblocked from a pinning site, one assumes that the applied force $f_a(i)$ decreases linearly, $f_a(i+1)=f_a(i)-c$ (with $c>0$) with increasing $i$ till the particle gets blocked again. Thus, as opposed to the horizontal plateaus between two successive records in the I.I.D model (as in Fig. (\ref{fig:sequence2})), the force profile in the linear-trend model has a linear behavior with a negative slope, as shown schematically in Fig. (\ref{fig:schema}) upper panel. The physical rationale behind the decrease of the applied force is the dissipation during the avalanche motion (in particular we have a linear decrease if $f_a(i)$ is the elastic force between the particle and a drive that moves at a constant slow rate). In this model, at the end of an avalanche when the particle gets stuck at a new site, the force profile $f_a(i)$ jumps again to the corresponding record value of the landscape at that site and the process continues. The sequence of the force values at the beigining of an avalanche coincides with the record series $\{R_1,\, R_2,\, R_3,\,\cdots\}$ of the landscape (see Fig. (\ref{fig:schema}) upper panel). Interestingly, it turns out that this linear-trend model was also introduced originally in the statistics literature by Ballerini and Resnick~\cite{BR85,BR87}, and has since been studied extensively with numerous applications~\cite{B99,FWK10,WFK11,FWK12}. The analysis of the linear-trend model with $c>0$ shows that the average number of records grows linearly with $N$ for large $N$, i.e., $\langle M_N\rangle\approx a(c)\, N$ where the prefactor $a(c)$ is nontrivial and nonuniversal, i.e., depends on $f(x)$~\cite{BR85,FWK10,WFK11}. The record value distribution $q_k(R)$ (or equivalently the distribution of the applied forces at the begining of the $k$-th avalanche) does approach a stationary distribution as $k\to \infty$ (as in realistic systems) that depends on the tail of $f(x)$. Similarly, the avalanche size distribution $\pi_k(n)$ also approaches a stationary distribution $\pi(n)$ as $k\to \infty$, however this stationary distribution $\pi(n)$ does not have a power-law tail (rather an exponential cut-off) for large $n$ as expected in real systems. In summary, the linear-trend model does reproduce some features of avalanches in realistic depinning systems, but not all. In this paper, we introduce a simple modification of the linear-trend model, which we call the $c$-record model. The model is defined more precisely in the next section where we also provide a summary of our main results. This model allows exact solutions for the three observables $\langle M_N\rangle$, $q_k(R)$ and $\pi_k(n)$. We show that these observables as well as the correlation structure between records and their ages reproduce the features observed in realistic systems. \begin{figure} \centering \includegraphics[width=\linewidth]{Figures/figure3.png} \caption{Schemes for modified records. Upper panel: linear trend records. Lower panel: $c$-records.} \label{fig:schema} \end{figure} \section{\label{sec:record_process} $c$-record model: the definition and a summary of main results} \noindent{\bf The model.} The $c$-record model that we study in this paper is defined as follows. Once again, we consider an infinite I.I.D landscape $\{x_1,\, x_2,\, x_3,\cdots\}$ of quenched pinning forces defined on a $1$-d lattice, where each entry is chosen independently from a continuous PDF $f(x)$. Let $\{R_1,\, R_2,\, R_3,\cdots \}$ denote the record-series of this landscape. As in the simple I.I.D model, the particle starts from site $i=1$ and it can leave the this site when the local applied force $f_a$ exceeds the pinnning force $x_1$. The only difference in our model from that of the linear-trend model discussed in the previous section, is how the force profile $f_a(i)$ behaves between two avalanches. In the linear-trend model, the force profile following a record decreases linearly till the next record (as in Fig. (\ref{fig:schema}) upper panel). In contrast, in the $c$-record model, we assume that following a record, the force decreases by $c$ only in the first step, but after that it stays flat till it encounters the next record (see Fig. (\ref{fig:schema}) lower panel). More precisely, between two successive record values $R_k$ at $t_k$ and $R_{k+1}$ at $t_{k+1}$ we now have \begin{equation} f_a(i)= R_k-c,\, {\rm for}\,\, i=t_k+1, t_k+2,\cdots, t_{k+1}-1\, . \label{fa1.c_record} \end{equation} At $i=t_{k+1}$, the force $f_a(i)$ undergoes a jump to the associated record value $R_{k+1}$ of the landscape. The physical rationale behind this new model is that the dissipation in the force profile that occurs just after depinning is short-ranged in time, e.g., occurs only during the first hopping but stays constant afterwards. The formation of records in this $c$-record depinning model can be alternatively phrased in the language of standard time-series discussed in the introduction. Consider, as before, an infinite I.I.D sequence of entries $\{x_1,\,x_2,\,x_3\,\ldots\}$ each drawn from $f(x)$. Here a record series $\{R_1,\,R_2,\, R_3,\ldots\}$ is formed recursively, in the presence of a single parameter $c>0$, as follows. If a record occurs at some step with record value $R$, a subsequent entry will be a record only if its value exceeds $(R-c)\,\theta(R-c)$. Clearly, for $c=0$, this is the standard record model discussed in the introduction. For $c>0$, this is precisely the $\delta$-{\em exceedence} model introduced by Balakrishnan et. al.~\cite{Bala96} with $\delta=-c<0$. As already discussed in the introduction, this model with $c>0$ was also studied in Ref.~\cite{PSNK15} as the random adaptive walk (RAW) model of biological evolution in a random fitness landscape. In this paper, we have studied the three observables $\langle M_N\rangle $, $q_k(R)$ and $\pi_k(n)$ in the $c$-record model, both analytically and numerically, for a class of $f(x)$'s. From the motivation of the depinning pheomenon, our main interest is to determine if, when $k \to \infty$, the $c$-record process becomes `stationary' or not. We say that it is stationary if the distributions $q_k(R)$ and $\pi_k(n)$ have a well defined limit as $k \to \infty$, namely: \begin{flalign} \label{eqn:stat_condition} & \lim_{k \to \infty} q_k(R) = q(R) \\ & \lim_{k \to \infty} \pi_k(n) = \pi(n) \nonumber \end{flalign} Below we summarize our main results. \vskip 0.3cm \noindent{\bf The summary of main results.} We find that for $c>0$, the behavior of all three observables depend explicitly on $f(x)$ and $c$. For simplicity, we consider positive random variables, i.e., $f(x)$ with a positive support. In particular, three cases can be distinguished: \begin{enumerate} \item If $f(x)$ decays slower than any exponential function as $x\to \infty$, then the record process doesn't reach a stationary limit for any $c>0$, i.e., $q_k(R)$ and $\pi_k(n)$ do not have a $k$-independent limiting distribution as $k\to \infty$. The average number of records $\langle M_N\rangle$ grows logarithmically with $N$ as in the standard record problem $c=0$ (but with a different subleading constant): \begin{equation} \label{eqn:MN_logN_slower} \langle M \rangle_N = \ln N + O(1) \end{equation} \item If $f(x)$ decays faster than any exponential function as $x\to \infty$ (this includes bounded distribution), then the average number of records grows linearly with $N$ at the leading order: \begin{equation} \label{eqn:MN_N_faster} \langle M \rangle_N = A_1(c) N + O(N) \end{equation} where the amptitude $A_1(c)$ can be analytically determined in some cases, e.g., for uniform $f(x)$ over the interval $[0,1]$ (see Eq. (\ref{eqn:exp_MN_uniform})). In this case, the record process also reaches a stationary limit for all $c>0$. We compute the stationary record value distribution $q(R)$ and the stationary age distribution $\pi(n)$, analytically and numerically, in several examples of $f(R)$. In particular, for distributions with a finite support, we show that $\pi(n)$ decays exponentially with $n$ for large $n$. Finally, for distributions with unbounded support and $f(x) \to e^{-x^\gamma}$ as $x\to \infty$ with $\gamma>1$, we show that $\pi(n)$ still has a power-law tail with an exponent larger than $2$. \end{enumerate} The case $f(x)=\exp(-x)$ that separates these two behaviors turns out to be marginal, with a striking phase transition at $c=1$. While for $0\le c\le 1$, there is no limiting stationary distribution for $q_k(R)$ and $\pi_k(n)$ as $k\to \infty$, we show that for $c>1$, they do approach stationary distributions. In particular, we find that for $c>1$ \begin{eqnarray} \label{eqn:exp_qR} \label{qR_pi_stat_exp} & q(R) = \lambda(c) e^{-\lambda(c)\, R} \\ & \nonumber \\ & \pi(n\to \infty) = \frac{\lambda(c)(1-\lambda(c))}{n^{1+\lambda(c)}} \end{eqnarray} where $\lambda(c)$ is the unique positive root of the transcendental equation: \begin{equation} \label{eqn:c_vs_lmbd} c = -\frac{\ln(1-\lambda)}{\lambda} \end{equation} The average number of records $\langle M_N\rangle$ is computed exactly in Eq. (\ref{eqn:exp_MN}) and its large $N$ behavior also exhibits a phase transition at $c=1$. we show that \begin{equation} \label{eqn:recordsnumber} \langle M \rangle_N = \begin{cases} \frac{1}{1-c}\ln N - \mu(c) +O(\frac{1}{N})& 0\le c < 1 \\ \\ \ln^2 N + O(\ln N) & c=1 \\ \\ A_0(c) N^{\lambda(c)} + \frac{1}{1-c} \ln N +O(1) & c > 1 \end{cases} \end{equation} where the exponent $\lambda(c)$ depends continuously on $c$ and is given in Eq. (\ref{eqn:c_vs_lmbd}). Note that $\lambda(c)$ increases monotonically with increasing $c\ge 1$: $\lambda(c)\to 0$ as $c\to 1^{+}$, while $\lambda(c)\to 1$ only when $c\to \infty$. The explicit expression of the constant $A_0(c)$ is given in equation (\ref{eqn:A0_solution}). The constant $\mu(c)$ can be evaluated using the method explained in appendix (\ref{app:exp_case_MN_asymp}). We also provide careful numerical checks for all our analytical formulae. As mentioned in the introduction, precisely this marginal case $f(x)=e^{-x}$ was also studied in Ref.~\cite{PSNK15} in the context of RAW model in evolutionary biology and indeed, this striking phase transition at $c=1$ was already noticed there. In Ref.~\cite{PSNK15}, two of the three observables namely $\langle M_N\rangle$ and $q_k(R)$ (but not $\pi_k(n)$) were studied in detail, but using different notations, language and method. In order to compare our results to those of Ref. \cite{PSNK15}, it is useful to provide a dictionary of notations for the reader. In the limit of large genome size $L$, the RAW model studied in \cite{PSNK15} becomes equivalent, in the ensemble sense, to our $c$-record model with $N\sim L$. In this limit, our average number of records $\langle M_N\rangle$ for large $N$ then translates to the mean length of the adaptive walk $D_{\rm RAW}(L\sim N)$ in \cite{PSNK15} for large $L$. Furthermore, our record value distribution $q_k(R)$ is precisely $Q_{l=k}(y=R,L\to \infty)$, the probability for the adaptive walker to take $l=k$ steps to arrive at a local fitness $c(l-L)+y$ in the limit of large genome size $L\to \infty$. Hence, to summarize, for large $L$ (or equivalently for large $N$ in our notation) \begin{eqnarray} & & N\sim L\,; \quad \langle M_N\rangle\equiv D_{\rm RAW}(L\sim N)\,; \nonumber \\ & & q_k(R)\equiv Q_{l=k}(y=R,L\to \infty)\, . \label{dictionary} \end{eqnarray} With the precise translation in Eq. (\ref{dictionary}) we can now compare our results with those of Ref.~\cite{PSNK15}. We start with the asymptotic large $N$ behavior of the average number of records $\langle M_N\rangle$. For $c\le 1$, our leading order large $N$ results for $\langle M_N\rangle$ in Eq. (\ref{eqn:recordsnumber}) agree with that of Ref.~\cite{PSNK15}, though the subleading constant $-\mu(c)$ for $0\le c<1$ was not computed in \cite{PSNK15}. However, for $c>1$, our result in Eq. (\ref{eqn:recordsnumber}) is much richer (characterized by a power law growth of $\langle M_N\rangle$ with an exponent depending continuously on the parameter $c$) and different from that of \cite{PSNK15} where $\langle M_N\rangle \sim O(N)$ was reported (see Eq. (15) of \cite{PSNK15}). We find that the growth of $\langle M_N\rangle$ with increasing $N$ becomes linear only when $c\to 1$, since only in that limit $\lambda(c)\to 1$ in Eq. (\ref{eqn:recordsnumber}). Next we turn to $q_k(R)$ for $c>1$. In this case, an exact summation formula for $q_k(R)$ was derived for all $c$ in \cite{PSNK15} (see their Eq. (8)). In the limit $k\to \infty$, this sum is convergent only for $c>1$. In Ref. \cite{PSNK15}, this sum was not analysed in the limit $k\to \infty$, since they didn't need it in their problem. In fact, it can be checked that their Eq. (8), in the limit $k\to \infty$ and for $c>1$, satisfies the fixed point differential equation, $q'(R)= q(R+c)-q(R)$ (see later in Eq. (\ref{eqn:exp_qR_stationary_diff_eq})), whose solution is precisely a single pure exponential $q(R)= \lambda(c)\, e^{-\lambda(c)\, R}$, with $\lambda(c)$ given by the positive root of Eq. (\ref{eqn:c_vs_lmbd}). This is a rather nontrivial confirmation that two different methods lead to the same solution. Let us finally conclude this summary section by mentioning that we have also studied the correlations between record values in the stationary state when it exists. In that case, the sequence of records display remarkable clustering properties (see Fig.~\ref{fig:exp_records_sequence} upper panel)): after a large record value, we observe other large record values followed by a swarm of smaller record values. In the exponential case $f(x)=e^{-x}$, we show that the correlation between record values $\langle R_k R_{k+\tau}\rangle$ decreases exponentially with increasing $\tau$. The value of $c$ controls the correlation length: when $c \to 1$ from above, the correlation lengths diverges as $\sim 1/(c-1)^2$ reminiscent of the critical phenomena (see Fig.~\ref{fig:corr_exp})). We also study the clustering property of record ages that correspond to avalanches (see Fig.~\ref{fig:exp_records_sequence} lower panel)). This property is qualitatively similar to what is observed in seismic catalogs. \begin{figure} \centering \includegraphics[width=\linewidth]{Figures/figure4.pdf} \caption{Typical sequence of $c$-record values (upper panel) and their ages (lower panel) computed from a series of exponential number and using $c=1.5$. Note that large record values and large ages are organized in well defined clusters.} \label{fig:exp_records_sequence} \end{figure} \vskip 0.3cm \vskip 0.3cm \section{Setting up the recursion relations for the three observables} \label{recursion} In this section, we show how to set up the basic recursion relations to compute the three observables in the $c$-record model: (i) the mean number of records $\langle M_N\rangle$ up to $N$ steps, (ii) the distribution $q_k(R)$ of the value of the $k$-th record and (iii) the distribution $\pi_k(n)$ of the time interval $n_k$ between the $k$-th and the $(k+1)$-th record. For simplicity, we will assume throughout that we have an infinite series of I.I.D entries $\{x_1,\, x_2,\,x_3,\,\cdots\}$, each drawn from a continuous $f(x)$ which has a positive support, i.e., the entries are non-negative random variables. \subsection{\label{sec:recursive_rels_MN} The Number of Records } In this subsection we derive the exact recursion relation that we have used to compute $\langle M \rangle_N$. It turns out that the main object that we need for this computation is the joint probability density $P_N(M,R_M=R)$ that in the first block of size $N$ of the infinite series of random entries there are exactly $M$ records and that the last record has value $R$. This quantity $P_N(M,R)$ satisfies a closed recursion relation \begin{multline} \label{eqn:recurrence_PMNR} P_N(M,R) = P_{N-1}(M,R) \theta(R-c) \int_{0} ^{R-c} f(x) dx + \\ +f(R) \int_0^{R+c} P_{N-1}(M-1,R') dR' \end{multline} with $\theta(x)$ the Heaviside step function. It is straightforward to understand Eq. (\ref{eqn:recurrence_PMNR}): the first term on the right hand side (r.h.s) accounts for the event when the $N$-th entry is not a record, while the second term corresponds to the event when the $N$-th entry is a record. In the first case, given that the last record has value $R$, the value of the $N$-th entry $x_N$ must be less than $R-c$ which happens with probability $\int_{0} ^{R-c} f(x) dx$. In the second case, the $N$-th entry is a record with value $R$, hence the previous record $R'$ must be less than $(R+c)$ explaining the second term on the r.h.s in Eq. (\ref{eqn:recurrence_PMNR}). The recursion relation (\ref{eqn:recurrence_PMNR}) starts from the initial condition \begin{equation} P_1(M,R)= \delta_{M,1}\, f(R)\, , \label{in_cond.1} \end{equation} since the first entry, by convention, is a record. The recursion relation (\ref{eqn:recurrence_PMNR}), starting from (\ref{in_cond.1}), is nontrivial to solve since it relates $P_N(M,R)$ at $R$ to its integral up to $R+c$ at step $N-1$, making it a non-local integral equation for any $c>0$. In order to compute $\langle M\rangle_N$, namely the average number of records as a function of $N$, we introduce $\tilde{Q}(R,z,s)$ as the double generating function of $P_N(M,R)$: \begin{equation} \label{eqn:q_tilde_definition} \tilde{Q}(R,z,s)=\sum_{N=1}^\infty \sum_{M=1}^\infty P_N(M,R) z^N s^M \end{equation} Using equation (\ref{eqn:recurrence_PMNR}) with the initial condition (\ref{in_cond.1}), we obtain the following equation for (\ref{eqn:q_tilde_definition}): \begin{multline} \label{eqn:recurrence_QR} \tilde{Q}(R) \left[1-zF(R-c) \theta(R-c) \right] = \\ zs f(R) \left[1+ \int_0^{R+c} \tilde{Q}(R') dR'\right] \end{multline} For simplicity, we omitted the arguments $z$ and $s$ of $\tilde Q$. Evidently, Eq. (\ref{eqn:recurrence_QR}) is also non-local with respect to $R$ for any $c>0$. Even though the full joint distribution $P_N(M,R)$ of the number of records $M$ and the last record value $R$ is of interest as it contains several interesting informations, we will focus on the simplest quantity, namely the mean number of records $\langle M \rangle_N$. To compute this, we need to extract $P_N(M)$, the distribution of the number of records $M$ up to $N$ steps. This is obtained by integrating over the value $R$ of the last record up to $N$ steps \begin{equation} \label{eqn:PNM_definition} P_N(M)=\int_0^\infty P_N(M,R) dR \, . \end{equation} The double generating function of $P_N(M)$ is then related to $\tilde{Q}(R,s,z)$ simply by \begin{equation} \label{eqn:PNM_definition_gf} \sum_{N=1}^\infty \sum_{M=1}^\infty P_N(M) z^N s^M = \int_0^\infty \tilde{Q}(R,z,s) dR \, \end{equation} where $\tilde{Q}(R,z,s)$ is the solution of the integral equation (\ref{eqn:recurrence_QR}). Using Eq. (\ref{eqn:PNM_definition_gf}), the average number of records can be obtained from the relation \begin{eqnarray} \label{eqn:MN_by_derivatives} \langle M \rangle_N & = & \sum_{M=1}^{\infty} M\, P_N(M) \nonumber \\ & = &\frac{1}{N!}\, \partial^N_z \partial_s \int_0^\infty \tilde{Q}(R,z,s) dR \, |_{z=0,s=1}\, . \end{eqnarray} Solving the recursion relation (\ref{eqn:recurrence_QR}) for arbitrary $f(x)$ seems hard. However, we were able to compute $\tilde{Q}$ and from it $\langle M_N\rangle$ explicitly for two special cases: the exponential distribution $f(x)= e^{-x}\,\theta(x)$ and the uniform distribution $f(x)=\mathbb{I}_{[0,1]}(x)$ with $\mathbb{I}_{[a,b]}(x)$ denoting the indicator function which is $1$ if $x$ belongs to the interval $[a,b]$ and zero otherwise. \subsection{\label{sec:recursive_rels_qR} The record value distribution } In this subsection we derive an exact recursion relation for the $k$-th record value distribution $q_k(R)$ in the $c$-record model. We start from the joint conditional probability $q(R,n|R')$ that, given a record with value $R'$ has occurred at some instant in the infinite sequence, the next record has value $R$ and the age of the current record with value $R'$ is $n$, i.e, there are $n$ steps separating the current record and the next record. This conditional probability can be very simply computed \begin{equation} \label{eqn:Rn_cond_prop} q(R,n|R')=\begin{cases} f(R)\, \delta_{n,1}\, & R'<c \\ & \\ f(R)\, \left[F(R'-c)\right]^{n-1} \, \theta(R-R'+c)\, & R'>c \end{cases} \end{equation} where we recall that $F(R)= \int_{0}^{R} f(y)\, dy$ is the cumulative distribution of each entry of the underlying time-series. Eq. (\ref{eqn:Rn_cond_prop}) is easy to interpret: when the previous record $R'$ is smaller than $c$, the very next entry with any positive value $R>0$ will be a record, indicating that $n=1$ is the only possible value. In contrast, when $R'>c$, assuming that there are exactly $n-1$ entries separating two successive records with values $R'$ and $R$ (with $R>R'-c$), each of these intermediate entries must be less that $R'-c$ (in order that none of them is a record), explaining the factor $\left[F(R'-c)\right]^{n-1}$. The probability that the $n$-th entry is a record is simply $f(R)\, \theta(R-R'+c)$. By summing over all possible age values $n \geq 1$ we obtain the distribution of the next record value conditioned on the previous one: \begin{eqnarray} \label{eqn:R_cond_prop} q(R|R') &= & \sum_{n=1}^{\infty} q(R,n|R') = f(R)\, \theta(c-R') \nonumber \\ &+& \frac{f(R)}{1-F(R'-c)}\, \theta(R-R'+c)\, \theta(R'-c) \nonumber \\ \end{eqnarray} Equations (\ref{eqn:Rn_cond_prop}) and (\ref{eqn:R_cond_prop}) are particularly useful when simulating directly the $c$-record process (see appendix \ref{app:simulation}). A recursion relation for $q_k(R)$ can then be set up using this conditional probbaility $q(R|R')$ as \begin{eqnarray} \label{eqn:recurrence_qRk} q_{k+1}(R)& = & \int_0^{\infty} q_k(R')\, q(R|R')\, dR' = f(R) \int_0^{c} q_k(R') dR' \nonumber \\ &+& f(R) \int_c^{R+c} \frac{q_k(R')}{1-F(R'-c)} dR'\, , \nonumber \\ \end{eqnarray} with the initial condition $q_1(R)=f(R)$. This relation can be easily understood as follows. The first term takes into account the event when the record $R'$ is in the range $0\le R'\le c$. In this case, the entry immediately after this record is a record with probability $f(R)$. The second term on the r.h.s in Eq. (\ref{eqn:recurrence_qRk}) accounts for the contributions coming from the case when $c\le R'\le R+c$. In this case, one needs to use $q(R|R')$ from Eq. (\ref{eqn:R_cond_prop}) and integrate over all allowed values of $R'$. This relation (\ref{eqn:recurrence_qRk}) was also derived in \cite{PSNK15} using different notations and for different observables, in the context of the RAW model of evolutionary biology. Furthermore, for $c=0$, this relation was used in Ref.~\cite{RP06} for studying the global temperature records. When the stationary limit of $q_k(R)$ exists, $q(R)=\lim_{k \to \infty} q_k(R)$ has to satisfy the following fixed point equation \begin{eqnarray} \label{eqn:qR_sc} q(R) & = & f(R)\, \int_0^c q(R') dR' \nonumber \\ & + & f(R)\, \int_c^{R+c} \frac{q(R')}{1-F(R'-c)}\, dR' \, . \end{eqnarray} This is also a non-local differential equation for any $c>0$ and is hard to solve for general $f(x)$. Again, we will see later that for $f(x)=e^{-x}\theta(x)$ (with $c>1$) and for the uniform distribution, it is possible to obtain explicitly the fixed point stationary solution of Eq. (\ref{eqn:qR_sc}). \subsection{\label{sec:recursive_rels_pi} The age distribution of records } In this subsection, we derive a recursion relation for $\pi_k(n)={\rm Prob.}(n_k=n)$ denoting the distribution of the age $n_k$ of the $k$-th record, in an infinite I.I.D series. The distribution $\pi_k(n)$ can be obtained from the previously defined conditional probability $q(R,n|R')$ (\ref{eqn:Rn_cond_prop}) and the record value distribution $q_k(R)$ as follows. \begin{eqnarray} \label{eqn:pi_k_definition} \pi_k(n) &= &\int_0^\infty \int_0^\infty q(R,n|R')\, q_k(R')\, dR\, dR' \nonumber \\ & = & \delta_{n,1}\, \int_0^c q_k(R')\, dR' \nonumber \\ &+ & \int_c^\infty (1-F(R'-c)) F^{n-1}(R'-c) q_k(R') dR' \nonumber \\ \end{eqnarray} where we used $F(R'-c)=0$ if $R'-c \leq 0$. The stationary distribution $\pi(n)$, when it exists, is obtained by taking the limit $\lim_{k \to \infty} \pi_k(n)$ and using $q(R)=\lim_{k \to \infty} q_k(R)$: \begin{eqnarray} \label{eqn:pi_stationary} \pi(n) & = & \delta_{n,1}\, \int_0^c q(R')\, dR' \nonumber \\ & +& \int_c^\infty (1-F(R'-c)) F^{n-1}(R'-c)\, q(R')\, dR' \, . \nonumber \\ \end{eqnarray} Thus, knowing the fixed point distribution $q(R)$ of the record value when it exists, one can compute the stationary age distribution using Eq. (\ref{eqn:pi_stationary}). Later, we will compute $\pi(n)$ explicitly for the two solvable cases, namely the exponential distribution $f(x)=e^{-x}\theta(x)$ with $c>1$ and the case of the uniform distribution over $[0,1]$. \section{\label{sec:exp_case}Exponential case} In this section we study in detail the exponential case with $f(x)=\exp(-x)\,\theta(x)$. \subsection{\label{sec:exp_case_1}Number of records} For the exponential distribution equation (\ref{eqn:recurrence_QR}) reduces to: \begin{multline} \label{eqn:QR_exp_case} \tilde{Q}(R) \left[1-z(1-e^{-(R-c)}) \theta(R-c) \right] = \\ z\,s\, e^{-R}\, \left[1+ \int_0^{R+c} \tilde{Q}(R') dR'\right] \end{multline} The non-locality manifest in Eq. (\ref{eqn:recurrence_QR}) makes it hard to find its general solution. Remarkably, for the exponential $f(x)$, this is possible. Performing a rather involved calculation, reported in appendix (\ref{app:exp_case_PNM}), we computed the generating function for $P_N(M)$ defined in Eq. (\ref{eqn:PNM_definition_gf}). Its explicit form is given in Eq. (\ref{eqn:a0_exp_first_form}) from which we extracted the average number of records $\langle M \rangle_N$, obtaining \begin{equation} \label{eqn:exp_MN} \langle M \rangle_N = N! \sum_{m=0}^{N-1} (-1)^m \frac{\prod_{k=1}^m (k-1+e^{-kc})}{(m+1)!^2 (N-m-1)!} \end{equation} This result for $\langle M_N\rangle$ is exact for all $N\ge 1$. For example, the first few values of $N$ yield \begin{flalign} \label{eqn:exp_MN_some_values} & \langle M \rangle_1 = 1 \\ & \langle M \rangle_2 = 2 \left(1-\frac{1}{4}e^{-c}\right) \\ & \langle M \rangle_3 = 3 \left[1-\frac{4}{9}e^{-c} + \frac{1}{18} e^{-3c}\right] \end{flalign} A non trivial check of equation (\ref{eqn:exp_MN}) is for $c=0$. By plugging $c=0$ the term $\prod_{k=1}^m (k-1+e^{-kc})$ simplifies to $k!$. Thus equation (\ref{eqn:exp_MN}) becomes: \begin{equation} \label{eqn:MN_at_c_0} \langle M \rangle_N = \sum_{m=0}^{N-1} \binom{N}{m+1} (-1)^m \frac{1}{(m+1)} \end{equation} The above expression can be shown to coincide with the well known result $\langle M \rangle = 1 + 1/2+1/3+\dots+1/N$. This can be done by considering the difference $\langle M \rangle_N - \langle M \rangle_{N-1}$. While the exact formula (\ref{eqn:exp_MN}) is useful to compute the result for moderate values of $N$, the asymptotic behavior of $\langle M \rangle_N$ for large $N$ is difficult to extract from it. Indeed, to extract the large $N$ behvaior, we use a different approach reported in appendix (\ref{app:exp_case_MN_asymp}) which leads to our main result in Eq. (\ref{eqn:recordsnumber}) and shows the existence of a phase transition at $c=1$. The presence of the phase transition at $c=1$ in the asymptotic behavior of $\langle M_N\rangle$ in Eq. (\ref{eqn:recordsnumber}) is also related to the fact that both $q_k(R)$ and $\pi_k(n)$ has stationary limiting distributions as $k\to \infty$, only for $c>1$ as shown in the next subsection \ref{sec:exp_case_2}. To check the validity of our analytical prediction we also performed direct numerical simulations using $N=10^6$ random variables and averaging over $10^4$ realizations. The results are shown in Fig. (\ref{fig:M_N_exp_supercrit}) for $c=1.5$ and in Fig. (\ref{fig:M_N_exp_subcrit}) for $c=0.5$, as well as for the marginal case $c=1$. \begin{figure} \includegraphics[width=\linewidth]{Figures/figure5.pdf} \caption{ Exponential case: mean number of $c$-records for $c=1.5$. Blue symbols correspond to numerical data. Black dashed line corresponds to the analytical prediction $A_0(c)N^{\lambda(c)}$. The values of the constants are $A_0(1.5)=3.4376$, $\lambda(1.5)=0.5828$ (obtained from equations (\ref{eqn:c_vs_lmbd}) and (\ref{eqn:A0_solution}) using Mathematica). Inset: the subleading behavior $\langle \Delta M \rangle_N =A_0(c)N^{\lambda(c)}-\langle M \rangle_N$. Black dashed line corresponds to $-\frac{1}{1-c} \ln N$. } \label{fig:M_N_exp_supercrit} \end{figure} \begin{figure} \includegraphics[width=\linewidth]{Figures/figure6.pdf} \caption{ Exponential case: mean number of $c$-records for $c=0.5$ (red circles) and for $c=1$ (orange circles). Circles are numerical data. Black dashed lines correspond to the analytical predictions of Eq.(\ref{eqn:recordsnumber}). For $c=0.5$ we find $\mu(c)=1.42878\ldots$ computed using Mathematica by the method of appendix (\ref{app:exp_case_MN_asymp}).} \label{fig:M_N_exp_subcrit} \end{figure} \subsection{\label{sec:exp_case_2}Stationary record and age statistics} The solution of Eq. (\ref{eqn:recurrence_qRk}) for a given $k$ has been studied by Park et al. in \cite{PSNK15} for the case $f(x)=\exp(-x)\, \theta(x)$. Here we focus instead on the stationary limit and we seek the solution of Eq. (\ref{eqn:qR_sc}). By taking a derivative with respect to (w.r.t.) $R$ of equation (\ref{eqn:qR_sc}) we get \begin{equation} \label{eqn:qR_sc_derivative} q'(R)=\frac{f(R)}{1-F(R)} q(R+c) + \frac{f'(R)}{f(R)} q(R) \end{equation} For the exponential case, using $f(R)=\exp(-R)$ and $1-F(R)=\exp(-R)$, equation (\ref{eqn:qR_sc_derivative}) reduces to: \begin{equation} \label{eqn:exp_qR_stationary_diff_eq} q'(R)=q(R+c)-q(R) \end{equation} To solve this equation we use the ansatz: \begin{equation} \label{eqn:exp_qR.0} q(R)=\lambda\, e^{-\lambda R} \end{equation} Equation (\ref{eqn:exp_qR_stationary_diff_eq}) becomes \begin{equation} \label{eqn:exp_c_lmbd} 1-\lambda=e^{-\lambda c} \end{equation} A positive solution for $\lambda=\lambda(c)$ exists only for $c>1$. This means that for $c \leq 1$ there is no stationary regime. This conclusion was already pointed out in \cite{PSNK15} based on scaling arguments and numerical simulations. Here we perform an explicit calculation which is also confirmed by the independent calculation of the large $N$ expansion of $\langle M \rangle_N$ report in the appendix (\ref{app:exp_case_MN_asymp}). The stationary average record value is $1/\lambda(c)$ (see inset of Fig. (\ref{fig:gap_distr_exp})) and has the following limiting behaviors: \begin{equation} \label{eqn:avg_R_exp} \langle R \rangle(c) = 1/\lambda(c) = \begin{cases} 1/2(c-1) & c \to 1^+ \\ 1 & c \to \infty \end{cases} \end{equation} The divergence as $c \to 1^+$ is one of the fingerprints of the absence of a stationary state for $c \leq 1$. We now turn to the age distribution $\pi_k(n)$. As in the case of $q_k(R)$, a stationary distribution $\pi(n)$ in the limit $k\to \infty$ exists only for $c>1$. For $c\le 1$, $\pi_k(n)$ depends explicitly on $k$ even in the $k\to \infty$ limit. To derive the stationary age distribution $\pi(n)$ for $c>1$, we substitute $q(R)$ from Eq. (\ref{eqn:exp_qR.0}) into Eq. (\ref{eqn:pi_stationary}). Upon carrying out the integration explicitly, we obtain the following exact expression of $\pi(n)$ in the stationary state valid for all $n\ge 1$ and $c>1$ \begin{equation} \label{eqn:pi_exp} \pi(n) = \lambda \delta_{n,1} + (1-\lambda) \frac{\lambda \Gamma(n)\Gamma(\lambda+1)}{\Gamma(n+\lambda+1)}\, , \end{equation} where $\Gamma(z)=\int_0^{\infty} e^{-x}\, x^{z-1}\, dx$ is the standard Gamma function. The stationary age (or avalanche size) distribution $\pi(n)$ is then the sum of two contributions: a delta peak at $n=1$ and the Yule-Simon form for $n \geq 1$. Using the asymptotic behavior, $\Gamma(n)\Gamma(\alpha)/\Gamma(n+\alpha) \to 1/n^\alpha$ for large $n$, we unveil a beautiful power law behavior of the stationary age distribution (see Fig.~(\ref{fig:gap_distr_exp})) \begin{eqnarray} \label{eqn:pi_exp_limit_inf} \pi(n \to \infty) = \frac{\lambda(1-\lambda)}{n^{1+\lambda}} \end{eqnarray} where $\lambda\equiv \lambda(c)$ is given by the positive root of Eq. (\ref{eqn:exp_c_lmbd}) for $c>1$. Note that the power law exponent $(1+\lambda(c))$ continuously varies with $c$. When $c \to \infty$, the exponent $\lambda(c)\to 1$, thus strengthening the amplitude of the delta peak and in addition, for large $n$, the power law tail behaves as $\pi(n)\sim 1/n^2$. We now show that the power law decay of $\pi(n)\sim n^{-(1+\lambda(c))}$ for large $n$ in Eq. (\ref{eqn:pi_exp_limit_inf}) for $c>1$ is completely consistent with the result $\langle M_N\rangle\sim N^{\lambda(c)}$ for large $N$ in the third line of Eq. (\ref{eqn:recordsnumber}). To see this, we use a simple scaling argument. Given $\pi(n)\sim n^{-(1+\lambda(c))}$ for large $n$ and the length of the series $N$, the mean inter-record distance (mean age of a record) scales, for large $N$, as \begin{equation} \langle n\rangle= \sum_{n=1}^N n\, \pi(n)\sim N^{1-\lambda(c)}\, . \label{mean_age.1} \end{equation} Consequently, the mean number of records, which is identical to the mean number of inter-record intervals, up to step $N$ scales for large $N$ as \begin{equation} \langle M_N\rangle \sim \frac{N}{\langle n\rangle}\sim N^{\lambda(c)}\, , \label{meanr.1} \end{equation} which reproduces the third line of Eq. (\ref{eqn:recordsnumber}) for $c>1$. In appendix (A), this result is proved more rigorously. \begin{figure} \includegraphics[width=\linewidth]{Figures/figure7.pdf} \caption{ Exponential case: age statistics of the stationary $c$-records process for $c=1.5$ (blue circles) and for $c=1.1$ (green circles). Numerical data are averaged over $10^4$ realizations. Black dashed line corresponds to the analytical predictions of Eq.(\ref{eqn:pi_exp}). The inset shows $\lambda(c)$ as a function of $c$. } \label{fig:gap_distr_exp} \end{figure} \begin{figure} \includegraphics[width=\linewidth]{Figures/figure8.pdf} \caption{Upper panel: record correlation function numerically computed as in equation (\ref{eqn:record_corr_func}). Lower panel: correlation length $\xi(c)$ as a function of $c-1$. We fitted an exponent $\nu \approx 2$. } \label{fig:corr_exp} \end{figure} \subsection{\label{sec:exp_case_3}Record correlations} One of the most interesting features of the $c$-records statistics is shown in Fig. (\ref{fig:exp_records_sequence}). We focus on a sequence of records when the stationary regime is already reached. The sequence tends to cluster in patterns where record values are high, followed by events of smaller value. The corresponding sequence of ages shows a similar behavior: when the record values are high we observe large ages, while the age is of the order $1$ when the records are small.\\ To understand this behavior we can first observe that in the classical record case $c=0$, even if there is no stationary regime, all the records values are strongly correlated. As a fingerprint of this, the sequence of record values is strictly increasing. For $c>0$ this is not always the case. For example we know that the $c$-record process of an exponential series with $c>1$ has a stationary state and the correlations have a finite range (which corresponds to the typical size of the correlated patterns in Fig.~(\ref{fig:exp_records_sequence})). To characterize this behaviour we focus on the record values and study the following correlation function: \begin{equation} \label{eqn:record_corr_func} \rho_c(\tau) \equiv \frac{\text{Cov}(R_k R_{k+\tau})}{\text{Var}(R_k)} \end{equation} By definition $\rho_c(0)=1$. In appendix (\ref{app:corr_exp}) we compute the correlation between two successive records, namely $\rho_c(1)$ which reads \begin{equation} \label{eqn:record_corr_func_1} \rho_c(1) = (1-\lambda(c))(1-\ln(1-\lambda(c)))\, . \end{equation} When $c \to \infty$, from Eq.~(\ref{eqn:c_vs_lmbd}), $\lambda(c) \to 1$ and $\rho_c(1) \to 0$. This means that when $c \to \infty$ the records become uncorrelated. Indeed every random variable $\{ x_1, x_2, \dots \}$ becomes a record. On the other hand as $c \to 1$, $\lambda(c) \to 0$ and $\rho_c(1) \to 1$. This result signals that the correlations are very strong and we study numerically how fast they decay for the exponential case. In Fig. (\ref{fig:corr_exp}) we compute the correlation function (\ref{eqn:record_corr_func}) of a stationary record sequence for the exponential distribution and for different values of $c$. As in standard critical phenomena, the correlation length diverges when one approaches the critical point $c=1$. The numerical curves in Fig.~(\ref{fig:corr_exp}) are well fitted by an exponential law as $\rho_c(\tau) = \exp(-\tau/\xi(c))$. Using this fit we estimate the correlation length and find that it diverges as, $\xi(c) \sim 1/(c-1)^\nu$, with $\nu \approx 2$ (see Fig.~(\ref{fig:corr_exp})). This result is coherent with the estimation of $\xi(c)$ via $\xi(c) \sim -1/\ln \rho_c(1)$ coming from $\exp(-1/\xi(c))=\rho_c(1)$. Indeed as $c \to 1$, $\lambda(c) \sim 2(c-1)$ and $-1/\ln \rho_c(1) \sim (c-1)^{-2} $. \section{Stretched Exponential $f(x)$} \label{stretched} Let us recall from the summary of results in Section \ref{sec:record_process} that a stationary limiting distribution for $q_k(R)$ and $\pi_k(n)$ exist for any $c>0$, if $f(x)$ decays faster than $e^{-x}$ for large $x$. In the complementary case when $f(x)$ has a slower than exponential tail, there is no stationary limiting distribution. In the borderline case $f(x)=e^{-x}$ one has a phase transition at $c=1$, separating the non-stationary phase ($c\le 1$) and the stationary phase $(c>1)$, as demonstrated in detail in the previous section. In this section, we will investigate the two complementary cases of $f(x)$: respectively with a `faster' and a `slower' than exponential tail. We will do so by choosing $f(x)$ from the stretched exponential family, defined on the positive real axis $x\ge 0$, \begin{equation} \label{stretched_f.1} f_{\rm stretched}(x)=\frac{\gamma}{\Gamma \left(\frac{1}{\gamma}\right)}\, e^{-x^\gamma}\, \quad {\rm for}\,\, \gamma>0 \end{equation} with cumulative $F_{\text{stretched}}(x)=1- \frac{\Gamma\left(\frac{1}{\gamma}, x^\gamma\right)}{\Gamma \left(\frac{1}{\gamma}\right) }$, where $\Gamma(s,t)=\int_t^\infty x^{s-1} e^{-x} dx $ the incomplete gamma function. This family of $f(x)$ in Eq. (\ref{stretched_f.1}) includes the `borderline' exponential as the special case $\gamma=1$. Furthermore, the case $\gamma>1$ would correspond to a `faster' than exponential, while $\gamma<1$ would correspond to `slower' than exponential tail. The reason why $\gamma=1$ is a borderline case, i.e., why the presence of a finite $c>0$ afftects the record statistics differently for $\gamma<1$ and $\gamma>1$ can be understood intuitively using the extreme value statistics as follows. Consider first $c=0$. Let us consider the first $N$ steps of the infinite I.I.D sequence. The value of the last record in this series of size $N$ then coincides, for $c=0$, with the global maximum $X_{\max}$ up to $N$ steps. For the stretched exponential $f(x)$ in Eq. (\ref{stretched_f.1}), it is well known from the theory of extreme value statistics (for a recent review see~\cite{MPS20}) that while the mean value $\langle X_{\max}\rangle \sim (\ln N)^{1/\gamma}$ for large $N$, the variance scales as \begin{equation} \sigma^2=\langle X_{\max}^2\rangle-\langle X_{\max}\rangle^2 \sim (\ln N)^{2(1-\gamma)/\gamma}\, . \label{var_M} \end{equation} Hence, for $0<\gamma<1$, the width of the fluctuation grows with increasing $N$, while for $\gamma>1$ it decreases for large $N$. Now, imagine switching on a small $c>0$. If $0<\gamma<1$, an addition of a finite offset $c$ will not affect the record statistics, since it is much smaller than the fluctuation of the record value for large $N$. In contrast, for $\gamma>1$ where the width is of $O(1)$ for large $N$, the record value and its statistics will obviously be more sensitive to a finite offset $c$. The case $\gamma=1$ is thus marginal. In fact, this change of behavior at $\gamma=1$ for the stretched exponential family was also noticed in the asymptotic behavior of the density of near extreme events, i.e., the density of entries near the global maximum in an I.I.D series~\cite{SM2007}. Below, we will provide a more precise derivation of this change of behavior in the record statistics at $\gamma=1$ due to a nonzero $c>0$. In this section, for simplicity we focus only on one observable, namely the record value distribution $q_k(R)$. Our main goal here is to understand the criteria for having a stationary distribution for $q_k(R)$ in the limit $k\to \infty$. The other two observables $\langle M_N\rangle$ and $\pi_k(n)$ can also be studied in principle, but we will skip them here to keep the paper shorter. For the $c$-record model with $f(x)$ belonging to this class, we then have two parameters $(\gamma,c)$. Our goal is to find in which region in the $(\gamma\ge 0,\, c\ge 0)$ quadrant, we have a stationary solution for $q_k(R)\to q(R)$ as $k\to \infty$. We will show that this leads to an interesting phase diagram shown in Fig.~(\ref{fig:phase_diagram}). \begin{figure} \centering \includegraphics [width=\linewidth] {Figures/figure9.pdf} \caption{Phase diagram for the stretched exponential case. For $\gamma < 1$ no stationary limit exists for any $c>0$ while for $\gamma > 1$ it always exists. The exponential distribution $\gamma=1$ corresponds to the marginal case, for which a stationary state exists only for $c>1$. } \label{fig:phase_diagram} \end{figure} We start from the general recursion relation for $q_k(R)$ in Eq. (\ref{eqn:recurrence_qRk}), valid for general $f(x)$. We assume that there exists a stationary solution $q(R)$ which would then satisfy the integral equation (\ref{eqn:qR_sc}). Our strategy would be to find, for $f(x)$ given in Eq. (\ref{stretched_f.1}), if Eq. (\ref{eqn:qR_sc}) allows a normalizable solution $q(R)$. If it does not, there is no stationary solution. For later analysis, it is first convenient to define \begin{equation} q(R)= f(R)\, G(R) \label{def_GR} \end{equation} and rewrite Eq. (\ref{eqn:qR_sc}) as \begin{equation} G(R)=\int_0^c G(R')\, f(R')\, dR' + \int_c^{R+c} \frac{G(R') f(R')}{1-F(R'-c)} dR' \label{GR.1} \end{equation} where we recall $F(R)=\int_0^R f(y)dy$. By taking a derivative with respect to $R$ one gets \begin{equation} \label{eqn:GR_diff_eq} G'(R)=\frac{f(R+c)}{1-F(R)}\, G(R+c) \, . \end{equation} This is a first-order non-local differential equation. We need only one `boundary' condition, i.e., the value of $G(R)$ at some point $R$, to fix the solution uniquely. To find such a condition, we note that the solution of this differential equation need, in addition, to satisfy the original integral equation (\ref{GR.1}). Substituting, e.g., $R=0$ in Eq. (\ref{GR.1}) gives a condition \begin{equation} G(0)= \int_0^c G(R')\, f(R')\, dR' \, . \label{G0.1} \end{equation} This shows that $G(0)$ is a constant and the solution must satisfy this condition (\ref{G0.1}) self-consistently. Another compatibility condition follows by investigating the large $R$ limit of Eq. (\ref{GR.1}). If the limit $G(\infty)$ exists, one obtains \begin{equation} G(\infty)= \int_0^c G(R')\, f(R')\, dR' + \int_c^{\infty} \frac{G(R') f(R')}{1-F(R'-c)} dR' \, . \label{GR.2} \end{equation} For arbitrary $f(x)$, it is hard to find a general solution to Eq. (\ref{eqn:GR_diff_eq}) with boundary condition (\ref{G0.1}) or (\ref{GR.2}). Hence, below we focus on the stretched exponential class in Eq. (\ref{stretched_f.1}), for which the first-order equation (\ref{eqn:GR_diff_eq}) reduces to \begin{equation} \label{eqn:GR_diff_eq_sexp} G'(R)= \frac{e^{-(R+c)^\gamma}}{\left[\int_R^\infty dR' e^{-{R'}^\gamma}\right]}\, G(R+c)\, . \end{equation} Note that for $\gamma=1$, Eq. (\ref{eqn:GR_diff_eq_sexp}) reduces to $G'(R)= e^{-c}\, G(R+c)$ and this leads to the nontrivial solution $q(R)= \lambda(c)\, e^{-\lambda(c)\, R}$ for all $R\ge 0$ with $\lambda(c)$ given in Eq. (\ref{eqn:exp_c_lmbd}), as discussed in the previous section. \vskip 0.3cm \noindent {\em The case $c=0$ and arbitrary $\gamma>0$.} Let us first start with $c=0$ case with arbitrary $\gamma>0$. In this case, Eq. (\ref{eqn:GR_diff_eq_sexp}) becomes local in $R$ whose general solution can be easily found \begin{equation} G(R)= G(R_0)\, \frac{\int_{R_0}^{\infty}\,e^{-x^{\gamma}}\, dx}{\int_{R}^{\infty}\, e^{-x^{\gamma}}\, dx}\, , \label{GR_sol.1} \end{equation} where $R_0$ is arbitrary. The condition (\ref{G0.1}) says that for $c=0$, $G(0)=0$ identically. If we choose $R_0=0$ in Eq. (\ref{GR_sol.1}), then using $G(0)=0$, we see that the only possible solution for $G(R)$ is just $G(R)=0$ for all $R$. Consequently, using Eq. (\ref{def_GR}), we get $Q(R)=0$ which evidently can not be normalized to unity. This indicates that there is no stationary solution $q(R)$ for $c=0$ and arbitrary $\gamma>0$. In other words, there is no stationary solution on the horizontal axis $c=0$ in the phase diagram in Fig. (\ref{fig:phase_diagram}). \vskip 0.3cm \noindent {\em The case $0<\gamma<1$ and arbitrary $c> 0$.} In this case, we want to show that there is no limiting stationary solution $q(R)$ (see the (red) shaded region in the phase diagram in Fig. (\ref{fig:phase_diagram})). In other words, we will show that a solution to Eq. (\ref{eqn:GR_diff_eq}) satisfying the condition (\ref{GR.2}) does not exist for $\gamma<1$ with $c\ge 0$ arbitrary. To show this, it is sufficient to investigate Eq. (\ref{eqn:GR_diff_eq}) for large $R$, keeping $c\ge 0$ fixed. For large $R$, let us first assume that the limit $G(\infty)$ in Eq. (\ref{GR.2}) exists. Since $G(R)$ approaches a constant as $R\to \infty$, it follows that for any arbitrary $c\ge 0$ and large $R$, we must have $G(R+c)\to G(\infty)$ on the r.h.s of Eq. (\ref{eqn:GR_diff_eq}). This gives, for large $R$, \begin{equation} G'(R) \approx G(\infty)\, \frac{e^{-(R+c)^\gamma}}{\left[\int_R^\infty dR'\, e^{-{R'}^\gamma}\right]} \, . \label{GR_left.1} \end{equation} Now consider first the large $R$ behavior of the denominator on the r.h.s of Eq. (\ref{GR_left.1}). It is easy to show that, to leading order for large $R$, \begin{equation} \int_R^{\infty} e^{-{R'}^\gamma}\, dR' \approx \frac{1}{\gamma}\, R^{\gamma-1}\, e^{-R^{\gamma}}\, . \label{GR_left.2} \end{equation} Substituting this in Eq. (\ref{GR_left.1}) one obtains for large $R$ and for any $\gamma>0$ \begin{eqnarray} G'(R) &\approx & G(\infty)\, \gamma\, R^{\gamma-1}\, e^{-(R+c)^{\gamma}+ R^{\gamma}} \nonumber \\ &\approx& G(\infty)\,\gamma\, R^{\gamma-1}\, e^{-\gamma\, c\, R^{\gamma-1}}\, . \label{GR_larger.1} \end{eqnarray} Consider now the case $0<\gamma<1$. In this case, the argument of the exponential on the r.h.s of Eq. (\ref{GR_larger.1}) vanishes, i.e., $e^{-\gamma\, c\, R^{\gamma-1}}\to 1$ as $R\to \infty$. Consequently, integrating Eq. (\ref{GR_larger.1}), one finds that $G(R)\sim R^{\gamma}$ actually grows with increasing $R$ for $\gamma<1$. But this is incompatible with the condition (\ref{GR.2}) and our starting assumption $G(\infty)$ is finite. In fact, this is also incompatible with the original integral equation (\ref{GR.1}). As $R\to \infty$, the l.h.s of Eq. (\ref{GR.1}) grows as $R^{\gamma}$, while the r.h.s approaches a constant since the integral on the r.h.s is convergent as $R\to \infty$. Hence we conclude that for $0<\gamma<1$ and $c\ge 0$, there is no stationary solution for $G(R)$, and equivalently for $q(R)$ leading to the (red) shaded area of the phase diagram in Fig. (\ref{fig:phase_diagram}). \vskip 0.3cm \noindent {\em The case $\gamma>1$ and arbitrary $c> 0$.} Let us first check that in this case there is no obvious incompatibility between the large $R$ behavior in Eq. (\ref{GR_larger.1}) and the original integral equation (\ref{GR.1}). Indeed, for $\gamma>1$, the exponential factor $e^{-\gamma\, c\, R^{\gamma-1}}$ on the r.h.s of Eq. (\ref{GR_larger.1}) decays rapidly, and integrating over $R$ we find that $G(R)$ approaches a constant as $R\to \infty$, which is perfectly compatible with Eq. ({\ref{GR.1}) in the large $R$ limit, or equivalently with Eq. (\ref{GR.2}). This already indicates that there is a normalizable stationary solution for any $c>0$ and $\gamma>1$. To compute explicitly this solution for arbitrary $c>0$ (and $\gamma>1$) still seems rather hard. Below, we compute this stationary distribution for $\gamma>1$ in two opposite limits: (i) $c\to 0^+$ and (ii) $c\to \infty$ limit. \vskip 0.3cm \noindent{\em The limit $c\to 0^+$ and $\gamma>1$ arbitrary.} We start with the $c\to 0$ limit with fixed $\gamma>1$. We fix $R$ in Eq. (\ref{eqn:GR_diff_eq_sexp}) and take the limit $c\to 0^+$. To leading order for small $c$, we can approximate $G(R+c)\approx G(R)$ to make Eq. (\ref{eqn:GR_diff_eq_sexp}) local and also expand $(R+c)^{\gamma}\approx R^{\gamma}+ \gamma\, c\, R^{\gamma-1}$ up to $O(c)$ with $R$ fixed. Eq. ({\ref{eqn:GR_diff_eq_sexp}) then reduces to \begin{equation} \frac{1}{G(R)}\, \frac{d G(R)}{dR}\approx \frac{e^{-R^{\gamma}-c\,\gamma\, R^{\gamma-1}} }{\left[\int_R^{\infty} dR'\, e^{-{R'}^{\gamma}}\right]}\equiv g_c(R)\, . \label{GR_smallc.1} \end{equation} Note that only $c$-dependence appears through the factor $c\,gamma\, {R'}^{\gamma-1}$ inside the exponential in $g_c(R)$. For $\gamma>1$, this term contributes significantly for large $R$, even when $c\to 0^+$. Hence, we can not neglect this $c$-dependent term, especially for large $R$. One can then easily integrate Eq. (\ref{GR_smallc.1}) and obtain the solution as \begin{equation} \label{eqn:GR_stretched_exp_small_c_solution} G(R) \approx G(0)\, \exp \left[\int_0^{R} g_c(x)\, dx\right]\, , \end{equation} where $G(0)$ is a constant and $g_c(x)$ is defined in Eq. (\ref{GR_smallc.1}). It is easy to show that for large $x$, $g_c(x)$ behaves as $g_c(x)\approx \gamma\, x^{\gamma-1}\, e^{-\gamma\, c\, x^{\gamma-1}}$. Hence for $\gamma>1$, the integral $\int_0^\infty g_c(x)\, dx$ is perfectly convergent and is just a constant. Consequently, we find from Eq. (\ref{eqn:GR_stretched_exp_small_c_solution}) that $G(R)\to G(\infty)=G(0)\,\exp\left[\int_0^\infty g_c(x)\, dx\right]$ as $R\to \infty$. Thus the stationary solution $q(R)$ in this $c\to 0^+$ limit is given by \begin{equation} q(R)\approx \frac{\gamma}{\Gamma \left(\frac{1}{\gamma}\right)}\, e^{-R^{\gamma}}\, G(R) \approx f(R)\, G(R) \label{qr_smallc.1} \end{equation} where the function $G(R)$, given in Eq. (\ref{eqn:GR_stretched_exp_small_c_solution}), is well defined for any $R$ as long as $c\to 0^+$ (nonzero) and $\gamma>1$. Thus, in the $c\to 0$ limit, the stationary record value distribution $q(R)$ gets modified considerably from the parent distribution $f(R)$ by the multiplicative factor $G(R)$. \vskip 0.3cm \noindent{\em The limit $c\to \infty$ and $\gamma>1$ arbitrary.} Next we consider the opposite limit $c \to \infty$. When $c \to \infty$ every random variable in the series $\{x_1,\,x_2,\,x_3,\,\cdots\}$ is a record, hence $q(R) = f(R)$ and $G(R)=1$ for all $R\ge 0$. Now consider $c$ large, but not strictly infinite. In this case, the function $G(R)$ will change from its flat value $G(R)=1$ that is valid strictly for $c\to \infty$. However, we expect that $G(R)$, in the limit $R\to \infty$, is not very sensitive to $c$, i.e., $G(\infty)=1$ even for finite but large $c$. However, for finite but fixed $R$, we expect that $G(R)$ will deviate from its flat value $1$. To find this change in $G(R)$ for fixed $R$, to leading order for large $c$, we can use the approximation $G(R+c)\approx 1$ on the r.h.s of Eq. (\ref{eqn:GR_diff_eq_sexp}) and solve the resulting first-order local equation, leading to the solution \begin{equation} \label{GR_sol.2} G(R) = 1 - \int_R^\infty \frac{e^{-(R'+c)^\gamma}}{\left[\int_{R'}^\infty e^{-x^\gamma} dx\right]}\, dR'\, , \end{equation} where we used the expected boundary condition $G(\infty)=1$ mentioned above. For fixed $R$ and large $c$, we can approximate $(R+c)^{\gamma}\approx c^{\gamma}+\gamma\, c^{\gamma-1}\, R$. Using this approximation in the numerator of the integrand in Eq. (\ref{GR_sol.2}) and rescaling $z= \gamma\, c^{\gamma-1}\, R'$, we find that for $\gamma>1$ and in the scaling limit $c\to \infty$, $R\to 0$ such that the product $c^{\gamma-1}\, R$ is fixed \begin{equation} G(R)\approx 1- \frac{e^{-c^\gamma}}{c^{\gamma-1}\, \Gamma(\frac{1}{\gamma})}\, e^{-\gamma c^{\gamma-1}\,R}\, . \label{GR_sol.3} \end{equation} This is clearly compatible with the starting ansatz that $G(\infty)=1$. Finally, using this in Eq. (\ref{def_GR}) we get the stationary solution for $\gamma>1$ and large $c$ limit \begin{equation} q(R)\approx f(R)\, \left[1- \frac{e^{-c^\gamma}}{c^{\gamma-1}\, \Gamma(\frac{1}{\gamma})}\, e^{-\gamma c^{\gamma-1}\,R}\right]\, . \label{qR_sol.3} \end{equation} Thus, in the $c\to \infty$ limit, $q(R)$ approaches $f(R)$ with a small additive corection term as given in Eq. (\ref{qR_sol.3}). Summarizing, for $\gamma>1$ in the phase diagram in Fig. (\ref{fig:phase_diagram}), the record value distribution becomes stationary for large $k$, for any $c>0$. In the two limits $c\to 0$ and $c\to \infty$, the stationary distribution $q(R)$ is given respectively in Eqs. (\ref{qr_smallc.1}) and (\ref{qR_sol.3}). We have checked these results numerically in Fig. (\ref{fig:sexp_qR}) for $\gamma=2$. In this figure, we see that as $c$ increases, $q(R)$ progressively approaches $f(R)$. \begin{figure} \includegraphics[width=\linewidth]{Figures/figure12.pdf} \caption{Stretched exponential case: stationary record distributions $q(R)$ at $\gamma=2$ and for different $c$. As $c$ gets bigger, $q(R)$ approaches $f(R)$.} \label{fig:sexp_qR} \end{figure} We have also computed numerically the mean number of records $\langle M_N\rangle$ up to $N$ steps. As indicated in the phase diagram in Fig.~(\ref{fig:phase_diagram}), we expect that for $0<\gamma<1$ the record process is non-stationary, i.e., the effect of $c$ is insignificant and the system behaves similar to $c=0$. Hence in this regime ((red) shaded region in the phase diagram in Fig.~(\ref{fig:phase_diagram}), we expect $\langle M_N\rangle \simeq \ln N$ for large $N$, for any $c>0$. This prediction is verified numerically in Fig. (\ref{fig:M_N_sexp_log}), for $\gamma=1/2$ and for several values of $c$. In contrast, for $\gamma>1$ and $c>0$ ((blue) shaded region in the phase diagram in Fig.~(\ref{fig:phase_diagram})), we have a stationary phase. In this case, we expect a linear growth $\langle M_N\rangle \simeq A_1(c)\, N$ for large $N$, with an $c$-dependent amplitude $A_1(c)\le 1$. In the limit $c\to \infty$, we expect $A_1(c)\to 1$, since every entry becomes a record in this limit. In Fig. (\ref{fig:M_N_sexp_lin}), we verify this prediction for fixed $\gamma=2$ and for several values of $c$. \begin{figure} \includegraphics[width=\linewidth]{Figures/figure10.pdf} \caption{Stretched exponential case: logarithmic growth of the mean number of $c$-records for different $c$ and $\gamma=0.50$. The dashed line is a guide to the eye.} \label{fig:M_N_sexp_log} \end{figure} \begin{figure}[h!] \includegraphics[width=\linewidth]{Figures/figure11.pdf} \caption{Stretched exponential case: linear growth of the mean number of $c$-records for different $c$ and $\gamma=2$.} \label{fig:M_N_sexp_lin} \end{figure} \section{Other distributions} \label{other_dist} In this section, we study other classes of $f(x)$. First we consider the uniform distribution of $f(x)$ over the bounded interval $[0,1]$ for which we present exact analytical results for all three observables $\langle M_N\rangle$, $q_k(R)$ and $\pi_k(n)$. We then consider more general bounded distributions. Bounded distributions belong to the family of $f(x)$ with a `faster than' exponential tail, hence we anticipate and demonstrate below that both $q_k(R)$ and $\pi_k(n)$ allow stationary limiting distributions as $k\to \infty$ for bounded $f(x)$. We then consider another class of unbounded distribution, which we call the Weibull class \begin{equation} f_{\text{Weibull}}(x) = \gamma\, x^{\gamma-1}\, e^{-x^\gamma}\, \quad {\rm for}\,\, \gamma>0 \label{weibull.1} \end{equation} with $F_{Weibull}(x)=1-e^{-x^\gamma}$. It turns out that this $f(x)$ is easier to sample numerically using the inverse Transform method, as explained in the appendix (\ref{app:simulation}). We present detailed numerical results for all three observables $\langle M_N\rangle$, $q_k(R)$ and $\pi_k(n)$ for this case. \subsection{\label{sec:bounded_case}Bounded distributions} The $c$-record process associated to I.I.D time series drawn from bounded distributions has a well defined stationary limit for any $c>0$. For simpicity we restrict the interval to the segment $[0,1]$. This implies that for any $c > 1$, every entry $x_i$ of the time series is a record. \subsection{\label{sec:bounded_case_1} Uniform distribution } We first consider the uniform distribution $f(x)=\mathbb{I}_{[0,1]}(x)$ and show that the mean number of records, the stationary record distribution and their age distribution can be explicitly computed for $1/2\le c \le 1$. For $0<c < 1/2$. the calculations are more cumbersome and we rely on Monte Carlo simulations. To computed the mean number of records we study equation (\ref{eqn:recurrence_QR}) for the uniform distribution. Its expression simplifies to: \begin{multline} \label{eqn:QR_unif} \tilde{Q}(R) \left[1-z(R-c) \theta(R-c) \right] =\\ =z\,s+ z\,s\, \int_0^{\min(1,R+c)} \tilde{Q}(R') dR' \end{multline} This equation is solved in appendix (\ref{app:bounded_case_PNM}) for $1/2\le c \le 1$ and the exact mean number of records reads: \begin{equation} \label{eqn:exp_MN_uniform} \langle M \rangle_N = (2-c+\ln(c))N + \frac{1-c}{c} + \ln(c) \end{equation} Note that when $c=1$, we get $ \langle M \rangle_N =N$ as expected. In Fig. (\ref{fig:M_N_unif}) we numerically computed $\langle M \rangle_N$ for different values of $c$ and included the analytical predictions for $1/2\le c \le 1/2$. The stationary record distribution $q(R)$ satisfies Eq. (\ref{eqn:qR_sc_derivative}) together with the condition that $q(R)=0$ for $R>1$: \begin{eqnarray} \label{eqn:qR_unif_diff_eq} q'(R) = \begin{cases} 0 &\quad \; 1-c<R<1 \\ &\\ \frac{q(R+c)}{1-R} & \quad \; 0< R<1-c \end{cases} \end{eqnarray} Eq. (\ref{eqn:qR_unif_diff_eq}) is valid for any $0 \leq c \leq 1$. For $c \geq 1/2$, it can be simply solved. In particular, imposing the global normalization, one gets: \begin{equation} \label{eqn:qR_unif_distr} q(R) = \frac{1}{2-c+\ln(c)} \begin{cases} 1 & 1-c<R<1 \\ 1-\ln \frac{1-R}{c} & 0 < R < 1-c \end{cases} \end{equation} In Fig.~(\ref{fig:uniform_case}) (upper panel) we show $q(R)$ for different values of $c$ along with the analytical predictions for $1/2\le c \le 1$. The stationary age distribution $\pi(n)$ (for $1/2\le c \le 1$) follows from Eqs. (\ref{eqn:pi_stationary}) and (\ref{eqn:qR_unif_distr}): \begin{multline} \label{eqn:pn_unif_distr} \pi(n)=\frac{1+\ln(c)}{2+\ln(c)-c} \delta_{n,1} + \\ +\frac{1}{2+\ln(c)-c} \frac{(1-c)^n}{n} \left[1-\frac{n(1-c)}{n+1}\right] \end{multline} $\pi(n)$ shows a delta peak at $n=1$, as in the exponential case. For large $n$, $\pi(n)$ decays exponentially over a characteristic scale $-1/\ln(1-c)$ that gets smaller as $c \to 1$. The fact that $\pi(n)$ has a well defined first moment is compatible with the scaling of the average number of records, namely $\langle M \rangle_N \propto N$ as $N \to \infty$. \begin{figure} \includegraphics[width=\linewidth]{Figures/figure13.pdf} \caption{Uniform case: mean number of $c$-records for different $c$. For $c \geq 1/2$ we include the analytical predictions in Eq. (\ref{eqn:exp_MN_uniform}) as shown by dashed lines. The agreement is excellent.} \label{fig:M_N_unif} \end{figure} \begin{figure}[h!] \includegraphics[width=\linewidth]{Figures/figure14.pdf} \caption{ Record (upper panel) and age (lower panel) distributions for various $c$ for the uniform distribution. Analytical results of Eqs. (\ref{eqn:qR_unif_distr}) and (\ref{eqn:pn_unif_distr}) are shown by solid lines in the upper panel.} \label{fig:uniform_case} \end{figure} \subsection{Generic bounded distributions} The $c$-record process associated to a generic bounded distribution is more difficult to characterize analytically. However some of the features that we found for the case of the uniform distribution remain valid. In particular for any value of $c$ the mean number of records grows linearly with $N$ (at large $N$) and the stationary distribution of ages displays an exponential cutoff. As an illustration we studied numerically the family of distribution wth cumulative $F(x)=1-(1-x)^\nu$. The uniform distribution corresponds to $\nu=1$. In Fig. (\ref{fig:bounded2_case}) we report our results for $\nu=2$: in the upper panel the study of the stationary record distribution $q(R)$ and in the lower panel the age distribution $\pi(n)$. \begin{figure}[h!] \includegraphics[width=\linewidth]{Figures/figure15.pdf} \caption{ Record (upper panel) and ages (lower panel) distributions for various $c$ for the bounded case with cumulative distribution $F(x)=1-(1-x)^{2}$ with $0\le x\le 1$.} \label{fig:bounded2_case} \end{figure} \subsection{Numerical results for the Weibull family} In this section we briefly summarize the numerical results for $\langle M \rangle_N$ $q(R)$, $\pi(n)$ for the Weibull family. In Fig. (\ref{fig:M_N_weibull}) we show the mean number of records $\langle M \rangle_N$ as a function of $N$ for $10^3$ realizations of the $c$-record process. For large enough $N$ the scaling $\langle M \rangle_N \propto N$ is recovered. In Fig.~(\ref{fig:weibull_record_distr}) we show the stationary record distribution for the Weibull distribution at $\gamma=2$. We find a stationary distribution for any $c>0$. As $c$ gets bigger, $q(R)$ approaches $f(R)$, as expected. In Fig. (\ref{fig:weibull_avg_record}) we show the average record $\langle R \rangle_\gamma (c)$ as a function of $c$ for different $\gamma$'s. We also insert the scaling as $c \to 0^+$ of the average record value obtained using the argument in appendix \ref{app:avg_record}: \begin{equation} \label{eqn:avg_record_argument_weibull} \langle R \rangle_\gamma (c) \approx (c \gamma)^{\frac{1}{\gamma-1}} \end{equation} Finally Fig.~(\ref{fig:gap_distr_weibull}) shows the age of record distributions at $\gamma=2$ for different values of $c$. The distributions have a power law decay as $n \to \infty$ with an exponent $\tau \geq 2$, i.e., $\pi(n) \sim n^{-\tau}$. This numerical result is compatible with the scaling $\langle M \rangle_N \propto N$ of the average number of records since a power law with $\tau \geq 2$ has a well defined first moment. \begin{figure}[h!] \includegraphics[width=\linewidth]{Figures/figure16.pdf} \caption{Weibull case with $\gamma=2$: mean number of $c$-records for various $c$. As $c$ gets smaller, the converges to $\langle M \rangle_N \propto N$ is } \label{fig:M_N_weibull} \end{figure} \begin{figure}[h!] \includegraphics[width=\linewidth]{Figures/figure17.pdf} \caption{Record distributions for the Weibull family with $\gamma=2$ for various $c$.} \label{fig:weibull_record_distr} \end{figure} \begin{figure}[h!] \includegraphics[width=\linewidth]{Figures/figure18.pdf} \caption{ Average record value for various $\gamma$ and $c$ for the Weibull family. We plot with the dashed black line the expected scaling of the average for $c \to 0^+$ (see Eq. (\ref{eqn:avg_record_argument_weibull})).} \label{fig:weibull_avg_record} \end{figure} \begin{figure}[h!] \includegraphics[width=\linewidth]{Figures/figure19.pdf} \caption{ Age of record numerical distributions for the Weibull family at $\gamma=2$ and varying $c$. All the distributions show a power law tail with exponent $\geq 2$.} \label{fig:gap_distr_weibull} \end{figure} \section{\label{sec:generalization}Generalizations of the record process} Before the conclusion we would like to discuss some possible generalizations of the $c$-record problem. For simplicity here we focus only on the conditions for the existence of a stationary record distribution and on its form. We let the calculations of the mean number of records and of the age statistics to future works. Few protocols can be considered as a straightforward generalization of the $c$-record process: \begin{itemize} \item The constant $c$ can be promoted to be a positive random variable with distribution $g(c)$. For $f(x)=e^{-x} \theta(x)$ the fixed point equation for the stationary record distribution, averaged over all possible values of $c$ (annealed average), writes: \begin{equation} q'(R)=\int_0^\infty g(c) q(R+c) dc - q(R) \end{equation} Remarkably this equation admits an exponential solution $q(R)=\lambda e^{-\lambda R}$ if the equation \begin{equation} 1-\lambda = \tilde{g}(\lambda) \equiv \int_0^\infty e^{-\lambda c} g(c) dc \end{equation} has a positive solution $\lambda$. For example if $g(c)$ is an exponential distribution, a stationary state is reached if its mean is bigger than $1$. \item The definition of the $c$-record can be extended with a function $c(R)$, namely $R_k$ is a record if $R_k > R_{k-1} - c\,(R_{k-1})$. As a concrete example one can consider $c(R)=c\, R$ with $c<1$ (for $c \geq 1$ all the values of the time series are records): \begin{equation} R_k > (1-c)\, R_{k-1} \end{equation} The fixed point equation for the stationary record distribution $q(R)$ satisfies: \begin{flalign} \label{eqn:qR_sc_generalized_diff} & q'(R)= \frac{f(R)}{1-F(R)} \frac{1}{1-c} q\left(\frac{R}{1-c}\right) + \\ & +\frac{f'(R)}{f(R)} q(R) \nonumber \end{flalign} Equation (\ref{eqn:qR_sc_generalized_diff}) becomes simple for a Pareto distribution $f(x)=\frac{\alpha}{x^{\alpha+1}} \theta(x-1)$: \begin{equation} \label{eqn:qR_sc_generalized_diff_pareto} q'(R)= \frac{\alpha}{R(1-c)} q\left( \frac{R}{1-c} \right) - \frac{\alpha+1}{R} q(R) \end{equation} The stationary state exists for $c > 1-e^{-\frac{1}{\alpha}}$ and the solution of (\ref{eqn:qR_sc_generalized_diff_pareto}) still a Pareto distribution $q(R)=\frac{\beta}{R^{\beta+1}} \theta(R-1)$ with $\beta$ the unique positive solution of the trascendental equation \begin{equation} 1-\frac{\beta}{\alpha}=(1-c)^\beta \end{equation} The records generated by this process are equivalent to the $c$-records discussed in this paper via the map $R_k \to \ln R_k$. Under this mapping the Pareto distribution becomes the exponential distribution. \item Finally we consider the following $k$-dependent record condition: \begin{equation} R_k > R_{k-1} - c(k+1)^{b-1} \end{equation} for a constant $b>0$. This protocol has been considered in~\cite{PK16} in the context of evolutionary biology: the quantity $c(k+1)^{b-1}$ is called \textit{handicap} and the analysis is carried out for both increasing $b>1$ and decreasing $b<1$ handicaps. The case $b=1$ coincides with the $c$-record process. We refer the reader to the original work \cite{PK16} for details. \end{itemize} \section{\label{sec:conclusion}Conclusion} In this paper, we have shown that a simple record model of an I.I.D series, which we call the $c$-record model, can be successfully used to understand and explain several realistic features of avalanche statistics in disordered systems, an example being the earthquake dynamics in seismicity. This model has a single parameter $c\ge 0$ and the other input is the distribution $f(x)$ of an entry. We have focused on three natural observables: (i) the mean number of records $\langle M_N\rangle$ up to step $N$ in an infinite series (ii) the distribution $q_k(R)$ of the value of the $k$-th record and (iii) the distribution $\pi_k(n)$ of the time interval $n$ between the $k$-th and the $(k+1)$-th record. One of our main conclusions is that if $f(x)$ decays, for large $x$, slower than an exponential, both $q_k(R)$ and $\pi_k(n)$ do not have stationary limits as $k\to \infty$ and $\langle M_N\rangle \sim \ln N$ for large $N$, as in the $c=0$ case. Thus the effect of $c$ is not very significant for $f(x)$ with a slower than exponential tail. In contrast, if $f(x)$ has a faster than exponential tail, both $q_k(R)\to q(R)$ and $\pi_k(n)\to \pi(n)$ approach stationary limiting forms as $k\to \infty$. In particular, we show that $\pi(n)$ decays faster than $1/n^2$ for large $n$ (indicating that $\langle n\rangle$ is finite). Additionally, in this case, the mean number of records grows linearly as $\langle M_N\rangle \sim A_1(c)\, N $ for large $N$ with $A_1(c)\le 1$. Thus, for $f(x)$ decaying faster than exponential, the statistics of these three observables for finite $c$ are fundamentally different from the standard $c=0$ case. When $f(x)$ has an exponential tail, it turns out to be a marginal case where there is a phase transition at a critical value $c_{\rm crit}$. For $c<c_{\rm crit}$, the observables have qualitaively similar behavior as the $c=0$ case. In contrast for $c>c_{\rm crit}$, both $q_k(R)$ and $\pi_k(n)$ have stationary limits as $k\to \infty$. We have illustrated by an explicit calculation for $f(x)=e^{-x}\theta(x)$ for which $c_{\rm crit}=1$. In this case we have shown that for $c>1$, the stationary avalanche size distribution $\pi(n)\sim n^{-1-\lambda(c)}$ has a power law tail for large $n$ with $\lambda(c)\le 1$, indicating that the first moment diverges. Remarkably, the exponent $\lambda(c)$ depends continuously on $c$ and is given by the root of the transcendental equation, $c=- \ln(1-\lambda)/\lambda$. We have also computed exactly the stationary record value distribution $q(R)$ for $c>1$ and shown that it is a pure exponential, $q(R)= \lambda(c)\, e^{-\lambda(c)\, R}$, for all $R\ge 0$. An important feature of this $c$-record model is a nontrivial correlation structure between record values, as well as between record intevals. In this paper we have explored only partially this structure, and it would be interesting to characterize this correlation structure in a more complete fashion. We have also provided some generalisations of this simple $c$-record model, where the criteria for record formation, i.e., the offset $c_k$ depends on the record value $R_k$ as well as on the record index $k$. In all these cases, the offset $c_k$ remains constant (albeit $k$-depenent) as in the lower panel of Fig. (\ref{fig:schema}). Previously, the linear trend model was studied where the offset decreases linearly with time during an avalanche (as in the upper panel of Fig. (\ref{fig:schema})). One can then ask how the record statistics gets modified for a general decreasing offset function during an avalanche. Finally, in this paper, we have considered a series of I.I.D variables as a model for the pinning force landscape. It is natural to consider a model where the landscape is correlated. For example, in the ABBM model, the entries of the series correspond to the positions of a random walker. This remains a challenging open problem. \acknowledgements We thank O. Giraud for an illuminating discussion. We are grateful to J. Krug and S.-C. Park for very useful exchanges and correspondences.
{ "attr-fineweb-edu": 2.087891, "attr-cc_en_topic": 0, "domain": "arxiv" }
BkiUdxQ5qoTBAkU3VO8E
\section{Introduction} \label{sec:introduction} The 2014 FIFA World Cup championship in Brazil expected to attract the attention of more than 3 billion people worldwide is the quadrennial soccer tournament since 1930. Whereas, the most popular design for the FIFA soccer ball, the Telstar, which was used in the official logo for the 1970 World Cup, consists of twelve black pentagonal and 20 white hexagonal panels, a truncated icosahedron belonging icosahedral point group\cite{soccer_ball,kotshick:05}. Since then, a number of different designs have appeared, some with small variations such as the Fevernova (2002) and the Teamgeist (2006) which still have icosahedral symmetry but with low-symmetry tetrahedral patterns painted on the ball; some balls only show lower polyhedral patterns such as Jabulani (2012) with tetrahedral symmetry without icosahedral symmetry superimposed. However, the soccer ball, ``Brazuca'' ball, used for the World Cup this summer in Brazil has a new design based on octahedral symmetry. Basically, the ``Brazuca'' ball is composed of six bonded polyerethane panels with four-arm clover-shaped panels that interlock like a jigsaw puzzle smoothly on a sphere\cite{brazuca,asai:14}. In 1985, it was discovered that, in addition to diamond and graphite, carbon atoms can have a third new allotrope consisting of 60-atom spherical molecules, $\ce{C60}$, sometimes nicknamed molecular soccer ball because the shape of this molecule is identical to the standard soccer ball, with $60$ atoms located at $60$ identical vertices\cite{Smalley85}. More generally, this molecule belongs to a family of sp$^2$-hybridized pure carbon systems now called fullerenes that contain only five- and six-membered rings. Since then, structures of fullerenes have been extensively studied experimentally and theoretically. Under this constraint, considerable effort has been devoted to detailed enumerations of possible structures. For instance, a complete list of fullerenes with less than or equal to 60 carbon atoms and all fullerenes less than and equal to $100$ carbon atoms that satisfy the isolated pentagon rule (IPR) is tabulated in the monograph by Fowler and Manolopolous\cite{Fowler07}. Among all these fullerenes $C_{N}$, $N\le 100$, the possible symmetry point groups for fullerenes are $C_1$, $C_s$, $C_i$, $C_m$, $C_{mv}$, $C_{mh}$, $S_{2m}$, $D_n$, $D_{nd}$, $D_{nh}$, $T$, $T_d$, $T_h$, $I$ and $I_h$, where $m$ can be $2$ or $3$ and $n$ can be $2$, $3$, $5$ or $6$. However, only two out of three Platonic polyhedral groups, namely tetrahedral and icosahedral groups, seems to be possible for fullerenes. So the question is, can we have fullerenes with octahedral symmetry just like the ``Brazuca'' ball? If possible, what are the general construction and classification rules for this family of octahedral fullerenes? To answer this question, we start with the construction process of fullerenes with polyhedral symmetries through a simple cut-and-patch procedure as shown in Figure~\ref{Fig:Goldberg}. For instance, constructing a fullerene with icosahedral symmetry can be done by cutting 20 equivalent equilateral triangles from graphene and pasting them onto the triangular faces of an icosahedron. This will create twelve pentagons sitting at twelve vertices of the icosahedron\cite{Goldberg37,Caspar62,Fowler92}. Similar cut-and-patch procedure can be used to construct fullerenes with tetrahedral and octahedral symmetries, too (Figure~\ref{Fig:Goldberg}). However, the non-hexagons such as triangles and squares will appear at the vertices of the template tetrahedron and octahedron, which are in contradiction to the definition of fullerenes. In the case of tetrahedral fullerenes, we can replace the template tetrahedron with a truncated tetrahedron. This makes it possible to the construction of tetrahedral fullerenes without triangles by a suitable cut-and-patch construction scheme\cite{Fowler88}. But this technique is not applicable to octahedral fullerenes\cite{Fowler93,Kardos07}. \begin{figure}[h] \centering \includegraphics[width=12cm]{goldberg.pdf} \caption{Goldberg polyhedra by the cut-and-patch construction. Here we cut a equilateral triangle which can be specified by an vector $(2,1)$ (also known as Goldberg vector) from graphene and then patch the triangle onto different platonic solids to construct fullerenes with different polyhedral symmetries. The famous $\ce{C60}$, can be also constructed in this way using icosahedron as the template with Goldberg vector (1,1). \label{Fig:Goldberg}} \end{figure} Albeit the appearance of squares in these caged octahedral fullerenes leads to energetically unstable molecules, one can still find in literatures that some studies have been carried out on the geometric, topological\cite{Fowler01}, and electronic structures\cite{Huang96,Ceulemans05,Ceulemans05_2,Dunlap94} of fullerenes with octahedral symmetry by introducing squares on a template octahedron (Figure~\ref{Fig:Goldberg}). In addition to the pure carbon allotropes, the octahedral boron-nitride systems have also been vigorously investigated\cite{Jiao04,Scuseria06,Rogers00,Benson00,Dunlap04}. In this paper, we present a general cut-and-patch construction and classification scheme for fullerenes with octahedral symmetry by systematically introducing some other non-hexagons such as octagons with a cantellated cube as the template. The octahedral fullerenes previously considered in literatures are included as limiting cases in our general construction scheme\cite{Fowler01,Huang96,Ceulemans05,Ceulemans05_2,Dunlap94}. We also like to point out that the cut-and-patch method is a simple and powerful method for building various kinds of fullerenes and graphitic structures. For instance, we have applied this method successfully to many other template polyhedral tori and concluded general structural rules of carbon nanotori\cite{Chuang09_1,Chuang09_3}. From there, structural relations for a whole family of topologically nontrivial fullerenes and graphitic structures such as carbon nanohelices, high-genus fullerenes, carbon Schwarzites and so on can be derived\cite{Chuang09_2, jin:2013,jin:2010x,jin:2011a}. \section{Requirement of Octahedral Fullerenes} We start by briefly describing the icosahedral fullerenes that consist only of hexagons and pentagons. The simplest icosahedral fullerene that satisfies IPR is $\ce{C60}$, which can also be viewed as a truncated icosahedron, one of the thirteen Archimedean solids if we ignore the slight variation in bond lengths. In a truncated icosahedron, there are exactly twelve pentagons and twenty hexagons. This structure can be derived from a regular icosahedron by truncating the twelve vertices away appropriately. We will show that this is the only possibility if we want to construct an icosahedral fullerene with pentagons and hexagons only. Using the Euler's polyhedron formula, $V-E+F=2$ for a polyhedron with $V$ vertices, $E$ edges and $F$ faces, and the condition $3V=2E$ for trivalent carbon atoms in a fullerene, we can find easily the condition $\sum_{n}(6-n)F_{n}=12$, where $F_{n}$ is the number of $n$-gons. If we assign each face a topological charge $6-n$, the Euler's polyhedron formula states that the sum of topological charges of a trivalent polyhedron must be twelve. Therefore, fullerenes that contain only pentagons and hexagons must have twelve pentagons, i.e. $F_{5}=12$, while there is no constraint on the number of hexagons, $F_{6}$, except the case with only one hexagon, $F_6=1$, is forbidden. This conclusion is general and can be applied to any fullerene regardless of its symmetry. An arbitrary icosahedral fullerene can be classified by its chiral vector $(h, k)$, where $h$ and $k$ satisfy the inequality $h\ge k\ge 0\land h>0$, according to the Goldberg construction \cite{Goldberg37}. For instance, $\ce{C60}$ corresponds to the fullerene with chiral vector $(1,1)$. Interestingly, these twelve pentagons are located at the high-symmetry points along the six fivefold rotational axes of the icosahedral symmetry group. Suppose that these pentagons are not located at the high symmetry points, there should be five pentagons around each of these points in order to satisfy the symmetry requirement. Therefore, there must be $12\times 5 = 60$ pentagons in total. However, the condition, $\sum_n (6-n)F_n=12$, will require some $n$-gons where $n>6$ to compensate the extra topological charges introduced by these pentagons. So, we conclude that exactly twelve pentagons must be located at the high symmetry points along the six fivefold rotation axes. We apply the above analysis to the requirement for octahedral fullerenes. First, there are three fourfold axes, four threefold axes, and six twofold axes in the octahedral group. Pentagons are not compatible with any of the high symmetry points of octahedral groups. Therefore, there is no high symmetry point where pentagons can be located. The best we can do is to put clusters of pentagons around, for example, the three fourfold axes. Then we need to put twenty-four pentagons together with $n$-gons where $n>6$ to balance the topological charges. A simple way is to put six octagons at the six high symmetry points along three fourfold axes, so that the condition, $\sum_{n}(6-n)F_{n}=12$, is satisfied. The twofold or the fourfold axes can also be chosen\cite{Kardos07}, but the resulting fullerenes are considerably more energetically unfavored because additional non-hexagons need to be introduced. To illustrate this idea, we present a simple construction procedure using the cut-and-patch scheme as shown in Figure~\ref{fig:OF_Cut_Patch_Scheme}. We first cut the polygonal region as defined by the solid thick line from graphene (Figure~\ref{fig:OF_Cut_Patch_Scheme}(a)) and then patch twenty-four replica of it on a cantellated cube as shown in Figure~\ref{fig:OF_Cut_Patch_Scheme}(b). The points, $O$, $A$, and $B$ in Figure~\ref{fig:OF_Cut_Patch_Scheme}(a) overlap with vertices of the cantellated cube while $P_3$ and $P_4$ the centers of the triangle and the square faces, respectively. In this process, every four identical isosceles triangles, $\bigtriangleup OP_{4}A$, cover one square face (Figure~\ref{fig:OF_Cut_Patch_Scheme}(b)). Since the angular deficit at $P_{4}$ is $2\pi-4\times 2\pi/3=-2\pi/3$, it must correspond to the location of an octagon. On the other hand, the angular deficit at $O$ is $2\pi-(\pi+2\pi/3)=\pi/3$. Therefore a pentagon will be generated at $O$ by this cut-and-patch process. \begin{figure} \includegraphics[width=15cm]{figure123.pdf} \caption{Cut-and-patch procedure for constructing an octahedral fullerene. Points, $P_{2}$, $P_{3}$ and $P_{4}$, represent the high symmetry points of the octahedral symmetry respectively. $\bigtriangleup OP_{4}A$ ($\bigtriangleup OP_{3}B$) is one-third of a regular triangle (the dotted triangle in (a)) and $P_{4}$ ($P_{3}$) is the corresponding triangle center on graphene. Points $O$, $A$, and $B$ become positions where twenty-four equivalent pentagons are located at, while point $P_{4}$ becomes the position for one of six equivalent octagons after they are patched onto the cantellated cube. The two base vectors, $\protect\overrightarrow{OA}=(i,j)$ and $\protect\overrightarrow{OB}=(k,l)$, can in general be any two vectors such that $P_{4}$ does not coincide with an atom ({\it i.e.}, $i-j=3n$). $\{i,j,k,l\}=\{1,1,-2,2\}$ in this example. (c) The 3D geometry of an octahedral fullerene specified according to its topological coordinates \cite{Fowler92}. \label{fig:OF_Cut_Patch_Scheme}} \end{figure} We will define the area inside the solid thick line as shown in Figure~\ref{fig:OF_Cut_Patch_Scheme}(a) as the fundamental polygon. Note that the two base vectors, $\overrightarrow{OA}=(i,j)$ and $\overrightarrow{OB}=(k,l)$, in the fundamental polygon become the edges of the square and the regular triangle on the cantellated cube, respectively, as shown in Figure~\ref{fig:OF_Cut_Patch_Scheme}(b). For convenience, we refer to $(i,j)$ as the square base vector and $(k,l)$ the triangular base vector from now on. Using these two vectors, we can uniquely specify a scalene triangle with four integers $\{i,j,k,l\}$, which we will simply call the indices of octahedral fullerenes later. In additional to this scalene triangle, we also need to incorporate two extra triangles, $\bigtriangleup OP_{4}A$ and $\bigtriangleup OP_{3}B$, corresponding to one-third of the regular triangles which share the same edges with the scalene triangle. The numbers of carbon atoms inside $\bigtriangleup OP_{4}A$, $\bigtriangleup OP_{4}A$, and $\bigtriangleup OAB$ are $(i^2+ij+j^2)/3$, $(k^2+kl+l^2)/3$, and $|il-jk|$, respectively. After patching twenty-four fundamental polygons onto a cantellated cube, we get an octahedral fullerene with $8(i^2+ij+j^2+k^2+kl+l^2)+24|il-jk|$ carbon atoms. The octahedral fullerenes can be catagorized into two groups according to the sign of the angle $\theta$ formed by $\overrightarrow{OA}$ and $\overrightarrow{OB}$. Octahedral fullerenes with $\pi>\theta>0$ are in category $\alpha$, $\{i,j,k,l\}_{\alpha}$, and octahedral fullerenes $-\pi<\theta<0$ are in category $\beta$, $\{i,j,k,l\}_{\beta}$. This criterion is equivalent to determining the sign of $il-jk$, which stands for the signed area enclosed by the parallelogram spanned by the two base vectors up to a positive factor. Here we can take one step further to include the degenerate cases, {\it i.e.} when $\bigtriangleup OAB$ degenerates into a line, which can be considered as limiting cases when $\theta$ approaches to the boundaries of its range in each category. It is worthwhile to note that in general $\lim_{\theta\to0^+}\{i,j,k,l\}_{\alpha}$ is inequivalent to $\lim_{\theta\to0^-}\{i,j,k,l\}_{\beta}$ and $\lim_{\theta\to\pi^-}\{i,j,k,l\}_{\alpha}$ is inequivalent to $\lim_{\theta\to\pi^+}\{i,j,k,l\}_{\beta}$, as shown in Figure~\ref{fig:Degenerate}. On the other hand, the category letter in the subscript can be omitted when there is no ambiguity. We will elaborate in later sections. \begin{figure} \includegraphics[width=15cm]{figureDen.pdf} \caption{Four degenerate cases of octahedral fullerenes. (a) $\{4,1,8,2\}_{\alpha}$, (b) $\{4,1,8,2\}_{\beta}$, (c) $\{4,1,-8,-2\}_{\alpha}$ and (d) $\{4,1,-8,-2\}_{\beta}$. We have $\{4,1,8,2\}_{\alpha}\neq\{4,1,8,2\}_{\beta}$ and $\{4,1,-8,-2\}_{\alpha}\neq\{4,1,-8,-2\}_{\beta}$. However, $T_2\{4,1,8,2\}_{\alpha}=\{4,1,-8,-2\}_{\beta}$ and $T_2\{4,1,8,2\}_{\beta}=\{4,1,-8,-2\}_{\alpha}$. The $T_2$ transformation will be discussed in later sections. \label{fig:Degenerate}} \end{figure} Following the above cut-and-patch scheme, we can define a scalene triangle and thus the fundamental polygon, given the two base vectors $(i,j)$ and $(k,l)$ that satisfy the condition, $i-j=3n$. Each of these fundamental polygons uniquely defines an octahedral fullerene in non-degenerate case. When the two base vectors are parallel to each other, it is necessary to further specify the category explicitly. It is worthwhile to note that if the condition $i-j=3n$ is not satisfied, $P_4$ will coincide with a carbon atom, which is not allowed because this implies that the carbon atom is tetravalent. At first sight, one might think that there exists a one-to-one correspondence between an index, $\{i,j,k,l\}_{X}$, and an octahedral fullerene. But this is not true since it is possible that the octahedral fullerenes built from two different scalene triangles are in fact identical. We will study this issue in details in the next section. Finally we can identify three limiting situations if one of the three sides of the scalene triangle vanishes (see Fig.~\ref{fig:limiting_cases}). \begin{enumerate} \item The first limiting situation corresponds to a vanishing triangular base vector, $(k,l)=(0,0)$, which is referred to as type I octahedral fullerenes later on. The indices for this case have the form $\{i,j,0,0\}$. Thus, the length of the triangular base vector $\overrightarrow{OB}$ vanishes and all triangles in the cantellated cube shrink to single points. And the template polyhedron reaches the corresponding limit of the cantellation, namely the cube. Note also that three pentagons fuse to form a triangle at each corner of the cube, while the octagons remain at the centers of the faces of the cube. Thus, there are eight triangles and six octagons in the resulting octahedral fullerene. \item The second limiting situation corresponds to a vanishing square base vector, $(i,j)=(0,0)$, which we denote as type II. The indices for type II fullerenes are given by $\{0,0,k,l\}$. In this limit, the length of the square base vector $\overrightarrow{OB}$ vanishes and each square shrinks to a point. Thus, the template polyhedron reaches another limit of the cantellation, namely the octahedron. This case is identical to the Goldberg polyhedron illustrated in Figure~\ref{Fig:Goldberg}(c) and Figure~\ref{Fig:Goldberg}(f) . Four pentagons and one octagon fuse to form a square at each corner of the octahedron. Therefore, we have six squares in a type II octahedral fullerene. \item The last limiting situation, denoted as type III, is when the length of the third side of $\bigtriangleup OAB$, $\overrightarrow{AB}$, vanishes. In other words, $\overrightarrow{OA}$ is equal to $\overrightarrow{OB}$, i.e. $(i,j)=(k,l)$. One can show that $(i,j)=-(k,l)$ also corresponds to the same limiting case. $\{i,j,i,j\}$ and $\{i,j,-i,-j\}$ can be transformed to each other via additional symmetry transformations, $T_3$ or $T_4$, which will be introduced in the next section. The indices for this type are $\{i,j,i,j\}$ or $\{i,j,-i,-j\}$ and the template polyhedron in this limit is a cuboctahedron. Two pentagons at $A$ and $B$ fuse to become one square, and there are six octagons and twelve squares in total in this limiting case. Other collinear cases do not make the third side vanish though and pentagons will not fuse at all. In fact we can use $T_3$ or $T_4$ introduced later to make these two base vectors nonparallel. \end{enumerate} When none of the sides of the scalene triangle vanishes, the corresponding octahedral fullerenes will be denoted as type IV. \begin{figure}[h] \includegraphics[width=15cm]{figureLim.pdf} \centering \caption{Three limiting cases of octahedral fullerenes. (a)-(c) Type I octahedral fullerene with with $\{2, 2, 0, 0\}$; (d)-(f) Type II octahedral fullerene with with $\{0, 0, 1, 2\}$; (g)-(i) Type III octahedral fullerene with $\{2, 2, 2, 2\}$. In this case points $A$, $B$, and $P_2$ are coincident.} \label{fig:limiting_cases} \end{figure} \section{Index Symmetry} In the previous section, we showed that an octahedral fullerene can be constructed by cutting a fundamental polygon specified by a four-component index and its category, $\{i,j,k,l\}_{X}$ and patching twenty-four replica of this fundamental polygon onto a cantellated cube. We also pointed that this correspondence is not one-to-one, but many-to-one, since there are some symmetry relationships in this indexing scheme. In other words, we mean that there exist different indices $\{i,j,k,l\}_{X}$ that correspond to the same molecular structure. This section is devoted to find a systematic way to eliminate all such redundancies and fully characterize the nature of the index symmetry. In the limiting cases of octahedral fullerenes which belong to the types I to III, we only need one independent two-component vector to specify their indices. It is obvious that the index transformation arising from the geometric symmetry of graphene will lead to the same octahedral fullerene. For instance, a $\pi/3$ rotation about point $O$ will transform the index from $\{i,j,k, l\}_{X}$ to $\{-j,i+j,-l,k+l\}_{X}$ without altering the resulting octahedral fullerene. Therefore these two indices correspond to the same molecular structure and should only be counted once. In fact, this applies to all twelve symmetry operations belonging to the point group $C_\text{6v}$ of graphene. Here, we ignore symmetry operation $\sigma_h$ that lies in the plane of graphene because it does not move any carbon atom at all. So, all indices that can be related through these symmetry operations produce the same octahedral fullerene. This set of indices is called an orbit in group theory\cite{fujita}. So to enumerate octahedral fullerene is equivalent to enumerate different orbits of all possible indices. Indices belonging to the same orbit correspond to the same octahedral fullerene. In other words, only one out of the set of indices comprising an orbit is needed to represent an octahedral fullerene uniquely. In these three limiting situations, we can restrict the indices with the inequality, $i\ge j\ge 0\land i>0$ for type I, $k\ge l\ge0\land k>0$ for type II, and $i\ge j\ge 0\land i>0$ ($k=i$ and $l=j$) for type III to remove all redundancies arising from the $C_\text{6v}$ symmetry operations. The situation for type IV octahedral fullerenes is more complicated. In addition to the twelve symmetry operations from the point group $C_{6v}$, there are three more symmetry operations, $T_{2}$, $T_{3}$, and $T_{4}$ arising from different ways of dissecting each of the three different kinds of faces of a cantellated cube into fundamental polygons. For each dissection scheme, different squares or regular triangles are drawn, and the square or triangular base vectors will change respectively. Detailed description of these three symmetry operations will be described later. These extra symmetry operations introduce redundancies which cannot be removed by introducing inequalities of indices like the situations of types I to III. Although the redundancies produced by these three $T$-type symmetry operations cannot be removed by such index restrictions, the parts of redundancies originating from the sixfold rotational symmetry of graphene can be eliminated by introducing the canonical criterion, $i>0\land j\ge0$. This is because that these rotational operations commute with the three $T$-type operations, {\it i.e.} $\left[C_{6}^{n}, T_y\right]=0,$ where $y=2$, $3$ or $4$. Here, we do not impose the restriction, $i\ge j$, to remove the redundancies produced by the six mirror symmetries $M_{x}$. This will be discussed with the $T_2$ symmetry in the next section. \subsection{$T_{2}$ symmetry} The symmetry operation, $T_{2}$, comes from the two different ways to decompose a parallelogram as shown in Figure~\ref{fig:T2}. The $T_2$ operation stands for performing a local $C_{2}$ operation which rotate one of base vectors by $180^\circ$. Thus the index $\{i,j,-k,-l\}$ will generate the same octahedral fullerene with $\{i,j,k,l\}$. We can define $T_2$ explicitly with the following matrix notation \begin{align*} T_2: \begin{pmatrix} i\\j\\k\\l \end{pmatrix}_{X} \to \begin{pmatrix} i'\\j'\\k'\\l' \end{pmatrix}_{X'}= \begin{pmatrix} 1&0&0&0\\0&1&0&0\\0&0&-1&0\\0&0&0&-1 \end{pmatrix} \begin{pmatrix} i\\j\\k\\l \end{pmatrix}_{X}, \end{align*} where $X\neq X'$. Unlike usual matrix multiplications, we need to specify the category of the index before and after $T_2$ transformation. Since $il-jk$ stands for the signed area enclosed by the parallelogram spanned by these two vectors up to a positive factor, it is clear that under the transformations, $T_2$ or $M_x$, the signed area changes sign and hence the category. This is also true in the degenerate case. Therefore, enumerating indices only in a single category can remove redundancies produced by $T_2$ and $M_x$, but not those produced by $M_xT_2=T_2M_x$. \begin{figure} \includegraphics[width=15cm]{figureT2.pdf} \caption{The $T_{2}$ symmetry operation illustrated with the example $T_2\{4,1,-1,3\}_{\alpha}=\{4,1,1,-3\}_{\beta}$. If we choose $\{\protect\overrightarrow{OA}, \protect\overrightarrow{OB}\}_{\alpha}$ as the index, the corresponding fundamental polygon is $OP_{4}AP_{2}BP_{3}$. On the other hand, if we choose the index $\{\protect\overrightarrow{BC},\protect\overrightarrow{BO}\}_{\beta}=\{\protect\overrightarrow{OA}, -\protect\overrightarrow{OB}\}_{\beta}$, the fundamental polygon becomes $BP'_4CP_2OP'_3$. These two fundamental polygons essentially give the same octahedral fullerene with different ways of dissecting the parallelogram.\label{fig:T2}} \end{figure} \subsection{$T_{3}$ symmetry } The symmetry operation $T_{3}$ involves different ways of dissecting the equilateral triangles of the cantellated cube as shown in Figure~\ref{fig:T3}. For instance, one possible choice of the two base vectors for the scalene triangle is $\{\overrightarrow{OA}, \overrightarrow{OB}\}_{\alpha}$. However, there is another choice, $\{\overrightarrow{OA},\overrightarrow{OF}\}_{\alpha}$, which produce the same octahedral fullerene, but with a different way of dissecting the triangles of the cantellated cube. The $T_3$ transformation only changes the triangular base vectors. Unlike $T_2$ and $M_x$, the $T_3$ transformation does not change the category. Moreover, for the $T_{3,{\alpha}}$ transformation, which operates on octahedral fullerenes belonging to the category $\alpha$, we also need to impose an additional constraint on the domain $i'l'-j'k'\ge 0 \Rightarrow - ik-jk-jl\ge i^2+ij+j^{2}$. The explicit form of $T_{3,{\alpha}}$ can be written as \begin{align*} T_{3,{\alpha}}: \begin{pmatrix} i\\j\\k\\l \end{pmatrix}_{\alpha} \to \begin{pmatrix} i'\\j'\\k'\\l' \end{pmatrix}_{\alpha}= \begin{pmatrix} 1&0&0&0\\0&1&0&0\\2&1&1&1\\-1&1&-1&0 \end{pmatrix} \begin{pmatrix} i\\j\\k\\l \end{pmatrix}_{\alpha}. \end{align*} We may obtain $T_{3,{\beta}}$ easily by $T_{3,{\beta}}=M_xT_{3,{\alpha}}M_x$ and its domain by similar method. The inverse of $T_3$ transformation, namely $T_3^{-1}$, may be found by the usual matrix inversion, \begin{align*} T_{3,{\alpha}}^{-1}: \begin{pmatrix} i\\j\\k\\l \end{pmatrix}_{\alpha} \to \begin{pmatrix} i'\\j'\\k'\\l' \end{pmatrix}_{\alpha}= \begin{pmatrix} 1&0&0&0\\0&1&0&0\\-1&1&0&-1\\-1&-2&1&1 \end{pmatrix} \begin{pmatrix} i\\j\\k\\l \end{pmatrix}_{\alpha} \end{align*} Its domain can also be found by requiring that the category remains unchanged, $i'l'-j'k'\ge 0 \Rightarrow ik+il+jk\ge i^2+ij+j^{2}$ and so we have the identity, $T_{3,{\beta}}^{-1}=M_xT_{3,{\alpha}}^{-1}M_x$. In addition, as shown in Figure~\ref{fig:T3}, the $T_3$ transformation always decrease $|\theta|$ by more than $\pi/3$; while $T_3^{-1}$ always increase $|\theta|$ by more than $\pi/3$. \begin{figure} \includegraphics[width=15cm]{figureT3.pdf} \caption{ An illustration of $T_{3}$ symmetry. $T_{3}$ transform the partition $\{\protect\overrightarrow{OA},\protect\overrightarrow{OB}\}_{\alpha} =\{1,1,-4,0\}_{\alpha}$ to $\{\protect\overrightarrow{OA},\protect\overrightarrow{OF}\}_{\alpha} =\{1,1,-1,4\}_{\alpha}$, which can be also written as $T_{3,{\alpha}}\{1,1,-4,0\}_{\alpha}=\{1,1,-1,4\}_{\alpha}$. Similarly we have $T_3^{-1}\{1,1,-1,4\}_{\alpha}=\{1,1,-4,0\}_{\alpha}$. \label{fig:T3}} \end{figure} \subsection{$T_{4}$ symmetry } Similar to the symmetry operations $T_2$ and $T_3$, the operation $T_4$ involves different ways of assigning fundamental polygons on the cantellated cube as shown in Figure~\ref{fig:T4}. In this case, we can see that two different fundamental polygons given by indices $\{\overrightarrow{OA},\overrightarrow{OB}\}_{\alpha}$ and $\{\overrightarrow{OF},\overrightarrow{OB}\}_{\alpha}$ are essentially equivalent in constructing an octahedral fullerene. The transformation $T_4$ does not change the category just like the transformation $T_3$. We can interchange $T_{4,{\alpha}}$ and $T_{4,{\beta}}$ by sandwiching them between the mirror transformation $M_x$. On the other hand, in contrast to the transformation $T_3$, $T_4$ changes the square base vector only. Therefore, both $T_3$ and $T_4$ will decrease $|\theta|$ by more than $\pi/3$. In other words, the square base vector will be rotated by more than $\pi/3$ and will not satisfy the canonical criterion $i>0\land j\ge 0$ any longer. However the whole index can be rotated back to satisfy the canonical criterion again whenever necessary. The explicit form for the symmetry operation $T_{4,{\alpha}} $ can be written as \begin{align*} T_{4,{\alpha}} : \begin{pmatrix} i\\j\\k\\l \end{pmatrix}_{\alpha} \to \begin{pmatrix} i'\\j'\\k'\\l' \end{pmatrix}_{\alpha}= \begin{pmatrix} 0&-1&1&-1\\1&1&1&2\\0&0&1&0\\0&0&0&1 \end{pmatrix} \begin{pmatrix} i\\j\\k\\l \end{pmatrix}_{\alpha}. \end{align*} Again, an constraint on domain $-ik-jk-jl\ge k^2+kl+l^2$ is necessary to ensure that the category stays unchanged. The inverse $T_{4,{\alpha}} ^{-1}$ can be defined as follows \begin{align*} T_{4,{\alpha}} ^{-1}: \begin{pmatrix} i\\j\\k\\l \end{pmatrix}_{\alpha} \to \begin{pmatrix} i'\\j'\\k'\\l' \end{pmatrix}_{\alpha}= \begin{pmatrix} 1&1&-2&-1\\-1&0&1&-1\\0&0&1&0\\0&0&0&1 \end{pmatrix} \begin{pmatrix} i\\j\\k\\l \end{pmatrix}_{\alpha}, \end{align*} and the constraint on the domain is $ik+il+jl\ge k^2+kl+l^2$. In summary, $C_6^n$, $T_3$, $T_4$ and their inverses do not change the categories, but $M_x$ and $T_2$ do. \begin{figure} \includegraphics[width=15cm]{figureT4.pdf} \caption{An illustration of $T_{4}$ symmetry. $T_{4}$ transform the partition $\{\protect\overrightarrow{OA},\protect\overrightarrow{OB}\}_{\alpha} =\{2,2,-2,0\}_{\alpha}$ to $\{\protect\overrightarrow{OF},\protect\overrightarrow{OB}\}_{\alpha} =\{-4,2,-2,0\}_{\alpha}$, which can be also written as $T_{4,{\alpha}}\{2,2,-2,0\}_{\alpha}=\{-4,2,-2,0\}_{\alpha}$. Two points connected by a grey line should be patch into one point. Four shaded triangle in (a) will merge into square $BCDE$ in (b) and four $P_4$ points in (a) will become one $P_4$ in (b) after patching. Note that since $P_4$ always carries a topological charge, the vector $\protect\overrightarrow{OF}$ does not correspond to $(-4,2)$ in (a).\label{fig:T4}} \end{figure} Although $T$-type symmetry operations are defined for type IV octahedral fullerenes, they can also be applied to three limiting cases. When $T$-type symmetry operations are applied to type I and type II octahedral fullerenes, they reduce to the geometric rotation $C_6^n$. And when they are applied to type III octahedral fullerenes, we have following identities, \begin{align*} T_2\{i,j,i,j\}_{X}&=\{i,j,-i,-j\}_{X'}\quad(X\neq X')\\ T_3^{-1}\{i,j,i,j\}_{X}&=\{i,j,-i,-j\}_{X}\\ C_6^3T_4^{-1}\{i,j,i,j\}_{X}&=\{i,j,-i,-j\}_{X}. \end{align*} These formulae will give a torus-like orbit. The details for the enumeration of these orbits are included in supporting information. \section{Conclusion} \label{sec:conclusion} In conclusion, we have developed a systematic cut-and-patch method to generate arbitrary fullerenes belonging to the octahedral point group. A unique four-component vector satisfying certain constraints and symmetry rules can be used to specify these octahedral fullerenes. This work on the octahedral fullerenes fits in the final piece of the jigsaw puzzle of all possible high symmetry caged fullerenes based on Platonic solids. Further investigation on the stability, elastic properties and electronic structures of these octahedral fullerenes and the possibility of using them to build periodic carbon Schwarzites are currently undergoing in our group\cite{tomanek,jin:2010x}. Finally, we also want to point out two observations: the ``Brazuca'' ball used in the World Cup is close to a very round octahedral sphere, while the fullerenes discussed in this paper are still far from a round sphere. The explanation for the first observation is given in a more general context by Delp and Thurston in a paper about the connection between clothing design and mathematics in the Bridges meeting three years ago.\cite{thurston} The most important factor that makes it possible to wrap six clover-shaped panels used in the ``Brazuca'' around a sphere smoothly is that the curved seams created by these interlocked 4-long-arms panels are quite evenly distributed on the sphere. Readers interested in this problem should go to that paper for details. The observation on the shape of octahedral fullerenes is also interesting. All of the three-dimensional geometries shown in this paper are obtained through their topological coordinates derived from the lowest three eigenvectors with single nodes by diagonalizing the corresponding adjacency matrices\cite{Fowler07}. Further investigations to rationalize how the distribution of the non-hexagons affects the shapes of octahedral fullerenes in order to obtain a round nanoscale ``Brazuca'' ball based on either elastic theory or quantum chemical calculations should be worth pursuing in the future.\cite{Fowler07,tomanek,siber,nelson} \section*{acknowledgements} The research was supported by the Ministry of Science and Technology, Taiwan. B.-Y. Jin thanks Center for Quantum Science and Engineering, and Center of Theoretical Sciences of National Taiwan University for partial financial supports. We also wish to thank Chern Chuang and Prof. Yuan-Chung Cheng for useful discussions and comments on this paper.
{ "attr-fineweb-edu": 2.832031, "attr-cc_en_topic": 0, "domain": "arxiv" }
BkiUdUE5qWTBLg9oTGkh
\section*{Abstract} We consider all Test matches played between $1877$ and $2010$ and One Day International (ODI) matches played between $1971$ and $2010$. We form directed and weighted networks of teams and also of their captains. The success of a team (or captain) is determined by the `quality' of wins and not on the number of wins alone. We apply the diffusion based PageRank algorithm on the networks to access the importance of wins and rank the teams and captains respectively. Our analysis identifies {\it Australia} as the best team in both forms of cricket $-$ Test and ODI. {\it Steve Waugh} is identified as the best captain in Test cricket and {\it Ricky Ponting} is the best captain in the ODI format. We also compare our ranking scheme with the existing ranking schemes which include the Reliance ICC Ranking. Our method does not depend on `external' criteria in ranking of teams (captains). The purpose of this paper is to introduce a revised ranking of cricket teams and to quantify the success of the captains. \section{Introduction} The study of social networks, representing interactions between humans or groups, is a subject of broad research interest. In recent years, tools from network analysis have been applied to sports. For example, \cite{duch10} developed a network approach to quantify the performance of individual players in soccer. \cite{onody04} studied the complex network structure of Brazilian soccer players. \cite{heuer10} introduced a general model-free approach to elucidate the outcome of a soccer match. Network analysis tools have been applied to football (\cite{girvan02}; \cite{naim05}), baseball (\cite{petersen08}; \cite{sire09}) and basketball (\cite{naim07}; \cite{skinner10}). \cite{saavedra09} studied the head-to-head matchups between Major League Baseball pitchers and batters as a bipartite network (\cite{yellen}). The advantage of a network representation of any real system is that it gives the global view of the entire system and the interaction between individuals reflecting self-emergent phenomena. In this paper we apply tools of social network analysis to cricket. Cricket is a popular sport around the world and is played mostly in the erstwhile English colonies. Its popularity is the highest in the Indian subcontinent. Despite series of controversies involving match fixing, spot fixing and ball tampering, the sport has managed to maintain international attention as well research interests(\cite{Bailey2004}, \cite{Vani2010}, \cite{Bracewell2009}). Currently there are ten countries that have been granted Test status by International Cricket Council (ICC) - Australia (AUS), Bangladesh (BAN), England (ENG), India (IND), New Zealand (NZ), Pakistan (PAK), South Africa (SA), Sri Lanka (SL), West Indies (WI) and Zimbabwe (ZIM). The Reliance ICC Rankings is the official guide used to evaluate the performance of teams as well as the players. Ranking schemes are based on points that are acquired by a team after a tournament. As mentioned by \cite{Vani2010}, due to the opacity of the ranking schemes, the methods used by ICC are still not comprehensible. Again in cricket the captain is responsible for the team. Before the game starts the home captain tosses the coin and the touring captain calls heads or tails. The captain chooses the batting order, sets up fielding positions and shoulders the responsibility of on-field decision-making. Thus the outcome of a match depends on the captain's decisions. Additionally, the captain is also responsible at all times for ensuring that play is conducted within the Spirit of the Game as well as within the Laws \footnote{http://www.lords.org/laws-and-spirit/laws-of-cricket/preamble-to-the-laws,475,ar.html}. In this sense, the success of a team depends on the captain. However, currently there exist no ranking schemes to rank the cricket captains. In this paper we numerically estimate the success of a team as well as the captain by analyzing the network of interaction of competing teams and also the captains. The primary goal of the paper is to elucidate the impact of network structure on rankings of teams and also that of the cricket captains. While the number of wins is a natural measure for success of a team, it does not provide a full picture of the `quality' of win. We are thus motivated to study an alternative method to assess the quality of a win. For example, a win against Australia or South Africa carries more importance than a win against a lesser team. This is analogous to the citation networks in which the effect of citation coming from an important paper is greater than that coming from a less popular one. The PageRank algorithm (\cite{brin1}), a network-diffusion-based algorithm has emerged as leading method to rank scientists (\cite{radicchi09a}), papers (\cite{chen07}). More recently \cite{radicchi11} applied PageRank algorithm to rank tennis players. In this paper we apply the PageRank algorithm to rank cricket teams and also identify the most successful cricket captain. The rest of the paper is organized as follows. In Section 2, we define and characterize the cricket-team network and provide a description of the PageRank algorithm that we employ as a ranking scheme across eras and also in the history of cricket ($1877-2010$). In Section 3, we discuss the results and we conclude in Section 4. \section{Network of Cricket Teams} \begin{figure}[!ht] \begin{center} \includegraphics[width=4in]{Figure0_demo.eps} \end{center} \caption{{\bf The network of three competing cricket teams.} Three teams {\bf A}, {\bf B} and {\bf C} compete against each other. If {\bf A} defeats {\bf B}, a directed link is established from {\bf B} to {\bf A}. The thickness of the link is proportional to the fraction of wins between {\bf A} and{\bf B}. Thus if we consider all the competing teams a weighted and directed network is established. } \label{fig1} \end{figure} Data were collected from the website of cricinfo ({\it http://www.espncricinfo.com/}). We downloaded the information of results and also the captains who led their respective teams from the score-cards. For a single match, the score-card keeps track of information about the teams, the runs scored by batsmen, wickets taken by bowlers, the names of captains who led their respective teams and the result of a match. We collected the data for Test matches ($1877 - 2010$) and One Day International (ODI) cricket ($1971 - 2010$). In our analysis we have excluded the matches with no results and matches which were abandoned. We analyze the network of cricket teams by analyzing the head-to-head encounter of competing teams. A single match is represented by a link between two opponents. Thus if team $i$ wins against team $j$, a directed link is drawn from $j$ to $i$ ( Figure~\ref{fig1} ). A weighted representation of the directed network is obtained by assigning a weight $w_{ji}$ to the link, where $w_{ji}$ is equal to the fraction of times team $j$ wins against team $i$. We quantify the relevance of matches with the use of a complex network approach equivalent to the one used for the computation of the PageRank score. Mathematically, the process is described by the following set of equations \begin{equation} p_i = \left(1-q\right) \sum_j \, p_j \, \frac{w_{ji}}{s_j^{\textrm{out}}} + \frac{q}{N} + \frac{1-q}{N} \sum_j \, \delta \left(s_j^{\textrm{out}}\right) \;\; , \label{eq:pg} \end{equation} where $w_{ji}$ is the weight of a link and $s_{j}^{out}$ = $\Sigma_{i} {w_{ji}}$ is the out-strength of a link. $p_i$ is the PageRank score assigned to team $i$ and represents the fraction of the overall ``influence'' sitting in the steady state of the diffusion process on vertex $i$ (\cite{radicchi11}). In Eqs.~(\ref{eq:pg}), $q \in \left[0,1\right]$ is a control parameter that accounts for the importance of the various terms contributing to the score of the nodes and $N$ is the total number of teams in the network. The term $ \left(1-q\right) \, \sum_j \, p_j \, \frac{w_{ji}}{s_j^{\textrm{out}}}$ represents the portion of the score received by node $i$ in the diffusion process according to the hypothesis that vertices redistribute their entire credit to neighboring nodes. $\frac{q}{N}$ stands for a uniform redistribution of credit among all nodes. The term $\frac{1-q}{N} \, \sum_j \, p_j \, \delta\left(s_j^{\textrm{out}}\right)$ serves as a correction in the case of the existence of dangling nodes (i.e., nodes with null out-degree), which otherwise would behave as sinks in the diffusion process. \section{Results} \begin{figure}[!ht] \begin{center} \includegraphics[width=5.5in]{Testteamsall_v1.eps} \end{center} \caption{ The network of teams in the history of Test cricket ($1877-2010$). } \label{fig2} \end{figure} Traditionally, the choice of $q$ is set at $0.15$ (\cite{brin1}). Hence, we set $q=0.15$ and run the ranking scheme on networks of cricket teams and also on their captains. In Table~\ref{table1}, we report the results obtained from analysis of network of cricket teams for Test cricket. We identify {\it Australia} as the most successful team in history of Test cricket. Even though {\it South Africa} was banned from playing international cricket from $1970 - 1991$, it emerges as the second best team followed by {\it England}, {\it West Indies}, {\it Pakistan}, {\it India}, {\it Sri Lanka}, {\it New Zealand}, {\it Zimbabwe} and {\it Bangladesh}. Table~\ref{table2} shows the ranking of teams in history of ODI cricket ($1971 - 2010$). Again, {\it Australia} emerges as the best ODI team ever followed by {\it South Africa}, {\it West Indies}, {\it England}, {\it Pakistan}, {\it India}, {\it New Zealand}, {\it Sri Lanka}, {\it Zimbabwe} and {\it Bangladesh}. The success of Australia could be justified by the dominance of {\it Australia} in International cricket for a long period of time. {\it Australia} won test series in all the countries and also won four ICC World cups in $1987$, $1999$, $2003$ and $2007$. \newline \begin{figure}[!ht] \begin{center} \includegraphics[width=5.5in]{skippertop20_v1.eps} \end{center} \caption{ Subraph of the network of most succesful captains in the history of Test cricket ($1877-2010$). } \label{fig3} \end{figure} We also report the results obtained from the analysis of the network of competing captains(See Table~\ref{table3}). {\it Steve Waugh} heads the top $20$ list of most successful captains in Test cricket. The success of {\it Steve Waugh} could be {\it posteriori} justified by the fact that he led {\it Australia} in $15$ of their world-record $16$ successive Test victories. Over all {\it Steve Waugh} won $72\%$ of the Test matches he captained. It is interesting to note that $8$ of the top $20$ captains are from {\it Australia}. South Africa's {\it Graeme Smith} emerges as the second best captain with {\it Ricky Ponting} occupying the third position. From the subcontinent only India's {\it M. S. Dhoni} and {\it Sourav Ganguly} finds a place in the top $20$ list. We also perform a similar analysis in ODI cricket (See Table~\ref{table4}). This time {\it Ricky Ponting } emerges as the best captain in ODI history, followed by {\it Graeme Smith} (South Africa) in second place and {\it Imran Khan} (Pakistan) in the third. {\it Ricky Ponting}'s success as a captain in the ODI format is marked by two successive World Cup wins in $2003$ and $2007$, with a world-record of $34$ consecutive undefeated World Cup games. Under his captaincy {\it Australia} also won the Champions trophy in $2006$ and successfully defended the title in $2009$. Contrary to the list in Test cricket, several of the successful captains in the ODI format are from the subcontinent. \newline We also perform a different kind of analysis by constructing networks of teams and their captains in different eras. In Table~\ref{table5a} and Table~\ref{table5b} we report the ranking of teams in different era of Test cricket respectively. We compare our ranking with Reliance ICC Team Ranking\footnote{ The Reliance ICC Team Rankings were launched for ODI cricket in 2002 and for Test cricket in 2003.}. The table of historical ranking of teams, available at ICC's website($http://icc-cricket.yahoo.net/match\_zone/historical\_ranking.php$), begins from $1951$ for Test cricket and from $1981$ for ODI cricket. We rank the teams according to the average of the points scored by any team. \newline During the period $1877-1951$, {\it Australia} emerged as the most successful team. Between $1952$ and $1960$ {\it Australia} was the most successful team according to the PageRank algorithm and also ICC's ranking scheme. During $1961-1970$ {\it West Indies} was the best team according to ICC ranking. Even though the early 1960s were poor periods for {\tt England}, during the late 60's {\it England} defeated stronger opponents like {\it West Indies} and {\it Australia}. Hence judging by the quality of wins, according to PageRank during $1961-1970$ England was the most successful team. A similar effect is also observed during the $1971-1980$ era, where {\it India} occupies the second position according to PageRank. During the same period {\it India} defeated stronger opponents like {\it West Indies} and {\it England}. \newline Both ranking schemes show {\it West Indies} was the best team between $1981$ and $1990$. Their best period was between February 1981 and December 1989: in $69$ Tests in that span, they had a $40$-$7$ win-loss record, with victories against {\it Australia}, {\it England}, {\it New Zealand} and {\it India}. During the same span, {\tt Pakistan} was victorious against quality opposition like {\it Australia}, {\it England}, and {\it India}. We observe that both ranking schemes predict {\it Australia} as the best team since then. The dominance of {\it Australia} in both decades is also reflected in the fact that between October $1999$ and November $2007$, they played $93$ Tests, and won $72$ of them with $72$-$10$ win-loss record. The ranking of other teams according to PageRank does not correspond to those of ICC Ranking. During $1991-2000$ {\it India} occupies the third position according to PageRank score, instead of {\it West Indies}. Similarly, between $2001$ and $2010$, {\it India} occupies the second position according to PageRank, whereas according to the ICC Ranking {\it South Africa} occupies the second spot. \newline We report a similar ranking of teams in ODI cricket in different era in Table~\ref{table6a}. We observe that {\it West Indies} was the best team throughout the 70's and 80's. PageRank score shows that {\it South Africa} was the best team in the 90's and {\it Australia} is the best team from $2000-2010$. According to ICC Ranking {\it Australia} is the most successful team during the 1990s and also from $2000-2010$. We observe strong correlation between PageRank score and Reliance ICC Ranking and fraction of victories (in-strength rank). We compare the overall ranking of teams playing Test cricket ($1952-2010$) and ODI cricket ($1981-2010$). Figure~\ref{fig4}(a) shows that between $1952$ and $2010$ {\it South Africa} is the best team according to PageRank score, where as {\it Australia} is the best team according to Reliance ICC Ranking. We observe strong correlation between the ranking schemes for ODI cricket ($1981-2010$) (as shown in Figure~\ref{fig4}(b)). According to PageRank score and in-strength the top three positions in Test cricket ($1877-2010$), are occupied by {\it Australia}, {\it South Africa} and {\it England} respectively (see Figure~\ref{fig4}(c)). In ODI cricket ($1971-2010$), {\it Australia} emerges as the best team according to PageRank score as well as in-strength. In Figure~\ref{fig5} we show the correlation among different ranking schemes as function of time. \newline We provide a ranking of captains in Test cricket (Table~\ref{table5c}) and ODI cricket (Table~\ref{table6b}) in different era. Between $1877$ and $1951$ {\it Bill Woodfull} (Australia) is the most successful captain with {\it Sir Don Bradman} occupying the second position. {\it Richie Benaud} (Australia) leads the list twice during $1952-1960$ and $1961-1970$. During the period $1971-1980$ {\it Ian Chappell} occupies the top position as captain, with {\it Clive Lloyd} occupying the second position. From $1981-1990$ West Indies was the most successful team and {\it Sir Vivian Richards} was the most successful captain. {\tt Mark Taylor} (Australia) is the best captain between $1991$ and $2000$ and {\it Graeme Smith} (South Africa) emerge as the best captain during $2001-2010$. In ODI cricket Australia's {\it Greg Chappell} emerge as the most successful captain between $1971$ and $1980$. {\it Clive Lloyd} occupy the second position during that period. Pakistan's {\it Imran Khan} leads the list during the $1981-1990$ era. South Africa's {\it Hansie Cronje} was the most successful captain from $1991-2000$. During the period $2000-2010$ {\it Ricky Ponting} is the most successful captain followed by South Africa's {\it Graeme Smith} and India's {\it M.S.Dhoni}. In Figure~\ref{fig6} we show the correlation among the two ranking schemes for captains. \begin{table} \centering \caption{{\bf Most successful teams in history of Test cricket ($1877 - 2010$}). The teams are ranked according to the PageRank score of each team.} \begin{tabular}{ll} \hline \textbf{Rank} & \textbf{Team} \\ \hline $1$ & Australia \\ $2$ & South Africa \\ $3$ & England \\ $4$ & West Indies \\ $5$ & Pakistan \\ $6$ & India \\ $7$ & Sri Lanka \\ $8$ & New Zealand \\ $9$ & Zimbabwe \\ $10$ & Bangladesh \\ \hline \end{tabular} \label{table1} \end{table} \begin{table} \centering \caption{{\bf Most successful teams in the history of ODI cricket ($1971 - 2010$). } The teams are ranked according to the PageRank score of each team.} \begin{tabular}{ll} \hline \textbf{Rank} & \textbf{Team} \\ \hline $1$ & Australia \\ $2$ & South Africa \\ $3$ & West Indies \\ $4$ & England \\ $5$ & Pakistan \\ $6$ & India \\ $7$ & New Zealand \\ $8$ & Sri Lanka \\ $9$ & Zimbabwe \\ $10$ & Bangladesh \\ \hline \end{tabular} \label{table2} \end{table} \begin{table} \centering \caption{\textbf{Top twenty captains in Test cricket ($1877-2010$). } We also provide the nationality of the captain. The captains are ranked according to the PageRank score of each captain.} \begin{tabular}{lll} \hline \textbf{Rank} & \textbf{Captain} & \textbf{Country} \\ \hline $1$ & Steve Waugh & Australia \\ $2$ & Graeme Smith & South Africa \\ $3$ & Ricky Ponting & Australia \\ $4$ & Greg Chappel & Australia \\ $5$ & Richie Benaud & Australia \\ $6$ & Clive Lloyd & West Indies \\ $7$ & Ian Chappel & Australia \\ $8$ & Allan Border & Australia \\ $9$ & M. S. Dhoni & India \\ $10$ & Nasser Hussain & England \\ $11$ & Peter May & England \\ $12$ & Bill Woodfull & Australia \\ $13$ & Sir Vivian Richards & West Indies \\ $14$ & Sir Frank Worell & West Indies \\ $15$ & Sourav Ganguly & India \\ $16$ & Kim Hughes & Australia \\ $17$ & Ray Illingworth & England \\ $18$ & Geoff Howarth & New Zealand \\ $19$ & Andrew Strauss & England \\ $20$ & Stephen Fleming & New Zealand \\ \hline \end{tabular} \label{table3} \end{table} \begin{table} \centering \caption{\textbf{Top twenty captains in ODI cricket ($1971-2010$). }We also provide the nationality of the captain. The captains are ranked according to the PageRank score of each captain.} \begin{tabular}{lll} \hline \textbf{Rank} & \textbf{Captain} & \textbf{Country} \\ \hline $1$ & Ricky Ponting & Australia \\ $2$ & Graeme Smith & South Africa \\ $3$ & Imran Khan & Pakistan \\ $4$ & Hansie Cronje & South Africa \\ $5$ & Arjuna Ranatunga & Sri Lanka \\ $6$ & Stephen Fleming & New Zealand \\ $7$ & Clive Lloyd & West Indies \\ $8$ & M. S. Dhoni & India \\ $9$ & Sir Vivian Richards & West Indies \\ $10$ & Kapil Dev & India \\ $11$ & Allan Border & Australia \\ $12$ & Mahela Jayawardene & Sri Lanka \\ $13$ & Brian Lara & West Indies \\ $14$ & Daniel Vettori & New Zealand \\ $15$ & Paul Collingwood & England \\ $16$ & Sourav Ganguly & India \\ $17$ & Mohammad Azharuddin & India \\ $18$ & Rahul Dravid & India \\ $19$ & Javed Miandad & Pakistan \\ $20$ & Wasim Akram & Pakistan \\ \hline \end{tabular} \label{table4} \end{table} \begin{table} \centering \caption{{\bf Ranking of teams in different era in Test history.} We have shown the ranking from $1877 - 1980$. There exist no ICC ranking during $1877-1950$. } \begin{tabular}{ccc} \toprule \multirow{2}{*}{} \textbf{Era} & \textbf{PageRank} &\textbf{Reliance ICC-Ranking} \\ \midrule \multirow{6}{*}{\textbf{1877-1950}} & \multirow{6}{*}{} & \multirow{6}{*}{\textbf{-NA-}} \\ & Australia &\\ & England & \\ & West Indies & \\ & South Africa & \\ & New Zealand & \\ & India & \\ \midrule \multirow{6}{*}{\textbf{1951-1960}} & \multirow{6}{*}{} Australia & Australia \\ & England & England \\ & Pakistan & West Indies \\ & West Indies & South Africa \\ & South Africa & Pakistan \\ & India & India\\ & New Zealand & New Zealand \\ \midrule \multirow{7}{*}{\textbf{1961-1970}} & \multirow{6}{*}{} England & West Indies \\ & West Indies & Australia \\ & Australia & England \\ & New Zealand & South Africa\\ & South Africa & India \\ & India & Pakistan \\ & Pakistan & New Zealand \\ \midrule \multirow{7}{*}{\textbf{1971-1980}} & \multirow{6}{*}{} Australia & Australia \\ & India & England \\ & West Indies & Pakistan \\ & England & West Indies \\ & Pakistan & India \\ & New Zealand & New Zealand \\ \bottomrule \end{tabular} \label{table5a} \end{table} \begin{table} \centering \caption{{\bf Ranking of teams in different era in Test history.} We have shown the ranking from $1981 - 2010$. } \begin{tabular}{ccc} \toprule \multirow{2}{*}{} \textbf{Era} & \textbf{PageRank} &\textbf{Reliance ICC-Ranking}\\ \midrule \multirow{8}{*}{\textbf{1981-1990}} & \multirow{8}{*}{} West Indies & West Indies \\ & Pakistan & Pakistan \\ & Australia & New Zealand \\ & New Zealand & Australia \\ & England & India \\ & India & England\\ & Sri Lanka & Sri Lanka \\ & Zimbabwe & Zimbabwe \\ \midrule \multirow{10}{*}{\textbf{1991-2000}} & \multirow{10}{*}{} Australia & Australia \\ & South Africa & South Africa \\ & India & West Indies\\ & West Indies & Pakistan \\ & Pakistan & India \\ & England & England \\ & New Zealand & Sri Lanka \\ & Sri Lanka & New Zealand \\ & Zimbabwe & Zimbabwe\\ & Bangladesh & Bangladesh \\ \midrule \multirow{10}{*}{\textbf{2001-2010}} & \multirow{10}{*}{} Australia & Australia \\ & India & South Africa \\ & South Africa & India \\ & England & England \\ & Sri Lanka & Sri Lanka\\ & Pakistan & Pakistan\\ & New Zealand & New Zealand \\ & West Indies & West Indies \\ & Zimbabwe & Zimbabwe\\ & Bangladesh & Bangladesh\\ \bottomrule \end{tabular} \label{table5b} \end{table} \begin{table} \centering \caption{{\bf Ranking of teams in different era in ODI history.} We construct network of teams for each era. The teams are then ranked according to the PageRank score and compared with the Reliance ICC Ranking of Teams. During the period $1981-1990$ Zimbabwe and Bangladesh receive no points in the Reliance ICC Ranking and hence their ranks are not listed.} \begin{tabular}{ccc} \toprule \multirow{2}{*}{} \textbf{Era} & \textbf{PageRank} &\textbf{Reliance ICC-Ranking} \\ \midrule \multirow{8}{*}{\textbf{1971-1980}} & \multirow{8}{*}{} & \multirow{8}{*}{\textbf{-NA-}} \\ & West Indies & \\ & Australia & \\ & England & \\ & New Zealand & \\ & Pakistan &\\ & India &\\ & Sri Lanka & \\ \midrule \multirow{8}{*}{\textbf{1981-1990}} & \multirow{8}{*}{} West Indies & West Indies \\ & Australia & Australia \\ & England & England\\ & Pakistan & Pakistan\\ & India & India\\ & New Zealand & New Zealand\\ & Sri Lanka & Sri Lanka\\ & Zimbabwe & $-$\\ & Bangladesh & $-$ \\ \midrule \multirow{10}{*}{\textbf{1991-2000}} & \multirow{10}{*}{} South Africa & Australia \\ & Australia & South Africa \\ & Pakistan & Pakistan \\ & England & West Indies\\ & Sri Lanka & England \\ & West Indies & India \\ & India & Sri Lanka \\ & New Zealand & New Zealand \\ & Zimbabwe & Zimbabwe\\ & Bangladesh & Bangladesh \\ \midrule \multirow{10}{*}{\textbf{2001-2010}} & \multirow{10}{*}{} Australia & Australia \\ & South Africa & South Africa \\ & India & Sri Lanka \\ & Sri Lanka & Pakistan \\ & Pakistan & India \\ & New Zealand & New Zealand\\ & England & England \\ & West Indies & West Indies \\ & Bangladesh & Zimbabwe \\ & Zimbabwe & Bangladesh \\ \bottomrule \end{tabular} \label{table6a} \end{table} \begin{figure}[!ht] \begin{center} \includegraphics[width=5.5in]{Figure3_v1.eps} \end{center} \caption{{\bf Relation between different ranking schemes. } {\bf (A)} Scatter plot between the rank positions obtained according to Reliance ICC Ranking and those obtained with PageRank for Test cricket ($1952-2010$); (Kendall ${\tau}=0.644$, Spearman correlation ${\rho}=0.818$). {\bf (B)} Scatter plot between the rank positions obtained according to Reliance ICC Ranking and those obtained with PageRank for ODI cricket ($1981-2010$); (${\tau}=1.0$, ${\rho}=1.0$). {\bf (C)} Scatter plot between the rank positions obtained according to in-strength and those obtained with PageRank for Test cricket ($1877-2010$); (${\tau}=0.867$, ${\rho}=0.927$). {\bf (D)} Scatter plot between the rank positions obtained according to in-strength and those obtained with PageRank for ODI cricket ($1971-2010$); (${\tau}=0.644$, ${\rho}=0.709$). } \label{fig4} \end{figure} \begin{figure}[!ht] \begin{center} \includegraphics[width=5.5in]{Figure4_v1.eps} \end{center} \caption{{\bf Correlation among different ranking schemes.} {\bf (A)} Spearman correlation coefficient (red) and Kendall $\tau$ (blue), between the ranking based on PageRank and the one based on the Reliance ICC Ranking, as function of time, for Test matches ($1952-2010$). {\bf (B)} The correlation coefficients are calculated between the ranking based on PageRank and the one Reliance ICC Ranking for ODI matches ($1981-2010$). {\bf (C)} The correlation coefficients are calculated between the ranking based on PageRank and In-strength for Test matches ($1952-2010$). {\bf (D)} The correlation coefficients are calculated between the ranking based on PageRank and In-strength for ODI matches ($1981-2010$).} \label{fig5} \end{figure} \begin{table} \centering \caption{{\bf Ranking of captains in different era in Test history.} We have shown the ranking of top five captains between $1877 - 2010$ as well as their nationality. A network of competing captains are generated for each era. We run the ranking procedure and rank the captains according to their PageRank score.} \begin{tabular}{ccc} \toprule \multirow{2}{*}{} \textbf{Era} & \textbf{Top five captains} & \textbf{Country}\\ \midrule \multirow{4}{*}{\textbf{1877-1950}} & \multirow{4}{*}{} Bill Woodfull & Australia \\ & Sir Donald Bradman & Australia \\ & John Goddard & West Indies \\ & Sir Gubby Allen & England \\ & Normal Yardley & England \\ \midrule \multirow{4}{*}{\textbf{1951-1960}} & \multirow{4}{*}{} Richie Benaud & Australia \\ & Gulabrai Ramchand & India \\ & Peter May & England \\ & Abdul Kardar & Pakistan \\ & Lindsay Hassett & Australia \\ \midrule \multirow{4}{*}{\textbf{1961-1970}} & \multirow{4}{*}{} Richie Benaud & Australia \\ & Sir Frank Worrell & West Indies \\ & Bob Simpson & Australia \\ & Ted Dexter & England \\ & Sir Garry Sobers & West Indies \\ \midrule \multirow{4}{*}{\textbf{1971-1980}} & \multirow{4}{*}{} Ian Chappel & Australia \\ & Clive Lloyd & West Indies \\ & Greg Chappell & Australia \\ & Ray Illingworth & England \\ & Mike Denness & England \\ \midrule \multirow{4}{*}{\textbf{1981-1990}} & \multirow{4}{*}{} Sir Vivian Richards & West Indies \\ & Allan Border & Australia \\ & Greg Chappell & Australia \\ & Clive Lloyd & West Indies \\ & Geoff Howarth & New Zealand \\ \midrule \multirow{4}{*}{\textbf{1991-2000}} & \multirow{4}{*}{} Mark Taylor & Australia \\ & Hansie Cronje & South Africa \\ & Allan Border & Australia \\ & Mike Atherton & England \\ & Steve Waugh & Australia \\ \midrule \multirow{4}{*}{\textbf{2001-2010}} & \multirow{4}{*}{} Graeme Smith & South Africa \\ & Ricky Ponting & Australia \\ & Steve Waugh & Australia \\ & M. S. Dhoni & India \\ & Sourav Ganguly & India \\ \bottomrule \end{tabular} \label{table5c} \end{table} \begin{table} \centering \caption{{\bf Ranking of captains in different era in ODI history.} A network of teams is generated for each era. We then run the PageRank algorithm on each network which gives a PageRank score. The teams are then ranked according to their PageRank score. We have shown the ranking of top five captains between $1971 - 2010$ as well as their nationality.} \begin{tabular}{ccc} \toprule \multirow{2}{*}{} \textbf{Era} & \textbf{Top five captains} & \textbf{Country}\\ \midrule \multirow{4}{*}{\textbf{1971-1980}} & \multirow{4}{*}{} Greg Chappell & Australia \\ & Clive Lloyd & West Indies \\ & Geoff Howarth & New Zealand \\ & Mike Brearley & England \\ & Sunil Gavaskar & India \\ \midrule \multirow{4}{*}{\textbf{1981-1990}} & \multirow{4}{*}{} Imran Khan & Pakistan \\ & Sir Vivian Richards & West Indies \\ & Kapil Dev & India \\ & Allan Border & Australia \\ & Javded Miandad & Pakistan \\ \midrule \multirow{4}{*}{\textbf{1991-2000}} & \multirow{4}{*}{} Hansie Cronje & South Africa \\ & Arjuna Ranatunga & Sri Lanka \\ & Mohammad Azharuddin & India \\ & Wasim Akram & Pakistan \\ & Richie Richardson & West Indies \\ \midrule \multirow{4}{*}{\textbf{2001-2010}} & \multirow{4}{*}{} Ricky Ponting & Australia \\ & Graeme Smith & South Africa \\ & M. S. Dhoni & India \\ & Stephen Fleming & New Zealand \\ & Mahela Jayawardene & Sri Lanka \\ \bottomrule \end{tabular} \label{table6b} \end{table} \begin{figure}[!ht] \begin{center} \includegraphics[width=5.5in]{skippercorrelationv1.eps} \end{center} \caption{{\bf Relation between PageRank and in-strength Rank for captains. } {\bf (A)} Scatter plot between the rank positions obtained according to in-strength and those obtained with PageRank for Test cricket ($1952-2010$); (Kendall ${\tau}=0.734$, Spearman correlation ${\rho}=0.892$). {\bf (B)} Scatter plot between the rank positions obtained according to in-strength and those obtained with PageRank for ODI cricket ($1981-2010$); (${\tau}=0.836$, ${\rho}=0.948$). } \label{fig6} \end{figure} \section{Conclusion} Our work demonstrates the strength of social network analysis methods in quantifying the success of cricket teams and their captains. Here we have created a directed and weighted network of contacts (i.e, teams and captains). The correct assessment of a team's success (or captain's success) needs the consideration of the entire network of interaction. The PageRank algorithm takes into account the quality of matches won. For example, a win against a strong team is more important than a win against a weak team. Also a captain is as good as the team. In this sense, a win against {\it Clive Lloyd}, {\it Steve Waugh} or {\it Graeme Smith} is more relevant than a win against a lesser captain. Our analysis shows that PageRank algorithm is effective in finding the most successful team and captain in the history of cricket. \newline It should be noted that success of a team or a captain depends on various factors like home advantage, success of batsmen and bowlers. For example, Australia's dominance in both forms of the game is a manifestation of the fact that they are able to adjust in all kinds of pitches around the world, whereas subcontinent teams always played well under subcontinent conditions but were not able to repeat their performance abroad on a consistent basis. Our analysis does not require these `external' factors which are usually taken into account in ICC rankings. However, we would like to mention that our method does not aim to replace the ICC ranking. It suggests a novel approach to refine the existing ranking scheme. \newline We would like to state that cricket is a team game. Success or failure of a team depends on the collective performance of all team members. Simple statistics like runs scored by batsmen, wickets taken by bowlers or exceptional fielding does not provide a reliable measure of a player's contribution to the team's cause. Quantifying the impact of player's individual performance in sports has been a topic of interest in soccer (\cite{duch10}) and baseball (\cite{saavedra09}). However, in cricket the rules of the game are different and therefore it would be interesting to apply tools of network analysis on interaction between players. For example, a contact network of batsman $vs.$ bowler could give an estimate of the greatest batsman (bowler) ever. Potentially, a quantitative approach to a player's performance could be used to estimate the Man of the Match (Series) award after a tournament. \section*{Acknowledgement} We thank the cricinfo website for the public availability of information on cricket matches. We also gratefully acknowledge helpful discussions with Rufaro Mukogo, David Mertens and Xiaohan Zeng.
{ "attr-fineweb-edu": 2.019531, "attr-cc_en_topic": 0, "domain": "arxiv" }
BkiUdWU5qhDBSUjwh57s
\section{Introduction} Pep Guardiola, current Manchester City's soccer coach and former Futbol Club Barcelona's, said once that elder people claim that in yesteryear soccer you had to control the ball, then look and turn around, and finally, make the pass, while in today's faster version of soccer, players need first to look (and orient correctly) before controlling and passing the ball. Therefore, getting orientation metrics may help coaches to boost the performance of a team by designing optimal tactics according to players' strengths and weaknesses. However, the concept of orientation is a complex concept without an exact definition, and during a soccer game, there are a total of/up to 22 players oriented in their own way at any given time during 90 minutes. In order to avoid the so-called concept of \textit{paralysis by analysis}, in this paper soccer events are filtered, hence including just pass events, which are the ones in where orientation takes the most important role according to Guardiola's words. The main contribution of this research is a computational model that, for each pass event, outputs the feasibility of receiving the ball for each potential candidate of the offensive team. The proposed model combines three different types of feasibility measures, defined on the grounding assumption that, among all potential receivers, the passer will move the ball to the (a) best oriented, (b) less defended and (c) closest available player. Orientation is obtained through a Computer Vision state-of-the art method \cite{arbuesOrientation}, which outputs an orientation value for each player by projecting the upper-torso pose parts in a 2D field. On top of these data, a novel feasibility measure is introduced to describe how good/bad the orientation fit between a passer and a potential receiver is. Given the location of all defenders, another feasibility metric is defined to establish how tough it is for the passer to move the ball to a particular player; this metric takes into account the distance of all defenders with respect to the passing line, which is defined by the relative angle in the 2D field that joins the passer and the receiver. Finally, pairwise distances among offensive players are used to construct a third feasibility measure based on the separation between players, hence assuming that players close to the ball have higher chances of receiving it than farther ones.\\ Results, expressed with Top-1 and Top-3 accuracy, show that the combination of all feasibility measures outperforms any of their individual performances, and that the model strongly benefits from the inclusion of the orientation feasibility measure. Moreover, existing state-of-the-art (SoA) models have been tested and compared, both before and after adding orientation as a feature to predict the outcome of passes, obtaining promising results which show that models can be confidently refined by adding these type of data. \\ The rest of the paper is organized as follows: in Section \ref{sec:SoA}, the related research is analyzed, including the details of the methods this research stems from; the proposed computational model is described in Section \ref{sec:ProMet}, along with all technical details. Feasibility results, discussion and possible combinations are studied in Section \ref{sec:Res}, and finally, conclusions are drawn in Section \ref{sec:Conc}. \section{Related work} \label{sec:SoA} Since the irruption of Moneyball \cite{lewis2004moneyball}, sports clubs started conducting research about applied data science with the main purpose of boosting team performance. More concretely, the inclusion of tracking data proved to be crucial for the design of team strategies, so computer vision became (and still is) a hot topic in this research field. Lately, many contributions have been made towards geometric and semantic sports analysis \cite{maksai2016players,bertasius2017baller,thomas2017computer,felsen2017will,shih2017survey,senocak2018part,wu2019learning,chen2019sports,dwibedi2019temporal,cioppa2019arthus,stein2019movement,ran2019robust}, mostly driven by direct applications that might be useful for coaches in order to prepare optimal tactics. In particular, recent contributions in soccer such as \cite{rematas2018soccer,cioppa2019arthus,chen2019sports,fernandez2019decomposing,chawla2017classification} managed to better explain this sport analytically through tracking data, among others. However, authors claim that there is still a lack of contextualization due to undefined variables, such as player body orientation. To the best of our knowledge, the only method that aims to extract player body orientation over soccer video footage was published by Arbués-Sangüesa \textit{et al.}~\cite{arbuesOrientation}. This method computes players' orientation by combining: (a) the angle of the player with respect to the ball, with (b) an estimation of the body orientation as a 2D projection of the normal vector to the upper-torso. In order to do so, this work first uses OpenPose \cite{ramakrishna2014pose, wei2016convolutional, cao2017realtime} over the soccer video footage to detect player's body keypoints. Moreover, a Support Vector Machine model (based on color and geometrical feature vectors) is applied in order to ensure that OpenPose parts are not swapped. This method achieves a median absolute error of 26 degrees/player, and three different types of orientation visualization tools are introduced: OrientSonars, Reaction and On-Field maps. In the presented article, this method is used to obtain the estimation of the orientation of each player on the 2D field. Moreover, soccer analysts have been struggling for many years to find a way to assign some value to the individual actions performed by each player, thus obtaining specific metrics for each move. Different passing probability models and the quantification of concepts such as the pass risk/reward are introduced in \cite{gyarmati2016qpass, link2016real, power2017not}, and deep analysis of passing strategies are studied in \cite{gyarmati2015automatic,szczepanski2016beyond,chawla2017classification}; more concretely, \cite{hubacek} proposes a passing prediction model based on an end-to-end CNN approach. Note that none of the previous models take orientation into account. Furthermore, given that the main reward of soccer players is to score a goal, and knowing that this type of action is a rare event during the 90 minutes of the game, Fernandez \textit{et al.} \cite{fernandez2019decomposing} introduced a new metric called Expected Possession Value (EPV), which already existed for basketball scenarios \cite{cervone2014pointwise}. The main objective of this metric is to predict an expected value of scoring/receiving a goal at a given time in any field position, based on a spatial analysis of the whole offensive and defensive setup at that moment; more concretely, in pass events, having a passer $P$, an EPV map can be computed for each field position $x\in\mathbb{R}^2$, which estimates the above-mentioned expected-value if $P$ passes the ball to $x$. The main EPV model consists of different likelihood components, especially emphasizing a passing probability model. In the present paper we will include a comparison and an analysis illustrating that those previous proposals can be improved by introducing player orientation information in the pass event analysis. \section{Proposed Pass-Orientation Model} \label{sec:ProMet} In this section, we propose a computational model to estimate the most plausible ball player pass at any given time based on the prior information that a player is going to execute a pass. To achieve this goal, we will attribute a feasibility score obtained by defining appropriate estimations that take into account player orientation and the configuration of the offensive and defensive team in the 2D field at that time. Intuitively, it stems from the fact that, in a pass event, there are 10 potential candidates of the same team who might receive the ball, each one of them holding a particular orientation with respect to the passer and at a certain position in the field. Let $u(\cdot,t)$ be a color video defined on $\Omega\times\left\{1,\dots, T\right\}$, where $\Omega\subset\mathbb{R}^2$ denotes the image frame domain and $\left\{1,\dots, T\right\}$ is the set of discrete times. Given a time $t$, our method first considers the visible players in $u(\cdot,t)$ (\textit{i.e.}, visible players in the image frame at time $t$) together with their body orientation. In this paper the detection of the players is given but, alternatively, a detector can be used such as, \eg,~\cite{ren2015faster,cioppa2019arthus,johnson2020sloan}. On the other hand, the orientation of the players in the 2D field is obtained with the method described in~\cite{arbuesOrientation} (for the sake of completeness, details have been given in previous Section \ref{sec:SoA}). From now on, the position and orientation of the players will be considered over a 2D field template. To simplify the notation, the dependence on $t$ of the considered elements will be omitted. Let $P$ denote the 2D position in the template field of the player with the ball at time $t$ who is going to execute the pass. Let $\{ R_i, \, i=1,\dots,I\}$ and $\{D_k, \, k=1,\dots,K\}$ denote, respectively, the 2D position in the field of the visible team-mates of $P$, and the current defenders at time $t$, with $I\leq 10, K\leq 11$. The former ones constitute the set of visible receivers of the ball at time $t+\Delta_{t}$, being $\Delta_{t}$ the duration of the pass. Let $H_i$ denote the prior or hypothesis that player $P$ is going to pass the ball to receiver $R_i$. The main idea is to define a feasibility measure which is grounded on three elements: (a) the body orientation of every player together with (b) the pressure of the defenders $D_k$, both on $P$ and $R_i$, and (c) the relative position of $R_i$ with respect to $P$. Then, the most feasible ball pass $\hat{H}$ is computationally selected as the one maximizing \begin{equation}\label{eq:maxF} \hat{H}=\arg \max_i F(i) , \end{equation} where $F(i)$ is the feasibility of the event pass in $H_i$, which can be defined as \begin{equation} \label{eq:feas} F(i)=F_o(i) F_d(i) F_p(i), \end{equation} where $F_o(i)$, $F_d(i)$, and $F_p(i)$ stand for the orientation, defenders and proximity scores, respectively, defined later in this section. Finally, it must be stated that all feasibility measures are obtained right at the moment when the passer $P$ kicks the ball. \subsection{Orientation} \label{sec:OrComp} One of the aspects that drastically affects the outcome of a pass is the players' body-orientation. If a player is relatively close to the passer and without being defended, he/she might still not be able to receive the ball properly if he/she is facing away. For a given pass event, the orientation of each player is computed using \cite{arbuesOrientation} in a window of $\pm Q$ frames with respect to the exact pass moment $t$. The median value of these $2Q+1$ observations is considered as the player orientation in the event at time $t$. In practice, a window of 5 frames is used in 25 fps videos. Once obtained this estimation, an orientation-based pass feasibility measure is proposed, which takes into account geometrical quantities and outputs a score of how well a player is oriented in order to receive the ball. In order to take only the orientation information into account (proximity between players will be considered in the 3rd feasibility measure, as seen in Subsection \ref{sec:subDistances}) all potential receivers $R_{i}$ are placed at the same distance with respect to the passer whilst preserving the original angle in the 2D field between the passer $P$ and each receiver $R_i$. Note that this angle is only related to relative position and not to player body orientation. This step is illustrated in Figure \ref{fig:Or1}. Once all potential receivers are placed at an equidistant distance $Z>0$ with respect to the passer, the body orientation of all players, expressed as $\phi(P)$ and $\phi(R_{i})$ for the passer and the receiver $i$ respectively is considered (it corresponds to red vectors in Figures \ref{fig:Or1} and \ref{fig:Or2}). Intuitively, $\phi(P)$ provides an insight of the passer field of view, and by setting a range of $\pm \psi$º with respect to the passer body orientation, an approximate spectrum of the passer field of view is obtained. By setting $\psi$º$>0$ to a fixed value (\textit{i.e.} 30 degrees), an isosceles triangle with the two equal sides of length $2Z$ is defined (see Figure \ref{fig:Or2}). This triangle is denoted by $T_P$ and imposes a limit to the region where the player can pass the ball. The same procedure is repeated for $\phi(R_{i})$, with the triangle $T_{R_i}$ indicating the field of view of the receiver, which shows in which directions he/she can get a pass from; the length of the two equal sides of triangle $T_{R_i}$ is set to $Z$. Figure \ref{fig:Or2} displays some possible scenarios. We claim, and numerically verify in Section~\ref{sec:IndPer}, that the weighted area of the intersection of triangles $T_P$ and $T_{R_i}$ gives a measure of how easy it can be for a player to receive a pass in the given configuration: no intersection indicates the inability to get it, whilst partial or total intersection indicates a proper orientation fit. Accordingly, the orientation-based feasibility is defined as \begin{equation}\label{eq:Fo} F_{o}(R_{i})=\frac{1}{c}\int_{T_P\cap T_{R_i}} \left(e^{-\text{d}(P,x)}+e^{-\text{d}(R_i,x)}\right) dx \end{equation} where $c>0$ is a normalizing constant and $\text{d}(a,b)$ denotes the Euclidean distance between $a$ and $b$ normalized so that the maximum distance in the field is $1$. Let us first discuss the weights in \eqref{eq:Fo}. The intrinsic geometry of the triangle has an obvious limitation when it comes to shape intersection: considering the vertex that coincides with the passer position as the triangle beginning, triangles contain a large portion of area in regions placed far from their beginning. Hence, the values inside the computed triangles are weighted according to their relative position with respect to the triangle beginning, fading out in further positions. This effect can be seen as different color opacity in the triangles displayed in Figure \ref{fig:Or2}. Finally, the reasoning for setting different triangle heights is that, if both passer' and receiver' associated triangles had the same height, players that are located behind a passer who is not looking backwards would intersect notably, despite being a non feasible pass (like in the top-centered example sketch of Figure \ref{fig:Or2}). \begin{figure} \centering \includegraphics[width=0.4\textwidth]{DraftOr1hd.png} \caption{In order not to take pairwise distances into account while computing orientation feasibility, all players are moved towards an equidistant distance (unit circle).} \label{fig:Or1} \end{figure} \begin{figure} \centering \includegraphics[width=0.4\textwidth]{DraftOr2hd.png} \caption{Individual scenarios of intersection given the relocated players of Figure \ref{fig:Or1}. As it can be seen, the top-right player is the best oriented candidate to receive the ball.} \label{fig:Or2} \end{figure} \subsection{Defenders Position} Apart from considering the visible player s of the offensive team, the behavior of the defenders, $\{D_{k}\}_k$, is continuously changing the decision-making process. Even if a player is near the passer and properly oriented, the probability of receiving the ball can be really low if he/she is properly guarded; however, it is hard to define how well a player is being defended at a time. Considering only passing events, defenders close to the line that connects the passer with the receiver (passing line) are the ones in a more advantageous position to transform a pass into a turnover. Let us denote by $\beta(P,R_{i})$ the angle in the 2D template field between the passer $P$ and the receiver $R_i$ (see Figure \ref{fig:Or1}), and by $\beta(P,D_{k})$ the one between the passer $P$ and defender $D_{k}$. Using this angle, the proposed defenders-based feasibility will take into account two feasibility scores: (a) the feasibility $\text{F}_{d,P}(R_{i})$ of passing in the direction of $\beta(P,R_{i})$ and (b) the feasibility $\text{F}_{d,R}(R_{i})$ of receiving the ball from $P$. For the first case, the distance and the angle of all defenders with respect to the passer is computed. Therefore, the definition of the feasibility measure $F_{d,P}(R_{i})$ depends on the Euclidean distances of the closest defenders with respect to the passer: \begin{equation}\label{eq:FdP} \begin{split} & F_{d,P}(R_{i}) =\\ & \text{exp}\left(-\frac{1}{J} \sum_{k \in \mathcal{N}_P} w\left(\beta(P,D_{k}),\beta(P,R_{i})\right) (1-\text{d}(P,D_{k}))\right) \end{split} \end{equation} where $\mathcal{N}_P$ denotes the set of the $J$ nearest neighbor defenders from $P$, according to the weighted distance $\text{d}_w$, defined as \begin{equation} \text{d}_w (P,D_{k}) = w(\beta(P,D_{k}),\beta(P,R_{i})) \, \text{d} (P,D_{k}) \end{equation} where $\text{d} (P,D_{k})$ denotes the normalized Euclidean distance between $P$ and $D_{k}$. Finally, the weights $w$ are defined as \begin{equation} w(\beta(P,D_{k}),\beta(P,R_{i})) = \begin{cases} 0.25 &\mbox{if } \alpha < 22.5 \text{º} \\ 0.5 & \mbox{if } 22.5 \text{º} \leq \alpha < 45 \text{º} \\ 2 & \mbox{otherwise} \end{cases} \end{equation} where $\alpha = |\beta(P,D_{k})-\beta(P,R_{i})|$ (modulus 360º). In practice, we take $J = 3$. Function $w$ is used to model that defenders close to the passing line (and thus with an associated small $\omega$ value) entail a higher risk for that specific pass. This whole procedure can be seen in the left side of Figure \ref{fig:Def1}, where the three closest defenders are highlighted for two hypothetical passes. For $\text{F}_{d,R}(R_{i})$, the same procedure is repeated with respect to the receiver; however, in order to have two independent quantities, the $J$ nearest neighbors considered when computing $F_{d}(P)$ are discarded. Hence, $\mathcal{N}_{R_i}$ is the set of the $J$ nearest neighbor defenders from $R_i$ (according to $\text{d}_W$) belonging to $\cal{N}_P^C$, \textit{i.e.}, the complement of $\mathcal{N}_P$ (that is, the set of the visible defenders at time $t$ that are not in $\mathcal{N}_P$). The feasibility to receive the ball from a given angle can be expressed as: \begin{equation}\label{eq:FdR} \begin{split} & F_{d,R}(R_{i}) = \\ & \text{exp}\left(\!-\frac{1}{J} \!\sum_{k \in \mathcal{N}_{R_i}}\! w\left(\beta(R_{i},D_{k}),\beta(P,R_{i})\right)(1-\text{d}(R_{i},D_{k}))\right) \end{split} \end{equation} The right part of Figure \ref{fig:Def1} shows a graphical example, where the top closest weighted defenders are found with respect to the receiver once discarded the closest defenders found when computing $F_{d,P}(R_{i})$ (Figure \ref{fig:Def1}). To conclude, the defenders feasibility is defined as $F_{d}(R_{i}) = F_{d,P}(R_{i})F_{d,R}(R_{i})$, and it is a measure of how likely the event of passing to a particular player is, given the defensive spatial configuration. \begin{figure*} \centering \includegraphics[width=0.85\textwidth]{DraftDefhd.png} \caption{Computation of $F_{d,P}(R_{i})$ and $F_{d,R}(R_{i})$ for two different potential receivers. For both cases, (left) general setup, plus detection of the 3 closest weighted defenders in the scenario of the (middle) left-sided and (right) right-sided player.} \label{fig:Def1} \end{figure*} \subsection{Pairwise Distances}\label{sec:subDistances} Finally, the position in the 2D field affects also the passing options, as players placed closer to the passer have a higher probability of receiving the ball. For this reason, the feasibility of receiving the ball based on pairwise distances or proximity can be defined as inversely proportional to the distance by: \begin{equation}\label{eq:Fdist} F_{p}(R_{i}) = \text{exp}\left(-\text{d}(P,R_{i})\right) \end{equation} \subsection{Combination} \label{sec:subComb} Once all three independent feasibility measures are computed, Equation~\eqref{eq:feas} is proposed to combine them. Notice that a low feasibility value in one of the three features (orientation, defenders or distance) indicates that the pass is highly risky, no matter what the other values are. \section{Results} \label{sec:Res} The dataset provided by F.C. Barcelona included 11 whole games of their team; not only video footage was provided, but also eventing data. By filtering pass events, 6038 pass events were gathered; these pass events are tagged as well with a binary flag of their outcome, indicating if the receiver was able to control the ball properly (from now on, called successful pass) or not. In this Section, several experiments will be detailed with one main goal: to study if proper orientation of soccer players is correlated with successful receptions, thus maximizing the probability of creating a potential goal opportunity. Hence, in order to examine the effect of including the orientation, another baseline pass model will be used for testing, which will only use the output of $F_{p}$ and $F_{d}$; more concretely, $F$ will be compared with $F_{pd}$, defined as: \begin{equation} F_{pd}(R_{i}) = F_{p}(R_{i})F_{d}(R_{i}). \end{equation} For the whole dataset, in order to measure accuracy, a Top-X metric is obtained by comparing the ground truth receiver of the each pass event with the one indicated by the feasibility scores among all candidates. This metric indicates the number of times (expressed as a percentage) where the current receiver of a given pass is included in the first $X$ candidates according to the feasibility models. In this Section, Top-1 and Top-3 accuracy metrics will be studied under different conditions. Moreover, histograms will be plotted for each scenario. In all cases, the number of bins is 9, as it corresponds to the number of potential receivers of a play; note the goalkeeper has been excluded because it does not appear in the frame domain in many scenarios. The height of each particular bin $B_{n}$ (with $n \leq 10$) represents the number of times that the ground truth receiver has been considered the $n$ best candidate according to the feasibility values (for instance, $B_{1}$ equals the number of times that the actual receiver was considered as the best option). In these Figures, the histograms of successful (blue) and unsuccessful (orange) passes are plotted together. \subsection{Orientation Relevance in Pass Feasibility} \label{sec:IndPer} The importance of orientation in the computation of the proposed feasibility $F$ will be shown by comparing the results of $F$ with the ones obtained with the baseline feasibility $F_{pd}$, which does not include orientation. As it can be seen in Table \ref{tab:tComb}, in both cases the Top-1/3 metric shows that the introduced features in the feasibility computation are directly correlated to the outcome of the play: the difference in Top-1 accuracy between successful and non-successful passes is more than the double, and in Top-3 is more than 0.2. Besides, orientation makes a difference by complementing distance and defenders. Apart from boosting the difference between successful and non-successful passes by a margin of 0.04/0.02, $F$ outperforms $F_{pd}$ Top-1 accuracy by 0.07 and Top-3 by 0.05. Visually, this difference can be spotted in the first bins of the histogram displayed in Fig. \ref{fig:PlotCOMB}. \begin{figure}[] \centering \includegraphics[width=0.35\textwidth]{plotCOMBhdDef.png} \caption{Histogram distribution comparison between $F_{dp}$ and $F$; note that the later includes the computed orientation feasibility. } \label{fig:PlotCOMB} \end{figure} \begin{table}[] \begin{center} \scalebox{0.9}{ \begin{tabular}{|c|c|c|c|c|} \hline \textbf{} & \textbf{\begin{tabular}[c]{@{}c@{}}Top-1\\ (Succ.)\end{tabular}} & \textbf{\begin{tabular}[c]{@{}c@{}}Top-1\\ (NSucc.)\end{tabular}} & \textbf{\begin{tabular}[c]{@{}c@{}}Top-3\\ (Succ.)\end{tabular}} & \textbf{\begin{tabular}[c]{@{}c@{}}Top-3\\ (NSucc.)\end{tabular}} \\ \hline $F_{pd}$ & 0.299 & 0.149 & 0.650 & 0.411 \\ \hline $F$ & 0.367 & 0.175 & 0.702 & 0.487 \\ \hline \end{tabular}} \end{center} \caption{Top-1/3 accuracy for successful/non-successful passes obtained before ($F_{pd}$) and after ($F$) including orientation as a feasibility measure.} \label{tab:tComb} \end{table} \noindent\textbf{Decomposed $F_{o}$ - $F_{d}$ - $F_{p}$ Performance.}\\ In order to show how useful the individual estimations are, the performance of the three individual feasibility measures ($F_{p}$, $F_{d}$, and $F_{o}$) is studied together with their combination. These results are shown in Table \ref{tab:tIndPer} and Figure \ref{fig:PlotIND}. For the successful passes, the histogram of all three components share more or less the same shape. However, the top bins of $F_{p}$ have higher values (0.34, 0.70 for Top-1 and Top-3 accuracy respectively); as a result, the bottom bins have low values, which means that it is unlikely to pass the ball to players placed far away with respect to the ball. For the unsuccessful passes, $F_{d}$ and $F_{p}$ components seem to be the most and less relevant ones, respectively. This means that passing to a player who is far away does not always imply a turnover, but passing to a well-defended player does (0.14 difference in Top-1 accuracy). Generally, $F_{o}$ resembles $F_{p}$, but the histogram is more distributed (flat shape). Combining all three methods (by computing their product) adds some value due to contextualization. For instance, orientation by itself does not take pairwise distances into account: this means that, in particular scenarios, players placed far away in the field might be the best potential candidates in terms of orientation, but as it has been proved, these passes will hardly ever exist. Besides, our proposed feasibility measure $F$ (declared in \eqref{eq:feas}) combines all three components and keeps the high Top-1 and Top-3 metrics of $F_{p}$ whilst preserving the difference between the successful/not-successful passes of $F_{d}$. The bottom-right histogram shows that this goal has been accomplished. \begin{figure}[] \centering \includegraphics[width=0.5\textwidth]{plotINDhdDef.png} \caption{Histogram distribution among potential receivers evaluating individual feasibility components. From left-right, top-bottom: (a) $F_{p}$, (b) $F_{d}$, (c) $F_{o}$ and (d) Combination.} \label{fig:PlotIND} \end{figure} \begin{table}[] \begin{center} \scalebox{0.9}{ \begin{tabular}{|c|c|c|c|c|} \hline \textbf{} & \textbf{\begin{tabular}[c]{@{}c@{}}Top-1\\ (Succ.)\end{tabular}} & \textbf{\begin{tabular}[c]{@{}c@{}}Top-1\\ (NSucc.)\end{tabular}} & \textbf{\begin{tabular}[c]{@{}c@{}}Top-3\\ (Succ.)\end{tabular}} & \textbf{\begin{tabular}[c]{@{}c@{}}Top-3\\ (NSucc.)\end{tabular}} \\ \hline $F_{o}$ & 0.260 & 0.232 & 0.566 & 0.546 \\ \hline $F_{p}$ & 0.340 & 0.320 & 0.704 & 0.665 \\ \hline $F_{d}$ & 0.243 & 0.107 & 0.604 & 0.336 \\ \hline \end{tabular}} \end{center} \caption{Top-1/3 accuracy for successful/non-successful passes obtained with all three individual feasibility estimations.} \label{tab:tIndPer} \end{table} \subsection{Players' Field Position / Game Phase} Once analyzed the impact of orientation as a feasibility measure, in this Subsection, its effect on different kind of players and game phases is analyzed. By classifying them according to the basic field positions (defenders, midfielders and forwards), Figure \ref{fig:PlotPOS} and Table \ref{tab:tType} show the differences, in terms of orientation-based feasibility, among them, which state that midfielders are the ones under bigger $F_{o}$ influence. When introducing orientation in the feasibility measure, both the Top-1 and Top-3 accuracy have a boost of 0.10 while preserving a similar difference in successful-unsuccessful differences (first 3 bins of the midfielders histogram). Defenders are not heavily affected by orientation, mostly because of the many security passes that they perform: in this type of pass (usually between defenders), both players have no opponents surrounding them, and they can freely pass to their closest team-mates without having to be strictly oriented towards them. Forwards are also affected by orientation, but they give and receive less passes; besides, in their domain, passes do not only have a high turnover risk, but also a high potential reward. \begin{figure}[] \centering \includegraphics[width=0.4\textwidth]{plotPOShdDef.png} \caption{Histogram distribution, obtained with (left) $F_{dp}$ and (right) $F_{dpo}$, for different player positions. From top to bottom: defenders, midfielders, and forwards.} \label{fig:PlotPOS} \end{figure} \begin{table}[] \begin{center} \scalebox{0.9}{ \begin{tabular}{|c|c|c|c|c|} \hline \textbf{} & \textbf{\begin{tabular}[c]{@{}c@{}}Top-1\\ (Succ.)\end{tabular}} & \textbf{\begin{tabular}[c]{@{}c@{}}Top-1\\ (NSucc.)\end{tabular}} & \textbf{\begin{tabular}[c]{@{}c@{}}Top-3\\ (Succ.)\end{tabular}} & \textbf{\begin{tabular}[c]{@{}c@{}}Top-3\\ (NSucc.)\end{tabular}} \\ \hline $F_{pd}$ (def.) & 0.354 & 0.134 & 0.724 & 0.436 \\ \hline $F$ (def.) & 0.404 & 0.162 & 0.720 & 0.521 \\ \hline $F_{pd}$ (mid.) & 0.235 & 0.114 & 0.575 & 0.341 \\ \hline $F$ (mid.) & 0.341 & 0.196 & 0.673 & 0.456 \\ \hline $F_{pd}$ (for.) & 0.278 & 0.158 & 0.589 & 0.426 \\ \hline $F$ (for.) & 0.315 & 0.178 & 0.653 & 0.459 \\ \hline \end{tabular}} \end{center} \caption{Top-1/3 accuracy for successful/non-successful passes, before/after including orientation, split by player position.} \label{tab:tType} \end{table} In a similar way, passes can be also classified according to the location of the passer in relation to the defensive team spatial configuration, as it is not the same a security pass of a defender than another pass of the same defender but in the offensive side of the court. In order to introduce this kind of context, different phases of the offensive plays are evaluated individually by clustering the 2D coordinates of the defensive players in the field. Bearing in mind that in a soccer lineup there are mainly 3 rows of horizontally distributed players (both for offense and defense), three phases (displayed in Figure \ref{fig:gamePh}) can be defined: (a) build-up, when the ball is located before the first row of defenders, (b) progression, after the first and before the third row of defenders, and (c) finalization, after the last row of defenders. Results are displayed in Figure \ref{fig:PlotPHA} and Table \ref{tab:tPHA}. Once again, the effect of orientation is vital in the half-court, with a notable difference between successful and non-successful passes in the progression phase (around 0.2 difference in both Top-1 and Top-3, and more than 0.7 Top-3 accuracy). As expected, the build-up and finalization game phases are, respectively, the ones with lower and higher risk, but even in these extreme cases, the inclusion of $F_{o}$ also boosts the pass accuracy metrics. \begin{figure}[] \centering \includegraphics[width=0.3\textwidth]{GamePhaseshd.png} \caption{Game phases depend on the position of the passer with respect to the defense spatial configuration.} \label{fig:gamePh} \end{figure} \begin{figure}[] \centering \includegraphics[width=0.4\textwidth]{PlotLINEShdDef.png} \caption{Histogram distribution, obtained with (left) $F_{dp}$ and (right) $F_{dpo}$, for different game phases. From top to bottom: build-up, progression and finalization.} \label{fig:PlotPHA} \end{figure} \begin{table}[] \begin{center} \scalebox{0.9}{ \begin{tabular}{|c|c|c|c|c|} \hline \textbf{} & \textbf{\begin{tabular}[c]{@{}c@{}}Top-1\\ (Succ.)\end{tabular}} & \textbf{\begin{tabular}[c]{@{}c@{}}Top-1\\ (NSucc.)\end{tabular}} & \textbf{\begin{tabular}[c]{@{}c@{}}TOP-3\\ (Succ.)\end{tabular}} & \textbf{\begin{tabular}[c]{@{}c@{}}Top-3\\ (NSucc.)\end{tabular}} \\ \hline $F_{pd}$ (bu.) & 0.282 & 0.143 & 0.610 & 0.382 \\ \hline $F$ (bu.) & 0.355 & 0.162 & 0.688 & 0.444 \\ \hline $F_{pd}$ (pr.) & 0.297 & 0.128 & 0.659 & 0.365 \\ \hline $F$ (pr.) & 0.372 & 0.162 & 0.712 & 0.480 \\ \hline $F_{pd}$ (fi.) & 0.326 & 0.185 & 0.687 & 0.490 \\ \hline $F$ (fi.) & 0.376 & 0.203 & 0.710 & 0.534 \\ \hline \end{tabular}} \end{center} \caption{Top-1/3 accuracy for successful/non-successful passes, before/after including orientation, split by player game phase (\textit{bu} - build up, \textit{pr} - progression, and \textit{fi} - finalization).} \label{tab:tPHA} \end{table} \subsection{Combination with Expected Possession Value} As mentioned in Section \ref{sec:SoA}, EPV is a recently introduced indicator that tries to boost individual/team performance by assigning value to individual actions, using (among others) a pass probability model. However, the EPV model of \cite{fernandez2019decomposing} does not take the body orientation of players into account, thus producing results that, despite being notably accurate, can be refined. An example is shown in Figure \ref{fig:EPVcomb}; for the displayed pass event, the spatial output of the pass probability model (left) and the EPV map (right) can be seen in the middle row. As observed in the original frame, the passer (white circle) is the central mid-fielder, who is directly facing the right-central defender; for this reason, the passer cannot see in his field of view the left-central defender, hence lowering the latter's receiving chances. However, the output of the pass probability model considers the left-central defender as a notable candidate, and EPV does not penalize this pass as a risky one. Nevertheless, by combining our orientation-based feasibility measure $F_{o}$ with the output of the (a) original probability model or the (b) output of the EPV model, maps could be adapted accordingly, thus enhancing potentially good receivers in particular regions as it is displayed in the last row of Figure \ref{fig:EPVcomb}. \begin{figure}[] \centering \includegraphics[width=0.45\textwidth]{epv2Modelhd.png} \caption{(a) Pass event and zoom in the passer region; (b,c-top) output of the pass probability/EPV models respectively of \cite{fernandez2019decomposing}, typically $\Psi$ equals 0.015, (b,c-bottom) output example made by hand; the combination of the existing models with body orientation would refine the restricting the area of potential receivers.} \label{fig:EPVcomb} \end{figure} The main challenge when combining both methods is the dimension miss-alignment: both the pass probability and EPV models extract an output map with a value for each discretized field position (downscaled to $104\times68$), whilst the proposed model defines an individual feasibility value for each of the 10 potential receivers. In order to get a single probability/EPV value for each player in the field, and being $\rho$ the output map (defined by the pixels of the downscaled field), a geometrical solution is provided; its approach is based on the idea that an individual value can be obtained by integrating the probability/EPV values on a meaningful area that extends from the passer to the receiver. In particular, for a given receiver $R_{i}$, first, a disc $Q_i$ of radius $q>0$ is defined around his/her 2D field position, and then, a tubular region $S_i$ of fixed width $s>0$ is defined from $P$ (starting position) to $R_{i}$ (thus, its length is proportional to the distance between the passer and the potential receiver). The final individual value for receiver $R_{i}$, denoted here as $V(R_{i})$, can be obtained as: \begin{equation}\label{eq:SV} V(R_{i})=\frac{1}{\text{Area}\left(Q_{i}\cup S_{i}\right)} \int_{Q_{i}\cup S_{i}} \rho(x) dx \end{equation} where $\text{Area}\left(Q_{i}\cup S_{i}\right)$ denotes the area of the region $Q_i\cup S_i$. In practice, $q$ and $s$ have been set to $\frac{5}{W_{\rho}}$ and $\frac{2}{W_{\rho}}$, respectively, being $W_{\rho}$ the width of the output map $\rho$ (\textit{i.e.} $104$). Note that Equation \eqref{eq:SV} can be used for both types of maps, being $\rho$ the output of either the pass probability model (from now on $V_{P}$) or the EPV generic model (from now on $V_{E}$). Visually, this whole procedure can be seen in Fig. \ref{fig:EPV2model} for four different receiver candidates. \begin{figure}[] \centering \includegraphics[width=0.25\textwidth]{epvModhd.png} \caption{Geometrical approach to assign discretized pass probability/EPV field values to particular potential receivers.} \label{fig:EPV2model} \end{figure} For comparison purposes, the individual probabilities $V_{P}$/expected values $V_{E}$ are multiplied by our feasibility orientation estimation $F_{o}$, (Subsection \ref{sec:OrComp}); in this way, the effect of orientation itself can be tested for $V_{P}F_{o}$ and $V_{E}F_{o}$. Note that the other components $F_{p}$ and $F_{d}$ have not been used, as both pass probability and EPV models already include this type of information in its core. Results are displayed in Table \ref{tab:EPVPass} and Fig. \ref{fig:CompPass}. As it can be seen, better accuracy is obtained when taking orientation into account in all scenarios, especially in the top-1 accuracy case, obtaining a boost of almost 0.1 in the output of the current pass probability model. Moreover, orientation also improves the raw performance of $V_{E}$ (0.07 improvement in Top-1 accuracy), especially by solving miss-leading cases in which players are located out of the field of view of the passer. As a conclusion, it has been proved that merging orientation in the SoA implementation of EPV \cite{fernandez2019decomposing} could help getting a more accurate model, which can lead to a better understanding of the decision-making process. \begin{figure}[] \centering \includegraphics[width=0.4\textwidth]{CompEPVPasshd.png} \caption{Histogram distribution of $V_{P}$ and $V_{E}$, plus the corresponding addition of $F_{o}$ component.} \label{fig:CompPass} \end{figure} \begin{table}[] \begin{center} \scalebox{0.9}{ \begin{tabular}{|c|c|c|} \hline \textbf{} & \textbf{\begin{tabular}[c]{@{}c@{}}Top-1\\ (Succ.)\end{tabular}} & \textbf{\begin{tabular}[c]{@{}c@{}}Top-3\\ (Succ.)\end{tabular}} \\ \hline $V_{P}$ & 0.243 & 0.567 \\ \hline $V_{P}$ + $F_{o}$ & 0.332 & 0.612 \\ \hline $V_{E}$ & 0.266 & 0.606 \\ \hline $V_{E}$ + $F_{o}$ & 0.337 & 0.637 \\ \hline \end{tabular}} \end{center} \caption{Top-1/3 Accuracy of the EPV models' output, plus their comparison when merging orientation feasibility.} \label{tab:EPVPass} \end{table} \section{Conclusions} \label{sec:Conc} In this paper, a novel computational model that estimates the feasibility of passes in soccer games has been described. The main contribution of the proposed method is the inclusion of orientation data, estimated directly from video frames using pose-models, into a passing model, which has proved to be a key feature in the decision-making process of players and is strictly correlated to the play outcome. Orientation feasibility is computed with a geometrical approach among offensive players, and it is combined with two other estimations, based on the defenders location with respect to potential receivers, and pairwise distances. Moreover, the combination of the model's output with existing pass probability/EPV models has been studied, obtaining confident results which indicate that SoA methods can be refined by including orientation data. As future work, apart from studying the viability of this type of model in other sports, a passing feasibility discretization of the full-field will be modelled, since players tend to pass not only to the position where the receiver is, but also to large free spaces in front of him/her. Finally, using orientation as a core feature, team action recognition could be applied over the spatial offensive configuration to optimize team tactical strategies. \section*{Acknowledgments} The authors acknowledge partial support by MICINN/FEDER UE project, reference PGC2018-098625-B-I00, H2020-MSCA-RISE-2017 project, ref. 777826 NoMADS, EU H-2020 grants 761544 and 780470 (projects HDR4EU and SAUCE) and F.C. Barcelona's data support.
{ "attr-fineweb-edu": 2.169922, "attr-cc_en_topic": 0, "domain": "arxiv" }
BkiUevnxK6mkyCfODsKs
\section{Introduction} One of the essential parts in developing a team in the RoboCup 2D soccer simulation league is to design an effective strategy or method that outperforms opponent teams. Player formation is one of the most important aspects in the strategy design as it gives the guidelines of the decision making during the game. The player formations are generally designed according to a given opponent team. However, this tasks is labourious since the search space can be really large depending on the set-play the formation is associated with. In addition, selecting the best strategy regarding unknown opponents is one of the most challenging task of this league. On the other hand, it is not necessary to create a specialized player distribution against each of all opponents as it is possible that some of them are similar regarding particular features. By using this fact, it is possible to cluster similar opponents together and then look for the most effective strategy against this group. This research proposes a model which groups similar opponent teams together during a learning stage and determines the most effective player formation for each cluster by using sequential Bayes' estimations. Then, during a real game the system classify the current opponent among one of the determined clusters and apply the strategy that has been estimated to be the best regarding the resulting classification. \section{Related Work} The task of recognizing the opponent strategy in order to apply an appropriate counter action has been already adressed in previous researches. For example, the works of Visser \textit{et al.}\cite{annclassification} and Dr{\"u}cker \textit{et al.}\cite{virtualwerder} propose a system for recognizing opponent's formations and then apply a counter formation. This is done by using an artificial neural network that is able to classify data among 16 formation classes and then apply the counter formation especially designed against each class. Classified data are a representation of the field as a grid expressing the formation of the opponent. Riley and Veloso \cite{gridanalysis} also proposed a method performing opponent classification by using a grid representation of the field. However, the grid is used for displacement and location of objects instead of just observing the structures of formations. Also, they used a decision tree instead of neural networks. However, this paper focuses on how to select the best player formation regarding a particular cluster rather than the issue of how build clusters. The selection issue is a well know problem in probability, often named as the $k$-armed bandit problem. This problem has been already addressed in the context of simulated soccer game by Bowling \textit{et al.} \cite{behaviorselection}. They proposed to apply an algorithm that selects the most effective and available team plans during particular situations. The most effective plan is the one that minimizes the regret which is the amount of additional reward that could have been received by behaving optimally. The work presented in this paper is similar in the sense that we also focus on a method which selects the best choice in a particular situation. However, the situation is considered to be a particular opponent team in a particular event of the game and not a particular state of the environment. Doing so allows us to focus on more precise effectiveness measurement functions. \section{Proposed Model} As a solution, we propose to use a simple model as shown in \figurename \ref{fig:model}. This system takes a label of cluster of opponent teams' as an input parameter and returns the best strategy to apply. However, it is difficult to estimate the best strategy among the ones we have in hand. For this reason the proposed model consists of two modules, \textit{Learner} and \textit{Selector}. The \textit{Learner} part works in offline mode. It takes a set of clusters as an input parameter. Clusters are obtained by applying hierarchical clustering on opponent teams' distributions before the \textit{Learner} works. Then, its role is to learn against each cluster, by looking at the set of strategies that we have already developed, which one is the most appropriate. This decision is done by performing statistical analysis on simulated games by using the different strategies as it is explained in Section \ref{sec:learning}. Once the learner is able to decide which strategy we should apply regarding a particular cluster of opponent teams, it inserts the cluster-strategy pair in a database. The \textit{Selector} part works in online mode. It takes the resulting classification of the current opponent team as input. Then, by using the estimations done by the learner it can directly return the best strategy to apply. As a first trial of this system, the proposed model was used to determine which corner-kick formations should be used against particular clusters of opponent teams from JapanOpen competitions. This championship is the RoboCup yearly meeting within Japan. \begin{figure}[h] \centering \includegraphics[scale=0.2]{img/strategy_selector_model_bw2.png} \caption{Proposed model.} \label{fig:model} \end{figure} \section{Opponents Clustering} \subsection{Team distributions} At a general level the system groups opponent teams by similarity in the player formation. In order to understand the player formation, the distribution of the players is used. In this paper, offensive corner-kick formations were designed regarding the defensive formation of the opponent. Therefore, this work suggests to build such player distribution representing the defense of the opponent by considering locations of players over the corner-kick area. As a way to represent the opponent player distributions, the system designs a partition of the corner-kick area of the field as shown in \figurename \ref{fig:partition}. This partition is totally arbitrary, but shows how opponent players are spread in this area during their defensive corner-kick situations. Resulting distributions represent the number of players in each of the 18 blocks in the area of interest (the so-called attacking third). Also, an additional block representing the remaining part of the field is considered. For example, \figurename \ref{fig:partition} shows eleven opponents in their defensive corner-kick formation. By analyzing this defense player formation, the resulting distribution would be written as the following 19-dimensional integer vector: \\$[1, 0, 1, 0, 0, 0, 1, 2, 1, 1, 0, 0, 1, 0, 0, 1, 0, 0, 2]$. If we consider a rougher partition of the field, the player distributions would tend to be the same regardless the opponent team. For instance, let us consider the extreme case where the grid is only constitued of one cell. By doing so, any team would be represented by the 1-dimensional integer vector. Conversely, a finer partition would make the player distributions become much more different from each other. \begin{figure}[h] \centering \includegraphics[scale=0.15]{img/cut_distribution_blackwhite2.png} \caption{19 blocks of the partitioned soccer field.} \label{fig:partition} \end{figure} \subsection{Clustering process} Once all opponents' distributions are determined, the degree of similarity between each possible pair is analyzed in order to generate a distance matrix. The distances between distributions are computed by using the Earth Mover's Distance (EMD) \cite{emd} method. EMD provides a pseudo metric measure between two probability distributions. It can handle vectors with different dimensionalities and weighted features. The measurement process is expressed as a transportation problem where one distribution is the supplier and the other the customer. The cost between the supplier and customer is related to the distance between features of the two distributions which are computed by using a ground distance such as the euclidean distance. This is an advantage of using EMD since we can evaluate how much two formations of players are different by using a ground distance that makes sense in the case of soccer field. Also, the possibility to consider weighted features could become an advantage in future work since it is possible to give more importance to certain parts of the formations. It is possible to apply hierarchical clustering on the resulting distance matrix in order to determine clusters of similar opponent teams. This process merges the pairs with the smallest distance together until all the opponents belong to a single cluster. By using a threshold representing the maximum distance accepted between clusters before merging, the user can stop the clustering process and then obtain several clusters rather than a single one. \section{Strategy Selection} \label{sec:learning} \subsection{Performance evaluation of player formations} In order to select the most effective strategy from a given set of strategies, the performance evaluation of the player formations with respect to a success metric is required. For example, the probability of success of an attack following a corner-kick as shown in \figurename \ref{fig:successck}, can be used as a performance metric. However, the RoboCup 2D soccer simulation league introduces randomness in the way the players interact with the environment. Each player receives imperfect and noisy information from his virtual sensors. As a result, two soccer games with the exactly same teams can differ significantly. Therefore, evaluating player positioning performance is a challenging task. There is a lot of variance when trying to estimate a success metric. Thus, it is necessary to run a large number of soccer games in order to estimate one player formation's performance with enough precision. In order to sort each player formation with respect to the others, the difference in means between the probability of successful corner-kick distributions of each player formation's simulation is considered. \begin{figure}[h] \centering \includegraphics[scale=0.25]{img/ck2_bw.png} \caption{Example of an actions' chain for a corner-kick which leads to a successful score.} \label{fig:successck} \end{figure} \subsection{Sequential Bayes' estimation} Bayes' theorem is stated as in (\ref{eq:bayestheorem}): \begin{equation} \label{eq:bayestheorem} p(\theta | D) = \frac{p(D | \theta)P(\theta)}{p(D)}, \end{equation} \noindent where $p(\theta | D)$ is called the posterior, $p(D | \theta)$ is likelihood, $p(\theta)$ the prior and $p(D)$ is the evidence which stands as a normalizing constant. It is calculated as expressed in (\ref{eq:evidence}): \begin{equation} \label{eq:evidence} p(D) = \int p(D | \theta)p(\theta)d\theta, \end{equation} \noindent where $\theta$ represents the value of the parameter to estimate, in our case that is the probability of the success of an attack following a corner-kick. $D$ corresponds to the new data extracted at the moment of applying the theorem. The purpose of the Bayes' theorem is to update the prior belief $p(\theta)$ we have about the value of $\theta$ using new data $D$. The posterior distribution $p(\theta | D)$ will then correspond to our updated belief in the different possible values of $\theta$. It is possible to sequentially update the parameters by applying Bayes' theorem each time one or more simulations are over by using the previous posterior as the prior for the next computation of the posteriors. Obviously, according to the success metric used by the system, the results of one experience (successful corner-kicks observed within one game), the likelihood follows a binomial law as in (\ref{eq:binomlaw}): \begin{equation} \label{eq:binomlaw} p(X = k) = C_{k}^{n}\theta^{k}(1 - \theta)^{n - k}, \end{equation} \noindent where $n$ is the number of total corner-kicks observed during the simulated game, $k$ the number of successful corner-kicks observed and $\theta$ the probability of an offensive corner-kick to be successful by using the player formation. Navarro and Perfors \cite{beta-binomial} have demonstrated that the posterior distribution of a beta-binomial distribution is also a beta-distribution. Thus, if you consider the probability of getting a successful corner-kick by using a particular formation of players, the posterior distribution after observing $k$ successes over the total $n$ corner-kicks can be expressed as in (\ref{eq:posterior}): \begin{equation} \label{eq:posterior} p(\theta | k, n) \sim B(a + k, n - k + b) \end{equation} \noindent where $B$ denotes the beta distribution, $a$ and $b$ are the parameters coming from the prior distribution and $\theta$ is the probability of a successful attack following a corner-kick which is the parameter we want to estimate. This fact simplifies computations since it is possible to represent the performance of a player formation by a Beta distribution and then after running a game, construct a new Beta distribution by giving the number of corner-kicks and the number of observed successes. \subsection{Player formations comparisons} \label{sec:comparisons} A difference distribution is used to determine whether one player formation is better than another or whether additional simulations are required to be sure. For this purpose, the system begins by computing the Highest Density Interval (HDI) \cite{bayesbook} which is the interval that spans most of the mass of the distribution (say 95\%) such that every point inside the interval has a higher probability than any point outside the interval. To compare the performance of two player distributions in the attack case (let us say Distribution 1 and Distribution 2), the probability of success of Distribution 1 and Distribution 2 is considered, defined as $p_{1}$ and $p_{2}$, respectively. Assume that a posterior distribution for each of those probabilities is obtained. In this case, by calculating all of the possible values of $p_{1} - p_{2}$, it is possible to obtain a distribution of the difference of $p_{1} - p_{2}$. HDI is used instead of the posteriors in order to simplify the computation of this calculation. Then, there are three possible scenarios as follows. Preliminary, let us define $[u, v] = \{x \in \mathbb{R} | u \leq x \leq v\}$ to be the HDI of the resulting distribution $p_{1} - p_{2}$. The first possible case is when $u \geq 0$, which means $p_{1} - p_{2} > 0 \Rightarrow p_{1} > p_{2}$. Naturally, the opposite case is also possible, $p_{1} - p_{2} < 0 \Rightarrow p_{1} < p_{2}$, which happens when $v \leq 0$. Another possible sketch is that $[u, v] = \{x \in \mathbb{R} | w \leq u \leq x \leq v \leq z\}$ where $w$ and $z$ are around 0 which is equivalent to saying that $p_{1} = p_{2}$ for all practical purposes. The $[w, z] = [-0.015, +0.015]$ interval is used in this paper. If the two player formations are deemed equal or when the maximum number of simulations is reached, the player formation with less variance is considered as better than the other. \section{Experiments} \subsection{Opponents clustering} First experiments involved 12 teams participating in Japan Open competitions, as well as two versions of Agent2D \cite{heliosbasepkg} which does not participate in any competitions, but are used by most of the participants as the starting point of team development. Three clusters were created by the hierarchical clustering. The second cluster is the most populated among the three ones because it represents the teams using a player formation similar (if not the same) to that of Agent2D, which constitutes probably their implementation starting point. On the other hand, the third cluster included only the team Ri-one\_B 2015 that is too far to be merged with any other clusters. \subsection{Association learning} \label{sec:associationlearning} In order to experiment the abilities of the learner, we used three corner-kick formations that were already implemented in our team. Additionally, a special script was used. This script runs simulations which only perform corner-kick situations. Generally, 37 corner-kicks are executed during one simulation, but this number can vary from one run to another due to the randomness present in simulations. As first experiment, 10 simulations per strategy were simulated before comparing pairs of player formations and a beta distribution with parameters 2 and 2 (i.e., Beta(2, 2)) was used to represent our prior beliefs. The results of our first experiment are shown in \figurename \ref{fig:postg10}, the probability density functions of each player formation against each cluster. It can be seen that each cluster is associated with a different player formation. Excepted the pair (1, 3) for the first cluster (\figurename \ref{fig:postc1g10}), all pairs can be easily ranked. Then, the most effective player formation can be determined with certainty. However, the HDI of almost all distributions is quite large, thus a precise probability of success cannot be provided. \begin{figure}[b] \centering \subfloat[Cluster 1]{{\label{fig:postc1g10}\includegraphics[scale=0.21]{img/cluster-1_10_800dpi_bw.png}}} \qquad \subfloat[Cluster 2]{{\label{fig:postc2g10}\includegraphics[scale=0.21]{img/cluster-2_10_800dpi_bw.png}}} \qquad \subfloat[Cluster 3]{{\label{fig:postc3g10}\includegraphics[scale=0.21]{img/cluster-3_10_800dpi_bw.png}}} \caption{Posterior distributions for each cluster, by running $M=10$ simulations} \label{fig:postg10} \end{figure} In order to improve estimations about the player formations' probability of success, a second experiment was conducted and simulations were generated by blocks of 60 games. That is, 60 games for each player formation were conducted every time the performance is compared. \figurename \ref{fig:postg60} shows the resulting probability density functions of the player formation. As expected, curves became finer and tended to be centered to the true probability of their respective player formation. Also, the pairs which were difficult to differentiate after the first experiment, can now be well ordered. \begin{figure}[b] \centering \subfloat[Cluster 1]{{\label{fig:postc1g60}\includegraphics[scale=0.21]{img/cluster-1_60_800dpi_bw.png}}} \qquad \subfloat[Cluster 2]{{\label{fig:postc2g60}\includegraphics[scale=0.21]{img/cluster-2_60_800dpi_bw.png}}} \qquad \subfloat[Cluster 3]{{\label{fig:postc3g60}\includegraphics[scale=0.21]{img/cluster-3_60_800dpi_bw.png}}} \caption{Posterior distributions for each cluster, by running $M=60$ simulations} \label{fig:postg60} \end{figure} Table \ref{tab:resultssummary} provides a summary of the second experience. It shows the final associated player formation for each cluster. Also, it indicates the HDI of the selected player formation. Finally, it gives the ratio of the best player formation's distribution mean over the second best's. \begin{table}[!t] \caption{Results summary of the second experiment.} \label{tab:resultssummary} \centering \begin{tabular}{|c|c|c|c|} \hline Cluster & Distribution & HDI & Ratio\\ \hline 1 & 2 & [0.203, 0.237] & 1.787\\ \hline 2 & 3 & [0.531, 0.571] & 2.073\\ \hline 3 & 1 & [0.471, 0.512] & 2.179\\ \hline \end{tabular} \end{table} \subsection{System validation} The proposed method in this paper estimates the probability of success of offensive player formations against given opponent teams. In other words, the parameter $\theta$ of a binomial distribution is estimated. However, it is legitimate to wonder about the correctness of the estimations. The experiment in this section puts player formations aside and evaluates how well our method can differentiate probability distributions with parameters close together. Additionally, it estimates how many simulations are required to draw trustful conclusions about the ranking of offensive player formations regarding their success probability. Player formations are substitued by a set of randomly generated parameters $\theta$. Then, a simulation of $n$ offensive corner-kicks by using a particular formation is substitued by $n$ sampling from a binomial distribution parameterized by one of the randomly generated $\theta$ values. Notice that the system knows the generated parameters and is able to order them. Afterwards, as in the parameter estimation method, by feeding the Bayesian estimator with the number $k$ of successes over the $n$ samples the system updates prior beliefs about the parameters and tries to estimates the value of the randomly generated parameters. Actually, since the true values are known, it is possible to verify that the system gets back the correct ranking. \figurename \ref{fig:validation20} shows the results obtained by using $n = 20$ samples per simulation for each distribution. The figure consists of three subplots where the $x$-axis represents bins of pairs of $\theta$s depending on the difference of their respective values. For example, assume $\theta_{1} = 0.22$ (22\% of chance to get a success) and $\theta_{2} = 0.24$. Since the difference between $\theta_{1}$ and $\theta_{2}$ is 0.02, this pair is contained in the second bin whose range is from 0.1 to 0.2. The first bin (the one in black) is special, since it represents the interval where parameters are close enough to be considered equal. The number of generated parameters was done in such a way that each bin contains ten pairs of parameters. The first subplot shows the rate of well ordered pairs in each bin. According to this plot, the system can perfectly rank pairs with a difference greater than 0.04 and this accuracy decreases as the distances between parameters increase. In this subplot the correct ranking inside the first bin is not really important since it contains pairs that are considered to be equal. The second and third subplots show the number of correctly ranked (respectively uncorreclty ranked) pairs in each bin and the number of sampling steps before drawing conclusion ($y$-axis). As indicated in the first subplot, the ranking is perfect for any pairs contained in the bin whose range is greater than $0.04$. Furthermore, at most fifty samples were required to obtain such results and this number decreases as the distances increase. However, the system has difficulties to rank pairs with difference less than $0.04$. \begin{figure}[!t] \centering \includegraphics[scale=0.34]{img/theta_test_20.png} \caption{System's validation by blocks of 20 samples.} \label{fig:validation20} \end{figure} \figurename \ref{fig:validation60} shows the performance evaluation by using $n = 60$ samples per simulation for each distribution. Actually, increasing this number improves the accuracy since the system is able to rank prefectly pairs with at least a difference of $0.03$ by requiring at most eighty samples. These two experiments show that increasing the number of samples increase the accuracy. On the other hand, since the data that the player receives is biased and because of the rarity of corner-kick event occurence during one single game, a deviation of 4\% of success probability between two formations is not so significant. For this reason, in the particular case of selecting the best strategy for offensive corner-kicks, estimating the formations' parameter by running only 20 simulations is enough. \begin{figure}[!t] \centering \includegraphics[scale=0.34]{img/theta_test_60.png} \caption{System's validation by blocks of 60 samples.} \label{fig:validation60} \end{figure} \subsection{Cluster validation} It could be also interesting to look at the performance of each player formation in the clusters. While some teams have been considered as similar in terms of defensive player formations, it does not exclude the possibility of disparities among the teams of the same cluster since results of actions are not affected by player positioning only. Proper agents' skills are an equally important factor. In order to verify the quality of association according to each opponent team, another alternative of the algorithm was applied. This one is nearly the same as the standard version, but rather than trying to estimate the effectiveness of each player formation against the clusters, the system estimates it against every team individually. Table \ref{tab:differenceexpectations} summarizes the teams whose the most effective strategy is not the same as the one estimated against the cluster which they belong. As a reminder, Cluster 1 counts three teams and Cluster 2 counts ten teams. Regarding the team Ri-one\_A 2015 (Cluster 1), Distribution 1 seems to be better than Distribution 2 which is the one associated to its cluster. However, the error seems to be much more serious, according to the team A\_TSU\_BI-2014 (Cluster 2) since Distribution 2's mean is slightly more than three times better than the selected formation's (Distribution 3) mean. In fact, this association error is not significant during a game against Ri-one\_A 2015. On the other hand, performing games against A\_TSU\_BI-2014 with the wrong strategy would affect the results of games since there is roughly 20\% more chance to get a success by using the formation associated to Distribution 2 rather than the one selected (Distribution 3). \begin{table}[!h] \caption{Difference between expected performances in cluster.} \label{tab:differenceexpectations} \centering \begin{tabular}{|c|c|c|c|c|} \hline Team & Cluster & Selected (Dist. / HDI) & Best option (Dist. / HDI) & Ratio\\ \hline Ri-one\_A 2015 & 1 & 2 / [0.13, 0.16] & 1 / [0.23, 0.27] & 1.73\\ \hline A\_TSU\_BI- 2014 & 2 & 3 / [0.07, 0.09] & 2 / [0.23, 0.26] & 3.28\\ \hline \end{tabular} \end{table} \section{Conclusion} In this research, a system that is able to select the best player formation in corner-kick situations regarding a group of teams was developed. This decision is taken by doing sequential Bayes' estimations from the results of several games. The model does not create effective offensive player formations, but instead indicates the best that we have already in hand. The results are satisfying since the system is able to rank correctly player formations with at least a difference of 4\% of success probability by proceeding only 20 simulations. Furthermore, it is possible to increase the precision of the system by getting more data. However, by doing so the learning time would increase considerably. Additionally, it is quite impossible to feel a difference during one game, since during a true match the number of corner-kicks that happen is very low. This is why such an error rate is acceptable. On the other hand, there is a possibility of disparities inside the clusters. As explained earlier, if the difference between player formations is only 4\% there is actually no real difference in terms of final results of one game due to the rare occurence of corner-kicks executed during a standard game. But if a player formation is not designed to be the best and is actually three times better than the selected one, difference could be observed regarding final results. These disparities are due to the fact that during the clustering process only the positions of opponents are considered and not the defensive skills of the team. Then, another clustering criterion can be considered for better performance. Finally, while the first trials selected player formations for corner-kicks only it is possible to use it for any situation of the game, at the condition to have a criterion for opponents clustering and a success metric for data observations. Furthermore, it is possible to extend this system in order to build strategies, i.e. sets of player formations that cover any situation, rather than selecting the best player formation according to a particular situation. In this case opponents would not be in only one cluster, but in several clusters, one for each situation. However, such a learning process seems difficult to realize since a very large number of standard games is required, ones which do not simulate only one kind of situation, in order to see enough every kind of situations and hope to obtain good approximations of each player formation.
{ "attr-fineweb-edu": 2.037109, "attr-cc_en_topic": 0, "domain": "arxiv" }
BkiUbHo5ixsDMFwZR9e8
\section{Introduction} \subsection{Overview} By the spring of 2000, after having played on the PGA Tour for less than four years, Tiger Woods had become well established as golf's most feared and intimidating competitor. Typifying this perception, the following appeared on the Golf Today website immediately before the 2000 Masters: \begin{quote} \singlespacing ``Tiger Woods, in the midst of crafting one of golf's most dominating stretches of brilliance and the runaway favorite at the Masters, is becoming the Intimidator of world golf. Seeing Woods atop a Sunday leaderboard has signified a fight for second place among the rest of the world's best. The last 13 times he held the lead or was tied for the lead going into the final round he has closed the deal. Woods bearing down on a front-runner has led to gruesome breakdowns and embarrassing collapses sometimes uncomfortable to watch on video replay. The winner of seven of his last 11 tournaments and second or tied for second in three of the others does not deny the effect his hypnotic form has had on some rivals.'' (Golf Today, 2000). \end{quote} \doublespacing \noindent Consistent with this perception, Gladwell (2008) states: \begin{quote} \singlespacing ``We've seen it a thousand times; top-notch professional golfers crumble on Sunday when in the final pairing with Tiger Woods. Why does this happen with such regularity? What is it that Tiger Woods does to intimidate his fellow golfers and almost guarantee victory for himself?'' \end{quote} \doublespacing In stark contrast, prior to his winning the 2004 Masters, and as of this writing, winning two more of golf's ``major'' championships, Phil Mickelson had been labeled as ``the best player in the world to have never won a major.''\footnote{Golf's majors include The Masters, The U.S. Open, The British Open and The PGA Championship.} \footnote{A Google search on the combined strings ``phil mickelson'' and ``to have never won a major'' brings up 1,310 references but, of course, there are other forms that the expression ``to have never won a major'' can take, which are not part of the search.} Although he had become one of golf's premiere players, he was often accused of choking in majors, especially in the final rounds. Describing Mickelson's propensity to choke, Smith (2001) wrote the following immediately prior to the 2001 PGA Championship: \begin{quote} \singlespacing Mickelson will be among the favourites again, especially on an Atlanta Athletic Club course with the kind of length -- 7,213 yards for a par 70 -- that suits big hitters, and rain-softened greens that will allow him to attack the pins. Then again, his aggressive style is what has cost him so many chances. Mickelson challenges just about every flag, as if the thought never crosses his mind that he might hit a bad shot. The result is missing the green on the short side, leading to bogey or worse. Alas, what Mickelson lacks in majors, he makes up for in macho. \end{quote} \doublespacing In this study, we extend our prior work in Connolly and Rendleman (2008, henceforth CR) to address the issue of Tiger Woods' dominance on the PGA Tour, his effect on the play of other tournament participants, including those with whom he is paired, and Phil Mickelson's alleged propensity to play poorly in golf's major championships. In CR, we use a generalized additive model to estimate time-dependent mean skill functions and the first-order autocorrelation of residual scores about their means for 253 active PGA Tour golfers over the 1998-2001 period. In estimating these functions, we remove the estimated (random) effects associated with the relative difficulty of the course on which each round is played and the relative advantage or disadvantage of each player in playing these courses. Although the CR data is somewhat dated, it covers a period of time when Woods had become known as an intimidating force in golf and Mickelson was being accused of choking in majors. As expected, we find that Woods' play dominated that of other top PGA Tour professionals during our sample period. During year 2000, generally acknowledged as Woods' best on the Tour, Tiger could have won a few tournaments with no luck at all and many more with just a little bit of luck. By contrast, even his strongest competitors, including David Duval and Phil Mickelson, could not have come close to winning tournaments without a substantial amount of luck. Also, as expected, we find statistically significant adverse effects associated with being paired with Woods in the final round of a PGA Tour event. Contrary to popular belief, however, those who are paired with Woods perform better when both are in contention to win a tournament than when they are not. Interestingly, we find that Mickelson actually played well in golf's majors during the 1998-2001 period and that his level of play, relative to his norm, was comparable to that of Tiger Woods, who won five of 16 majors during this period, including the ``Tiger Slam,'' where he won four majors in a row -- the U.S. Open, British Open and PGA Championship in 2000 followed by the Masters in 2001. Moreover, although Mickelson never won a major from 1998 through 2001, we show that he played with a sufficient degree of luck, or his temporary abnormal performance was sufficiently favorable, to have won as many as two major golf championships during this period. In other words, bad luck, or more accurately, the good luck of others, played a substantial role in Mickelson's failure to win any of golf's majors between 1998 and 2001. \subsection{PGA Tour Tournament Play and Qualifying for the Tour} With a few exceptions, in a typical PGA Tour stroke-play event, 156 players begin play in the first of four 18-hole rounds of competition. After the first two rounds, the 70 players and ties with the lowest 36-hole total scores ``make the cut'' and qualify for play in the last two rounds. After the final round of play, the player with the lowest total 72-hole score wins the tournament. If two or more players are tied for first place after 72 holes, the winner is determined by a playoff. All players who make the cut and complete all four rounds of play earn prize money. Those who miss the cut win no money. At the end of each PGA Tour season, the 125 players who have won the most official prize money become ``fully exempt'' to participate on the Tour the following season. Those ranked in positions 126-150 in money winnings become ``partially exempt,'' and are eligible to participate the next season on a limited basis. All tournament winners in a given season earn fully-exempt status the following two years. Those who win The Tour Championship and any World Golf Championship events earn fully-exempt status for three more years, and those who win the four majors and The Player's Championship earn fully-exempt status for five additional years. In addition, approximately 50 players each year earn exemptions though the PGA Tour's annual qualifying tournament (``Q-school'') and by finishing among the top 25 money winners on the PGA Tour-sponsored Nationwide Tour. \section{Data} We collected individual 18-hole scores for every player in every stroke play tournament on the PGA Tour for years 1998-2001 for a total of 76,456 scores distributed among 1,405 players. After limiting our sample to players who recorded more than 90 scores, the resulting sample employed here and in CR consists of 64,364 observations of 18-hole golf scores for 253 active PGA Tour players over 181 stroke-play events. As we describe in CR, most of these omitted players are not representative of typical PGA Tour players. For example, 565 of the omitted players recorded only two 18-hole scores. (More detailed characteristics of the sample are provided in our other paper.) By excluding these players, we maximize the power of Wang's (1998) cubic spline fitting methodology and minimize potential distortions in estimating the statistical properties of golf scores of regular players on the Tour. \section{Statistical Estimation Model} Our model for estimating the skill and luck components of PGA Tour players' golf scores is described in detail in CR. Based on Wang (1998), we estimate a cubic spline-based time-varying mean skill level for each player, after reducing each player's 18-hole score by estimated random round-course effects common to all players and estimated random player-course effects. These estimated effects capture the tendency of individual players to perform better or worse than normal on specific courses. In simplified form, the model can be expressed as follows: \begin{equation} \ s_i = h_i \left( \bullet \right) + r + c_i + \theta _i \end{equation} In (1), $s_i$ is a given 18-hole score for player $i$. $h_i \left( \bullet \right)$ is the cubic spline-based estimate of the same score after removing $r$, the random effect that captures the relative difficulty of the course on which the round was played on that particular day and $c_i$, the random player-course effect that reflects the extent to which the course played favorably or unfavorably for player $i$. $\theta _i$ is the residual random error term that is assumed to follow an AR(1) process, which is captured in the estimation of the cubic spline function $h_i \left( \bullet \right)$. This residual error can be decomposed into two components, $\theta _i = \lambda _i + \eta _i$, where $\lambda _i$ represents the autocorrelated component and $\eta _i$ is white noise. The model is estimated simultaneously for all 253 players in our sample. Note that a round-course interaction is defined as the interaction between a regular 18-hole round of play in a specific tournament and the course on which the round is played. For 156 of 181 tournaments in our sample, only one course is used and, therefore, there is only one such interaction per round. By contrast, the first three rounds of the ATT Pebble Beach National Pro Am are played on three different courses using a rotation that assigns each tournament participant to each of the three courses over the first three days of competition. A cut is made after the third round, and a final round is played the fourth day on a single course. Thus, the Pebble Beach tournament consists of 10 round-course interactions - three for each of the first three days of competition and one additional interaction for the fourth and final day. It should be noted that we do not include specific information about playing conditions (e.g., adverse weather as in Brown (2008), pin placements, morning or afternoon starting times, etc.) when estimating (1). Nevertheless, if such conditions combine to produce abnormal scores in a given 18-hole round, the effects of these conditions should be reflected in the estimated round-course-related random effects. Using (1), the estimated random effects associated with round-course interactions range from $-3.92$ to 6.95 strokes per round, implying almost an 11-stroke difference between the relative difficulty of the most difficult and easiest rounds played on the Tour during the 1998-2001 period. By contrast, the estimated random effects associated with player-course interactions are very small, ranging from $-0.065$ to 0.044 strokes per round, an insufficient amount to have any impact on the overall scores in a typical 72-hole PGA Tour event. Much of the focus in this paper is on the estimated spline fits for individual players and associated $\theta$ and $\eta$ errors. When we refer to a player playing ``normal,'' we are referring to a situation in which his 18-hole score for a given round, adjusted for estimated round-course and player-course effects, equals the time-dependent mean score for that same round given by the player's estimated cubic spline function. Thus, a player who plays ``normal'' plays exactly as predicted by his estimated time-dependent mean skill level rather than at a level that might be characteristic of PGA Tour players in general. $\theta$ errors can be viewed as differences between actual 18-hole scores and predicted scores, taking into account time-dependent cubic spline-based estimates of mean skill levels and the random effects associated with round-course and player-course interactions but without adjusting for autocorrelation in random error structures. $\eta$ errors represent $\theta$ errors adjusted for first-order autocorrelation. Although the potential for autocorrelated $\theta$ errors comes into play in estimating each player's cubic spline function, we do not focus on the correlation in residual errors in this paper. However, in CR, we show that the first-order autocorrelation coefficient in $\theta$ errors is positive for 155 of the 253 players in our sample. Only two players show evidence of significant negative autocorrelation in $\theta$ residuals, and 20 to 23 players (depending on the test) show evidence of significant positive autocorrelation. Using the methodology of Ljung and Box (1978), we determine that there is no significant higher-order autocorrelation in $\eta$ residuals -- once we adjust for first-order autocorrelation in $\theta$ residuals, the remaining residual error for all 253 players represents white noise. Spline fits for Tiger Woods, David Duval and Phil Mickelson, three golfers whose performance is analyzed in this study, are shown in Figure 1. Actual 18-hole scores, reduced by random round-course and player-course effects, are plotted about the smooth spline fits. The spline fit for Tiger Woods is U-shaped, reaching its minimum in year 2000, regarded as the best year in Woods' professional career. Spline fits for David Duval and Phil Mickelson turn out to be linear, with Duval's being positively sloped, indicating his skill was deteriorating, and Mickelson's being negatively sloped, indicating an improvement in skill. (Over all 253 players, 105 spline fits are exactly linear.) ``Scaled golf time'' for each player represents the chronological sequence of rounds for the player scaled to the \{0, 1\} interval. Although all three players participated on the PGA Tour during roughly the same period of time, the scaled golf times for each player represent the joint sequencing of their own scores rather than the sequencing of the scores of the three players. Visual inspection if Figure 1 reveals a very low signal-to-noise ratio for each individual player's spline fit relative to his 18-hole scores adjusted for estimated random round-course and player-course effects. Although the pseudo adjusted R-square for the model as a whole is 0.296, calculated as $1 $-$ Mean\;square\;error / Mean\;square\;total$, the pseudo adjusted R-squares for the individual spline fits are not nearly as high. For example, for Woods, Duval and Mickelson, the pseudo adjusted R-squares associated with their respective spline fits are 0.070, $-0.003$ and 0.034, with the highest being 0.408 for Billy Ray Brown. Based on the relatively low individual pseudo adjusted R-squares, one might argue for a model simpler than a cubic spline function for estimating player skill. Despite the low signal-to-noise ratio, using the bootstrap we show in CR that the spline model is significantly superior (at the 5\% level) to a player's mean score, adjusted for round-course and player-course interactions, for 71 of 253 players. It is significantly superior to a linear time trend for 25 of the players and to a quadratic time trend for 10 players. In addition, the spline model is superior to a time-dependent mean-adjusted score that varies by calendar year for 13 players. Even though a simpler functional form might adequately capture time variation in mean player skill for the great majority of players, the spline model should (approximately) capture that simpler functional form when appropriate, but also capture more complex forms of time-varying skill when such patterns arise in the data. Figure 1 shows the greatest dispersion of 18-hole scores (reduced by random round-course and player-course effects) about their respective spline fits for Phil Mickelson, followed by David Duval and Tiger Woods. In fact, the standard deviations of $\theta$ residual scores for these three players are 3.02, 2.82 and 2.46 strokes, respectively, with the range for all 253 players in our sample of 2.14 to 3.44 strokes per round. Mickelson's standard deviation of 3.02 strokes ranks 15th highest, while Woods' standard deviation of 2.46 strokes ranks 22nd lowest. Although Figure 1 shows that Duval and Mickelson were capable of shooting scores as low as Woods, they could not do so as consistently. Moreover, Woods' worst scores were not generally as high as those of Duval and Mickelson. \section{The Dominance of Tiger Woods} In our original study, we demonstrate that the average total $\theta$ residual winning score per tournament over the 181 tournaments in our sample was $-9.64$ strokes, with the total $\theta$ residual ranging from +0.13 strokes for Tiger Woods in the 1999 Walt Disney World Resort Classic to $-21.59$ strokes for Mark Calcavecchia in the 2001 Phoenix Open. The 1999 Walt Disney Resort Classic is the only tournament in our sample won by a player with a positive total $\theta$ residual score, meaning that the winner played slightly worse than normal as estimated by our model. We also demonstrate that most players among the 20 to 30 best finishers in a tournament experienced negative total $\theta$ residual scores. If the $\theta$ residual is viewed as a ``luck factor,'' our results indicate that to win a tournament on the PGA Tour, one must experience a sufficient amount of good luck to not only shoot a low total score, but to also overcome the collective good luck of the other participants in the tournament. Only Tiger Woods was sufficiently skilled to have won a PGA Tour event during our sample period by playing ``normal.'' An interesting way of assessing Woods' dominance is to ask how well he would have placed in the tournaments in which he participated by simply playing ``normal,'' and, therefore, experiencing no good luck as estimated by our model.\footnote{This analysis does not take into account the possible effect that Wood's normal play might have on the play of others, such as that documented in Section 5.} During the 1999-2001 period, Woods participated in 85 regular stroke-play events on the PGA Tour and never missed a cut in any of these tournaments. By contrast, David Duval, generally regarded as the second-best player during this period, missed six cuts, and Phil Mickelson missed 15. (Mickelson's missing nine more cuts than Duval most likely reflects the greater variability in Mickelson's scores (adjusted for random round-course and player-course effects) rather than his slightly higher spline-predicted score.) Table 1 lists these 85 tournaments along with the winning score, Woods' score, his expected score and his total $\theta$ residual score.\footnote{Wood's actually participated in one more regular stroke-play event, the 1998 ATT Pebble Beach National Pro Am. The first two rounds of this tournament were played in January, 1998, but the weather was so bad that the third and fourth rounds could not be completed. A third and final round was postponed and re-scheduled for July 1998. Many players, including Woods, chose not to play in this final round. Because of the unusual nature of this event, and the fact that Woods chose not to complete the tournament, we have not included it in our analysis in Table 1. } For example, in the 1998 Mercedes Championships, the first tournament listed in Table 1, the winning score was 271 (column 1) and Woods' score was 272 (column 2). Woods' total residual score was $-3.87$ strokes (column 4), so Woods played 3.87 strokes better than predicted. If we subtract the $-3.87$ stroke total residual from Woods' total score, we obtain a total expected score of $272 - (-3.87) = 275.87$ strokes (column 3). Thus, if Woods had played to his norm over the four rounds of the Mercedes Championships, he would have shot 275.87. (We ignore the fact that golf is scored in integers and cannot involve fractional amounts.) Columns 5 and 6 indicate that Woods finished in a two-way tie for second in the tournament. Column 7, the key column in this analysis, indicates that if Woods' total score had been 275.87 as predicted, he would have placed sixth overall. The average of values in column 7 indicate that if Woods had played as predicted in 1998, his average finish in the 19 stroke-play PGA Tour events in which he participated would have been 10.94, or 11th place. In 1998, his best finish playing average would have been fourth. A slightly improved pattern emerges in 1999, where he would have finished in sixth place on average by playing ``normal,'' and he could have won one tournament, which he did, by playing ``normal'' and finished among the top three in six more. There is one striking outlier among the 1999 PGA Tour events in which Woods participated - the ATT Pebble Beach National Pro Am - in which he would have placed 37th by playing ``normal.'' Otherwise, his worst finish in 1999 would have been seventh place. The reason for this outlier is that the course rotation to which Woods was assigned played 5.44 strokes more difficult than the easiest rotation. As shown in Table 2, only one player among the top 14 finishers in the tournament played a difficult course rotation (as indicated by a non-negative total round-course effect). They were all blessed with a rotation that played approximately 5 strokes less difficult than Woods'. By contrast, 10 players among the bottom 16 played one of the difficult rotation assignments as indicated by a total round-course effect greater than three strokes. Note that no one among the top 14 finished the tournament with a positive total residual score, and no one among the bottom 16 finished with a negative total residual. This is consistent with CR, where we show that almost all players who end up among the leaders in a PGA Tour event finish with negative total residual scores. Year 2000, generally regarded as Woods' best on the PGA Tour, is the year in which Woods won the final three majors of the year en route to his eventual ``Tiger Slam.'' Table 1 indicates that Woods could have won three tournaments in 2000 by playing at his (time-dependent) norm and that his average finish would have been between second and third place if he had played ``normal.'' Although Woods began year 2001 playing well, including winning the 2001 Masters to complete his ``Tiger Slam,'' his performance deteriorated in the second half of 2001 after making well-publicized changes to his swing. On average, he would have finished between sixth and seventh place in 2001 by playing ``normal'' but would have done much better in the first half of the year than the second. We now contrast this performance to that of David Duval. Duval was ranked third in the Official World Golf Rankings at the end of 1998, 2000 and 2001 and was ranked second at the end of 1999. No other golfer was consistently ranked second when Duval was ranked third. Moreover, if we rank players on the basis of their average spline-predicted score over our 1998-2001 sample period, Duval ranks second to Woods at 69.15 strokes vs. Woods' 68.18. Over our sample period, Duval missed six cuts; Woods missed none. Although we provide no separate table for Duval, in the tournaments for which Duval made the cut, his average finish playing ``normal'' would have been position 9.32, 13.22, 13.25 and 17.53 in years 1999 to 2001, respectively. Similarly, Phil Mickelson's average finish would have been position 30.81, 24.56, 14.50 and 9.95 by playing ``normal'' in the tournaments in which he made the cut. (He actually missed 15 cuts during this period). These results are in stark contrast to those of Woods, who could have won a handful of tournaments by playing ``normal'' and many more with just a little bit of luck. \section{Psychological Pressure of Competing Against Tiger Woods} Over the years, we have heard many radio and television commentators claim that those who are paired with Tiger Woods tend to perform poorly, presumably due to the psychological pressure that Woods places on his fellow competitors. Moreover, it is generally acknowledged that those who are paired with Woods in the final group on the final day of a PGA Tour event tend to do poorly relative to expectations (i.e., some would say that these players choke).\footnote{In PGA Tour events, the top two players at the end of the next-to-last day of play, generally the third round, are paired together in the last group in the fourth and final round.} In a recent paper, Brown (2008) posits that when competitors in tournaments or other settings such as the general workforce must compete against ``superstars,'' their productivity may be diminished because they feel there is no chance that they can compete successfully against the superstars. She tests her hypothesis by examining the differential performance of PGA Tour players in tournaments in which Woods, the superstar, competes and does not compete over the period 1999 to 2006. She concludes that there is a significant diminution of performance among regular PGA Tour players when Woods is also participating in the tournament. Tables 3 and 4 summarize all of our tests involving Woods' impact on the play of others in the field, including tests that address Brown's hypothesis directly. In all tests, we regress $\eta$ residuals against one or more dummy variables involving the interaction of tournament participants with Woods. We regress $\eta$ residuals rather than $\theta$ residuals against dummies to more accurately isolate the pure impact of players' interactions with Tiger. Otherwise, a portion of a player's abnormal performance in a given round would be attributable to the carryover of abnormal performance in previous rounds and could contaminate the effects we are trying to estimate. In our first test, summarized in Table 3, we regress $\eta$ residuals against a dummy variable indicating whether Woods is participating in the tournament for which the $\eta$ residual is estimated.\footnote{As reported in CR, the original model took 40 hours to estimate on a Windows XP-based PC with 1 GB RAM using a 2.80-GHz Intel Xenon processor. Therefore, we do not re-estimate the CR model using the additional dummy variables in test 1 nor in any subsequent tests. Moreover, we do not test for the significance of the dummy variable coefficients in the context of a re-estimated model, which would require the use of the bootstrap, the method of significance testing used in CR. In CR, it took over five days to produce 40 bootstrap samples and, therefore, it would be impractical to employ a re-estimated model with accompanying bootstrap tests for the various estimation specifications employed in this study.} This is a direct test of Brown's hypothesis, but we use a different dataset and an entirely different statistical methodology for estimating abnormal performance. The coefficient on the dummy variable that indicates whether Woods is in the field is 0.051 strokes per round and has a p-value of 0.0211. Thus, ignoring other factors involving Woods' interaction with other participants during tournament play, we find a statistically significant effect associated with having Woods in the field. In general, scores of players other than Woods are 0.051 strokes higher (worse) in tournaments in which Woods participates. Although statistically significant, this finding has little practical significance, since over the four rounds of play in a typical PGA Tour event, the total effect of having Woods in the field would be $0.051 \times 4 = 0.24$ strokes, an insufficient amount to change the final total score of any player whose score is 0.051 strokes per round higher due to Woods' presence in the tournament. By contrast, Brown finds that when Woods is in the field, the performance of regular PGA Tour players deteriorates by 0.2 strokes per round. In our second test, we separate players into two groups for tournaments in which Woods is in the field. The first group includes those who are playing with Woods, and the second group includes those with whom Woods is not playing. In these regressions of $\eta$ residuals, the coefficient associated with playing with Woods is 0.478, and with a p-value of 0.0004, is statistically significant. The coefficient associated with not being paired with Woods when he is in the field is 0.043, with a p-value of 0.0505. Thus, it appears that Woods' adverse impact on the play of others is real and statistically significant for those with whom he is paired but very small for those with whom is is not paired. In test 3, we regress $\eta$ residuals against a dummy variable indicating whether the player associated with the $\eta$ residual score is actually playing with Woods in the same group. This is a slightly different test than test 2, since all other players are treated the same, even if Tiger is not playing in the tournament. The estimated coefficient is 0.462 with a p-value of 0.0005. Consistent with test 2, being paired with Woods, rather than just having Tiger in the field, appears to have an adverse impact on player scores. In test 4, we examine the effects of playing with Woods on a round-by-round basis. As is evident from the total winning scores shown in Table 1, all but two of the tournaments in which Woods participated in the 1998-2001 period involved four rounds of play, with the exceptions being the 1998 Buick Invitational and the 1999 ATT Pebble Beach National Pro Am, which were cut short due to adverse weather conditions. Test 4 shows positive coefficients for each round in which players are paired with Woods, but the coefficients are statistically significant only for the first and fourth rounds, where the estimated coefficients are 0.611 and 0.857, respectively. It should be noted that in almost all PGA Tour events, players who are paired together in the first round are also paired together in the second. If we were forced to tell a story based on the results of test 4, the coefficient estimates suggest that players paired with Woods in the first round of a tournament may be nervous and intimidated, but by the second day they settle down and play more to their norm. The coefficient of 0.857 for round 4 suggests that players succumb to more pressure playing with Woods on the final day of a tournament when the ``money is on the line.'' In test 5 we address the question of whether the apparent adverse effect of playing with Woods in round 4 is simply the effect of playing in the final round rather than the effect of playing with Woods. In this test we separate scores from the final scheduled round of a tournament into two groups.\footnote{In all but the Bob Hope Chrysler Classic and the Las Vegas Invitational, which are 5-round tournaments, the final scheduled round is round 4.} The first group includes scores of players who are playing in the final scheduled round with Woods, and the second group includes those playing in the final scheduled round but not with Woods. The coefficient associated with playing in the final round without Woods is not significantly different from zero, while that associated with playing with Woods in the final scheduled round is 0.858 with a p-value of 0.0030. Thus, there does appear to be an adverse effect associated with playing with Woods in the final round separate from any overall final round effect. To further test the hypothesis that players may succumb to more pressure playing with Woods on the final day of a tournament when the ``money is on the line,'' we separate players paired with Woods in round 4 into two groups. The first are players who are paired with Woods when he is ``in contention,'' with ``in contention'' defined in various ways, below. The second are players paired with Woods when he is not in contention. Our first definition of ``in contention'' is Woods playing in the final group in a final scheduled tournament round. In all PGA Tour stroke-play events, after the cut, the order of play in subsequent rounds is determined by a player's position at the end of the previous round. After the cut, those who lead a tournament at the end of round $t$ are paired in the final group and tee off last on day $t + 1$. Thus, if a player is playing in the last group with Woods on the final day of a tournament, the two must be among the top two players in the field entering the final round and, obviously, with the exception of others who may be tied with Woods and his playing partner(s), are in the best position to win the tournament.\footnote{In some tournaments, where there is a risk that play will not be completed before dark, three players per group may compete in the final round.} One might think that if there was ever a time of intimidation associated with playing with Woods, it would be when one is paired with Woods on the final day in a strong position to win a tournament. However, the data don't bear this out. The initial specification of test 6 shows that those who play with Woods on the final day of a tournament when Woods is \emph{not} in the final group shoot 1.122 strokes worse than normal (p-value = 0.0019). By contrast, those who are paired with Woods in the final round when he \emph{is} playing in the final group shoot only 0.389 strokes worse than normal, which, with a p-value of 0.4184, is not statistically different from zero. We note that there are only 31 instances of players being paired with Woods in the final group during the 1998-2001 period, and, therefore, we may not have a powerful test. We also define Woods being in contention as Tiger being within ten, eight, six, four, two or zero strokes of the lead going into the final round. As shown in the second through fifth specifications of test 6, if being in contention is defined as Woods being within four to ten strokes of the lead, those who are paired with Tiger in the final round when he is in contention score 0.80 to 0.88 strokes worse than normal, with p-values ranging from 0.0087 to 0.0297. Note that this effect is essentially the same as the 0.857 adverse strokes per round associated with simply playing with Tiger in the final round (test 4). In essence, since Woods is almost always within four to ten strokes of the lead going into the final round of a tournament, defining Woods ``being in contention'' in this fashion is hardly more than identifying that Tiger is playing in the final round. The final two specifications of test 6 show that when ``being in contention'' is defined as Woods being within two strokes of the lead or tied for the lead or better, the adverse affect associated with playing with Tiger in the final round falls to 0.690 and 0.329 strokes, respectively, but neither estimate is statistically significant. When these results are paired alongside those where ``being in contention'' is defined as Woods playing in the final group, contrary to conventional wisdom, we find no statistically significant adverse affect associated with being paired with Tiger in the final round at times when Tiger is truly in a position to win. Perhaps the best explantation for this misperception is that Woods is so much more skilled than those with whom he might be paired in the final round of a tournament that when his playing partners play close to their norm, compared to Woods they appear to be playing poorly.\footnote{It is important to note that in a typical PGA Tour event, player pairings for the first two rounds are determined prior to the start of play and do not reflect tournament performance. However, after the cut (typically after round 2), player pairings are based on cumulative performance in the tournament's previous rounds. Generally, those who are paired together have recorded the same score, or very close to the same score, going into the round for which they are paired. Since Tiger Woods is so highly skilled, if a player is paired with Woods in the third or fourth rounds, he probably played with an exceptionally degree of good luck during the tournament's previous rounds. Thus, on average, those who are paired with Woods in the third or fourth rounds of a tournament should have had a negative cumulative tournament residual score prior to being paired with Woods. Since $\theta$ residual scores must sum to zero, and $\eta$ residuals sum close to zero (on average, over all 64,364 observations, the average $\eta$ is 0.00006), an expected negative cumulative tournament $\eta$ residual, conditional on being paired with Woods in the third or fourth round, implies a positive expected residual in subsequent rounds. This is not because players who are paired with Woods tend to choke, but because $\eta$ residuals must sum (close) to zero. In a second set of tests whose results we do not report, we adjust residual scores for rounds in which players are paired with Woods by an amount that reflects the player's performance in the previous rounds of the same tournament. For example, consider a player who records a total of 102 scores in our four-year sample period. Assume this player is paired with Woods in round 3 of a particular tournament. During rounds 1 and 2 of the same tournament, the player's $\eta$ residual scores are $-2$ and $-3$, respectively. Since the sum of the player's $\eta$ residual scores must be close to zero, we would expect the remaining 100 $\eta$ residual scores for the same player to sum to 5, or 0.05 strokes per round. Therefore, in this instance, we would subtract 0.05 strokes from the player's actual third-round $\eta$ residual when computing the effect of being paired with Woods in the third round, and we would make similar adjustments to $\eta$ residual when running regressions involving pairings with Woods in the fourth round. As it turns out, these adjustments make little if any difference in our test results. Consistent with our assumption that players who are paired with Woods should have recorded negative $\eta$ residual scores on average in a tournament's previous rounds, all coefficients associated with being paired with Woods are lower than those reported in Table 3. However, at most, the coefficients are reduced by 0.03 strokes, and all that are statistically significant prior to the adjustment remain so after the adjustment.} It should be noted that when a player is paired with Woods in the final round of a tournament and Woods is within `X' strokes of the lead, the player paired with Woods is probably also within `X' strokes of the lead or very close. Therefore, it is possible that any adverse effect on a player's score when paired with Woods in the final round has nothing to do with Woods but, rather, reflects the effect of being within `X' strokes of the lead. To test this hypothesis, we ran a series of six regressions of $\eta$ residuals in final scheduled tournament rounds against dummy variables reflecting whether a player (not Woods) is or is not in contention and whether he is paired with Woods.\footnote{Some tournaments are cut short due to adverse weather conditions. In most cases, a decision is made to end the tournament after the next-to-last scheduled round is played. Since our intent is to estimate the effect of being in contention, if a tournament is cut short, a player who turned out to be in contention going into the final round most likely did not know it at the time. Therefore, in these tests, we only consider final round scoring in tournaments played the full number of originally scheduled rounds.} The six regressions vary according to the specification of being in contention, with ``in contention'' defined as being within ten, eight, six, four, two or zero strokes of the lead in tests 7-12, respectively. Coefficient estimates and associated p-values are summarized in Table 4. In all tests, the coefficient associated with being paired with Tiger Woods is statistically significant and of the same order of magnitude as in test 5 (summarized in Table 3). In tests 8-10, where being in contention is defined as being within four to eight strokes of the lead, the ``in contention'' coefficient is also significant and on an order of magnitude of 0.104 to 0.171 strokes per round. In tests 5 and 6, where ``in contention'' is defined as being within two stokes of the lead or actually being in the lead going into the final scheduled round, the estimated ``in contention'' coefficients are slightly higher but insignificant, most likley due to insufficient observations. Overall, the results suggest that those who are in contention going into the final scheduled round of a tournament with some possibility of winning play a little worse than normal. This might be due to nervousness or a tendency to take more risks, which could lead to higher scores. Overall, our results contrast with those of Guryan, Kroft and Notowidigdo (2008) who study the effect of pairings in PGA Tour events during the 2002, 2005 and 2006 seasons. Examining all parings, not just those involving Tiger Woods, these authors find no evidence that the ability of playing partners affects the performance of professional golfers. For pairings involving Woods, they find that players perform 0.354 strokes per round \emph{better} than average, but this amount is not statistically significant. Guryan, Kroft and Notowidigdo use a statistical method for estimating abnormal performance entirely different from ours and estimate the effects of player pairings over a different period of time. Thus, our findings and theirs may not be directly comparable. \section{How Poorly did Phil Mickelson Perform in ``Majors''} Until Phil Mickelson won the 2004 Masters, he was accused, almost unmercifully, of choking in the four major golf championships.\footnote{A Google search on the strings "mickelson" and "choke" yields 23,000 references.} Despite his stellar record in other PGA Tour events, his lack of success in majors would have defined him as a good player, but not a great player, if he had never won a major championship in golf. At the same time, it was generally understood that Tiger Woods stepped up his game in major championships, winning a total of five out of a possible 16 over the 1998-2001 period. Table 5 summarizes Phil Mickelson's performance in the 16 major golf championships over our four-year sample period. Mickelson missed the cut in the 1999 British Open but, otherwise, made the cut in the remaining 15 events. Column 1 shows the winning score for each of the 16 tournaments, and column 2 shows Mickelson's score. Phil's total four-round $\theta$ residual score is shown in column 4. (Here, we focus on $\theta$ residuals, rather than $\eta$ residuals, because we are concerned about the extent to which Mickelson played better or worse than normal, regardless of whether his abnormal performance resulted from a carryover of abnormal performance from previous rounds.) Subtracting this total residual from his actual score gives his expected score in column 3. For example, in the 1998 Masters, Mickelson's actual total score was 286, and his $\theta$ residual score was $-4.66$. Thus, Mickelson played a total of 4.66 strokes better than normal over the four rounds of play. If he had played ``normal'' his four-round total would have been $286 - \left( { - 4.66} \right) = 290.66$, the value shown as his expected score in column 3. Note that in two of the 16 tournaments, Mickelson's total residual score was very low, $-12.31$ strokes in the 1999 U.S. Open and $-9.49$ strokes in the 2001 PGA Championship. As we demonstrate in CR, the average total $\theta$ residual winning score per tournament over the 181 tournaments in our sample was $-9.64$ strokes. Thus, Mickelson, being more highly skilled than the typical winner of a PGA Tour event, could have won most tournaments in which he played if his total four-round residual score had been $-9.49$ strokes or better, assuming, of course, that the scores of all other tournament participants would have remained the same.. We now address the question of how many majors Mickelson could have won by performing at the level associated with his $-12.31$ four-round total residual score in the 1999 U.S. Open. For example, Mickelson's expected score in the 1998 Masters was 290.66. If his residual score in the 1998 Masters had been $-12.31$ as in the 1999 U.S. Open, Mickelson's four-round total score would have been $290.66 - 12.31 = 278.35$. This is lower than the actual winning score of 279, so Mickelson would have won the 1998 Masters by playing at the same level relative to his norm as he did in the 1999 U.S. Open (ignoring that golf is not played with fractional scores and that his higher level of play would have had no impact on the play of others in the tournament). Stated differently, the degree of luck, or temporary abnormal performance, that Mickelson experienced in the 1999 U.S. Open would have been sufficient to have enabled him to win the 1998 Masters. Applying this same logic to all 15 majors for which Mickelson made the cut, we see that Phil's abnormal performance in the 1999 U.S. Open, if applied to the other majors, would have enabled him to win 11 of 16 times. Unfortunately for Phil, Payne Stewart's total $\theta$ residual score of $-14.301$ strokes in the 1999 U.S. Open (not shown in Table 5), was a sufficient departure from his norm to have enabled Stewart to place one stroke ahead of Phil. Did Mickelson choke? Certainly not, but, apparently, he did not have quite as much good luck as Payne Stewart. If we apply Mickelson's total residual score of $-9.49$ strokes in the 2001 PGA Championship to all 15 tournaments in which he made the cut, we see that he played sufficiently well to have won three. Although not as compelling as his performance in the 1999 U.S. Open, the results summarized in Table 5 suggest that Phil played well enough during the 1998 to 2001 period to have won one or two major golf championships. Table 6 shows the average $\theta$ residual score in the 16 majors for both Woods and Mickelson on a round-by-round basis. Note that both tended to play better than their norms in the first two rounds. Woods performed an average of 0.632 and 0.610 strokes better than normal in the first two rounds of majors compared with Mickelson's 0.903 and 1.421-stroke superior-than-normal performance. Thus, in the first two rounds, Mickelson actually played better than Woods, relative to his norm. The story changes somewhat in the third and fourth rounds. Woods tended to play closer to his norm in the final two rounds, and Mickelson played approximately one-half stroke worse than his norm. Perhaps those who accused Mickelson of choking in the final two rounds of major championships did not realize that he had generally played exceptionally well in the first two rounds but had gotten back to (roughly) normal in the second two. Playing one-half stroke per round worse than normal should not be considered choking. Overall, Woods played 0.382 strokes better than normal in the 16 majors, and Mickelson's performance averaged 0.317 strokes better than his norm. To ensure that these averages are not dominated by outliers, we calculated the proportion of rounds in majors that both players played better than ``normal.'' Woods recorded a negative $\theta$ residual in 35 or 64 total rounds in majors, or 54.7\% of his rounds, while Mickelson played better than ``normal'' in 37 of 62 rounds, or 59.7\% overall. Thus, the general consensus that Woods stepped up his game in majors and Mickelson choked is hardly fair, at least from 1998 to 2001. Compared with Mickelson, Woods' success in majors lay mainly from his superior skill level. Over our entire four-year sample period, the average value of Woods' spline-based estimated skill was 68.18 strokes per round. The same average for the next-best-player, David Duval, was 69.15, almost a full stroke difference. Mickelson's average was 69.51. Therefore, Woods had an average $69.51 - 68.18 = 1.33$ per round stroke advantage over Mickelson based on skill alone, or $1.33 \times 4 = 5.32$ strokes over a full four-round tournament. This is a lot of ground to have to make up in golf. So it is not surprising that by playing in majors roughly at the same skill levels relative to their norms, Woods won five majors and Mickelson won none. It is unfortunate, however, that Mickelson was characterized as an incomplete player who could not handle the pressure of golf's major championships. \section{Summary and Conclusions} Using the model in Connolly and Rendleman (2008), we demonstrate that by playing ``normal,'' Tiger Woods could have won some tournaments and placed no worse than fourth in the tournaments in which he participated in year 2000, his best on the PGA Tour. More generally, his average finishing position would have been 11th, 6th, 2nd, and 7th in years 1998-2001, respectively, if he had played ``normal.'' No other PGA Tour player in our sample came close to such a feat. We also quantify the intimidation factor associated with playing with Woods. On average, players who were paired with Woods during the 1998-2001 period scored 0.462 strokes per round worse than normal. Although we find that Wood's presence in a tournament may have had a small, but statisically significant adverse impact on the entire field, this effect was swamped by the apparent intimidation factor associated with having to play with Woods side-by-side. However, contrary to popular belief, the adverse effect associated with being paired with Woods was the smallest when Woods and his playing partners were in contention to win. It is also commonly held that Phil Mickelson performed poorly in majors prior to winning the Masters in 2004. However, our data suggest the opposite. Although Mickelson won no majors during our 1998-2001 sample period, he played sufficiently well to have won one or two majors under normal circumstances. Moreover, his overall performance in majors, relative to his estimated skill level, was comparable to that of Tiger Woods, the winner of five major golf championships from 1998 to 2001. Thus, the general characterization of Woods as golf's dominant player over the 1998-2001 period was accurate, but the frequent characterization of Phil Mickelson performing poorly in majors and choking was not. We believe that the methods used in the analysis here should lend themselves favorably to modeling performance in a number of other sports. The list of sports where athletes compete against one another indirectly is substantial: skiing, track and field, bowling, diving, equestrian, figure skating, gymnastics, rowing, shooting, swimming, weightlifting, and yachting. In each case, the athletes do not have to contend with direct play of competitors as in basketball, volleyball, or tennis, but compete against a course and other athletes indirectly. In some settings, the importance of random effects may be very small. In diving, for example, the board is the same height for everyone. Unlike golf, there is no variation in the athletic environment. In skiing, however, athletes compete on multiple mountains over a season, and weather and course conditions may vary over the term of a competition. This suggests that controls for these effects might be important. Besides measuring the relative importance of skill and luck in athletic performance, the methods we used might also be used to construct athlete rankings. Using a proper model, the spline fit at a point in time is a measure of athletic performance that accounts for multiple factors affecting measured outcomes. We believe it would prove to be an interesting exercise to compare rankings generated by proper statistical models of athletic performance to those commonly used. \clearpage \pagestyle{empty} \singlespacing \begin{center} \large{\textbf{References}} \end{center} \begin{small} \noindent Brown, J., 2008, ``Quitters Never Win: The (Adverse) Incentive Effects of Competing with Superstars,'' working paper, Department of Agricultural and Resource Economics, University of California at Berkeley (April).\vspace{0.12in} \noindent Connolly, R. A. and R. J. Rendleman, Jr., 2008, ``Skill, Luck and Streaky Play on the PGA Tour,'' {\em The Journal of The American Statistical Association} 103, 74-88.\vspace{0.12in} \noindent Gladwell, B., 2008, ``How to Intimidate Golfers Like Tiger Woods,'' {\em Ezine Articles},\\ URL: http:\/\/ezinearticles.com\/?How-to-Intimidate-Golfers-Like-Tiger-Woods\&id=1167718 (May 10).\vspace{0.12in} \noindent \noindent Golf Today, 2000, ``Tiger Woods Geared up for Masters Challenge,'' {\em golftoday.co.uk},\\ URL: http:\/\/www.golftoday.co.uk\/tours\/2000\/masters\/preview11.html.\vspace{0.12in} \noindent Guryan,J, K. Kroft and M. Notowidigdo, 2008, ``Peer Effects in the Workplace: Evidence from Random Groupings in Professional Golf Tournaments,'' NBER working paper.\vspace{0.12in} \noindent Ljung, G. M. and Box, G. E. P., 1978, ``On a Measure of Lack of Fit in Time Series Models,'' {\em Biometrika} 65, 297 - 303.\vspace{0.12in} \noindent Smith, M., 2001, ``Mickelson Seeks Elusive Major,'' {\em CBCsports.ca},\\ URL: http:\/\/www.cbc.ca\/sports\/story\/2001\/08\/15\/mickelson010815.html (August 15).\vspace{0.12in} \noindent Wang, Y., 1998, ``Smoothing Spline Models with Correlated Random Errors,'' {\em The Journal of The American Statistical Association} 93, 341-348.\vspace{0.12in} \end{small} \clearpage \begin{figure}[loc=h] \centerline{\includegraphics[width=6in]{Figure_1.pdf}} \caption{Spline-based estimates of mean skill levels.} \end{figure} \vspace{25mm} Plots show 18-hole scores reduced by random round-course and player-course effects along with corresponding spline fits (smooth lines). Scaled golf time for each player represents the chronological sequence of rounds for the player scaled to the \{0, 1\} interval. \clearpage \begin{table}[loc=h] \caption{How Tiger Woods would have Placed in PGA Tour Stroke-play Events by Playing ``Normal''} \label{id} \begin{center} \begin{tabular}{lrrrrrrr} \hline & & & Woods' & Woods' & & Players & Woods' \\ & Winning & Woods' & expected & residual & Woods' & tied with & place \\ & score & score & score & score & place & Woods & if ``normal'' \\ & (1) & (2) & (3) & (4) & (5) & (6) & (7) \\ \hline 98 Mercedes & 271 & 272 & 275.87 & -3.87 & 2 & 2 & 6 \\ 98 Buick Inv & 204 & 205 & 208.09 & -3.09 & 3 & 3 & 21 \\ 98 Nissan & 272 & 272 & 279.75 & -7.75 & 1 & 2 & 15 \\ 98 Doral & 278 & 283 & 283.71 & -0.71 & 9 & 6 & 15 \\ 98 Bay Hill & 274 & 284 & 283.77 & 0.23 & 13 & 4 & 13 \\ 98 Players Champ & 278 & 290 & 286.17 & 3.83 & 35 & 7 & 18 \\ 98 Masters & 279 & 285 & 285.86 & -0.86 & 7 & 4 & 11 \\ 98 BellSouth & 271 & 271 & 277.87 & -6.87 & 1 & 1 & 14 \\ 98 Byron Nelson & 265 & 272 & 271.26 & 0.74 & 12 & 3 & 12 \\ 98 Memorial & 271 & 288 & 279.21 & 8.79 & 51 & 6 & 11 \\ 98 U.S. Open & 280 & 290 & 286.85 & 3.15 & 17 & 5 & 7 \\ 98 Western & 271 & 281 & 280.87 & 0.13 & 9 & 8 & 9 \\ 98 British & 280 & 281 & 288.42 & -7.42 & 3 & 1 & 10 \\ 98 Buick Open & 271 & 275 & 274.14 & 0.86 & 4 & 2 & 4 \\ 98 PGA Champ & 271 & 279 & 279.92 & -0.92 & 10 & 3 & 13 \\ 98 NEC & 269 & 275 & 276.23 & -1.23 & 5 & 2 & 9 \\ 98 Disney & 272 & 277 & 275.55 & 1.45 & 7 & 4 & 4 \\ 98 Tour Champ & 274 & 289 & 279.14 & 9.86 & 20 & 1 & 5 \\ 99 Mercedes & 266 & 277 & 277.38 & -0.38 & 5 & 3 & 8 \\ & & & & & & & \\ & & & & & & Average & 10.94 \\ & & & & & & & \\ 99 Phoenix & 273 & 276 & 281.19 & -5.19 & 3 & 1 & 6 \\ 99 ATT & 206 & 219 & 216.01 & 2.99 & 51 & 13 & 37 \\ 99 Buick Inv & 266 & 266 & 272.57 & -6.57 & 1 & 1 & 4 \\ 99 Nissan & 270 & 272 & 273.49 & -1.49 & 2 & 3 & 7 \\ 99 Bay Hill & 274 & 290 & 278.54 & 11.46 & 55 & 4 & 5 \\ 99 Players Champ & 285 & 291 & 288.86 & 2.14 & 10 & 7 & 4 \\ 99 Masters & 280 & 289 & 284.39 & 4.61 & 17 & 5 & 6 \\ 99 MCI & 274 & 280 & 274.36 & 5.64 & 17 & 9 & 4 \\ 99 Byron Nelson & 262 & 271 & 267.58 & 3.42 & 7 & 2 & 3 \\ 99 Memorial & 273 & 273 & 278.05 & -5.05 & 1 & 1 & 3 \\ 99 U.S. Open & 279 & 281 & 285.33 & -4.33 & 3 & 2 & 6 \\ 99 Western & 273 & 273 & 276.59 & -3.59 & 1 & 1 & 3 \\ 99 British & 290 & 294 & 291.49 & 2.51 & 5 & 3 & 4 \\ 99 PGA Champ & 277 & 277 & 280.89 & -3.89 & 1 & 1 & 5 \\ 99 NEC & 270 & 270 & 272.57 & -2.57 & 1 & 1 & 3 \\ 99 Disney & 271 & 271 & 270.87 & 0.13 & 1 & 1 & 1 \\ 99 Tour Champ & 269 & 269 & 269.39 & -0.39 & 1 & 1 & 2 \\ 99 Am Express & 278 & 278 & 279.32 & -1.32 & 1 & 2 & 3 \\ & & & & & & & \\ & & & & & & Average & 6.00 \\ \end{tabular} \end{center} \end{table} \clearpage \begin{center} \begin{tabular}{lrrrrrrr} \multicolumn{ 1}{r}{} & & & Woods' & Woods' & & Players & Woods' \\ \multicolumn{ 1}{r}{} & Winning & Woods' & expected & residual & Woods' & tied with & place \\ \multicolumn{ 1}{r}{} & score & score & score & score & place & Woods & if ``normal'' \\ & (1) & (2) & (3) & (4) & (5) & (6) & (7) \\ \hline 00 Mercedes & 276 & 276 & 279.54 & -3.54 & 1 & 2 & 3 \\ 00 ATT & 273 & 273 & 273.42 & -0.42 & 1 & 1 & 2 \\ 00 Buick Inv & 270 & 274 & 271.68 & 2.32 & 2 & 2 & 2 \\ 00 Nissan & 272 & 279 & 270.24 & 8.76 & 18 & 7 & 1 \\ 00 Bay Hill & 270 & 270 & 274.29 & -4.29 & 1 & 1 & 3 \\ 00 Players Champ & 278 & 279 & 282.29 & -3.29 & 2 & 1 & 3 \\ 00 Masters & 278 & 284 & 280.77 & 3.23 & 5 & 1 & 2 \\ 00 Byron Nelson & 269 & 270 & 269.33 & 0.67 & 4 & 2 & 4 \\ 00 Memorial & 269 & 269 & 274.25 & -5.25 & 1 & 1 & 4 \\ 00 U.S. Open & 272 & 272 & 285.88 & -13.88 & 1 & 1 & 2 \\ 00 Western & 274 & 281 & 270.71 & 10.29 & 23 & 5 & 1 \\ 00 British & 269 & 269 & 273.67 & -4.67 & 1 & 1 & 2 \\ 00 Buick Open & 268 & 275 & 267.46 & 7.54 & 11 & 5 & 1 \\ 00 PGA Champ & 270 & 270 & 274.46 & -4.46 & 1 & 2 & 3 \\ 00 NEC & 259 & 259 & 267.76 & -8.76 & 1 & 1 & 2 \\ 00 Canadian Open & 266 & 266 & 268.64 & -2.64 & 1 & 1 & 3 \\ 00 Disney & 262 & 265 & 264.73 & 0.27 & 3 & 1 & 3 \\ 00 Tour Champ & 267 & 269 & 269.69 & -0.69 & 2 & 1 & 3 \\ 00 Am Express & 277 & 281 & 277.05 & 3.95 & 5 & 3 & 2 \\ & & & & & & & \\ & & & & & & Average & 2.42 \\ & & & & & & & \\ 01 Mercedes & 274 & 280 & 273.21 & 6.79 & 8 & 4 & 1 \\ 01 Phoenix & 256 & 271 & 269.21 & 1.79 & 5 & 2 & 4 \\ 01 ATT & 272 & 280 & 273.60 & 6.40 & 13 & 7 & 3 \\ 01 Buick Inv & 269 & 271 & 269.28 & 1.72 & 4 & 1 & 4 \\ 01 Nissan & 276 & 279 & 272.35 & 6.65 & 11 & 7 & 1 \\ 01 Bay Hill & 273 & 273 & 277.68 & -4.68 & 1 & 1 & 3 \\ 01 Players Champ & 274 & 274 & 282.62 & -8.62 & 1 & 1 & 10 \\ 01 Masters & 272 & 272 & 277.85 & -5.85 & 1 & 1 & 4 \\ 01 Byron Nelson & 263 & 266 & 264.84 & 1.16 & 3 & 3 & 3 \\ 01 Memorial & 271 & 271 & 280.59 & -9.59 & 1 & 1 & 6 \\ 01 U.S. Open & 276 & 283 & 281.94 & 1.06 & 10 & 3 & 6 \\ 01 Westchester & 268 & 280 & 275.79 & 4.21 & 16 & 3 & 10 \\ 01 Western & 267 & 280 & 275.66 & 4.34 & 20 & 11 & 5 \\ 01 British & 274 & 283 & 280.42 & 2.58 & 16 & 3 & 14 \\ 01 PGA Champ & 265 & 279 & 275.74 & 3.26 & 26 & 7 & 12 \\ 01 NEC & 268 & 268 & 275.14 & -7.14 & 1 & 2 & 10 \\ 01 Canadian Open & 266 & 276 & 271.42 & 4.58 & 21 & 7 & 7 \\ 01 Disney & 266 & 272 & 270.61 & 1.39 & 14 & 5 & 11 \\ 01 Tour Champ & 270 & 276 & 274.96 & 1.04 & 13 & 2 & 10 \\ & & & & & & & \\ & & & & & & Average & 6.53 \\ \end{tabular} \end{center} \clearpage \begin{table}[loc=h] \caption{Actual Scores and Residual Scores for Selected Players in 1999 ATT Pebble Beach National Pro Am, Played Over a Total of Three Rounds} \label{id} \begin{center} \begin{tabular}{lrrrrr} \hline & & & Total & Total & \\ & & & round- & player- & \\ & & Total $\theta$ & course & course & Total \\ Player & Score & residual & effect & effect & residual \\ \hline \textbf{Top 14 }& \\ Payne Stewart & 206 & -8.991 & -1.859 & -1.859 & -12.708 \\ Frank Lickliter & 207 & -10.796 & -1.859 & 0.021 & -12.634 \\ Craig Stadler & 209 & -8.962 & -1.336 & -0.039 & -10.336 \\ Fred Couples & 210 & -5.585 & -1.336 & -0.021 & -6.941 \\ Jay Williamson & 210 & -10.808 & -1.859 & -0.035 & -12.702 \\ Justin Leonard & 210 & -4.531 & -1.859 & -0.007 & -6.397 \\ Ronnie Black & 210 & -10.296 & -2.062 & -0.023 & -12.381 \\ Neal Lancaster & 211 & -9.066 & -1.336 & -0.028 & -10.430 \\ Tommy Tolles & 211 & -10.380 & -1.336 & -0.009 & -11.726 \\ Brett Quigley & 212 & -13.248 & 3.195 & 0.010 & -10.043 \\ Davis Love III & 212 & -1.338 & -1.336 & 0.001 & -2.673 \\ Paul Azinger & 212 & -5.119 & -1.336 & -0.018 & -6.473 \\ Tim Herron & 212 & -5.479 & -1.336 & -0.022 & -6.836 \\ Vijay Singh & 212 & -2.399 & -1.859 & -0.026 & -4.284 \\ & & & & & \\ Tiger Woods & 219 & 2.995 & 3.382 & 0.008 & 6.385 \\ & & & & & \\ \textbf{Bottom 16} & \\ Guy Boros & 231 & 4.740 & -2.062 & 0.005 & 2.683 \\ Omar Uresti & 231 & 5.882 & 3.382 & 0.011 & 9.275 \\ Sandy Lyle & 231 & 4.693 & 3.195 & 0.022 & 7.910 \\ Scott Simpson & 231 & 5.173 & 3.195 & 0.009 & 8.377 \\ Steve Jurgensen & 231 & 0.077 & 3.195 & -0.001 & 3.270 \\ Fulton Allem & 232 & 9.626 & -1.336 & 0.015 & 8.305 \\ Larry Rinker & 232 & 5.058 & 3.195 & 0.016 & 8.269 \\ Brian Henninger & 233 & 8.170 & 3.195 & 0.020 & 11.385 \\ Jim Carter & 233 & 9.331 & 3.195 & 0.023 & 12.549 \\ Joe Durant & 233 & 12.828 & -2.062 & 0.021 & 10.787 \\ Rich Beem & 233 & 5.681 & 3.195 & 0.007 & 8.882 \\ Tommy Armour III & 233 & 13.932 & -1.336 & 0.046 & 12.642 \\ Trevor Dodds & 233 & 6.757 & 3.195 & 0.003 & 9.955 \\ J.P. Hayes & 234 & 14.870 & -1.859 & 0.030 & 13.041 \\ Cameron Beckman & 237 & 12.456 & -1.336 & 0.037 & 11.157 \\ Mark Wiebe & 242 & 17.460 & 3.195 & 0.026 & 20.681 \\ \end{tabular} \end{center} \end{table} \clearpage \begin{table}[loc=h] \caption{Effects of Playing with Tiger Woods} \label{id} \begin{center} \begin{tabular}{clrr} \hline Test & Dummy variable & Coef. & p-value \\ \hline 1 & Woods in the field & 0.051 & 0.0211 \\ & & & \\ 2 & Woods in the field & & \\ & \;\; Playing with Woods & 0.478 & 0.0004 \\ & \;\; Not playing with Woods & 0.043 & 0.0505 \\ & & & \\ 3 & Playing with Woods & 0.462 & 0.0005 \\ & & & \\ 4 & Playing with Woods & & \\ & \;\; Round 1 & 0.611 & 0.0141 \\ & \;\; Round 2 & 0.317 & 0.2006 \\ & \;\; Round 3 & 0.063 & 0.8268 \\ & \;\; Round 4 & 0.857 & 0.0030 \\ & & & \\ 5 & Playing in last scheduled round of tournament & & \\ & \;\; Playing with Woods & 0.858 & 0.0030 \\ & \;\; Not playing with Woods & 0.008 & 0.7790 \\ & & & \\ 6 & Playing with Woods & & \\ & \;\; Round 1 & 0.611 & 0.0141 \\ & \;\; Round 2 & 0.317 & 0.2006 \\ & \;\; Round 3 & 0.063 & 0.8268 \\ & \;\; Round 4 playing with Woods in final group & 0.389 & 0.4184 \\ & \;\; Round 4 playing with Woods but not in final group & 1.122 & 0.0019 \\ & & & \\ & \;\; Round 4 playing with Woods, Tiger within 10 strokes of the lead & 0.796 & 0.0087 \\ & \;\; Round 4 playing with Woods, Tiger not within 10 strokes of the lead & 1.460 & 0.1232 \\ & & & \\ & \;\; Round 4 playing with Woods, Tiger within 8 strokes of the lead & 0.801 & 0.0088 \\ & \;\; Round 4 playing with Woods, Tiger not within 8 strokes of the lead & 1.349 & 0.1308 \\ & & & \\ & \;\; Round 4 playing with Woods, Tiger within 6 strokes of the lead & 0.812 & 0.0118 \\ & \;\; Round 4 playing with Woods, Tiger not within 6 strokes of the lead & 1.044 & 0.1083 \\ & & & \\ & \;\; Round 4 playing with Woods, Tiger within 4 strokes of the lead & 0.878 & 0.0297 \\ & \;\; Round 4 playing with Woods, Tiger not within 4 strokes of the lead & 0.837 & 0.0430 \\ & & & \\ & \;\; Round 4 playing with Woods, Tiger within 2 strokes of the lead & 0.690 & 0.1125 \\ & \;\; Round 4 playing with Woods, Tiger not within 2 strokes of the lead & 0.991 & 0.0104 \\ & & & \\ & \;\; Round 4 playing with Woods, Tiger tied for lead or better & 0.329 & 0.5649 \\ & \;\; Round 4 playing with Woods, Tiger not tied for lead or better & 1.040 & 0.0019 \\ \hline \end{tabular} \end{center} \end{table} \clearpage \begin{table}[loc=h] \caption{Effects of Being in Contention in Final Scheduled Round} \begin{center} \begin{tabular}{clcc} \hline Test & Dummy variable & Coef. & p-value \\ \hline 1 & Player within 10 strokes of lead & 0.042 & 0.2380 \\ & Player not within 10 strokes of lead & -0.036 & 0.3699 \\ & Playing with Woods & 0.822 & 0.0046 \\ & & & \\ 2 & Player within 8 strokes of lead & 0.104 & 0.0160 \\ & Player not within8 strokes of lead & -0.049 & 0.1514 \\ & Playing with Woods & 0.775 & 0.0078 \\ & & & \\ 3 & Player within 6 strokes of lead & 0.159 & 0.0043 \\ & Player not within 6 strokes of lead & -0.034 & 0.2775 \\ & Playing with Woods & 0.743 & 0.0109 \\ & & & \\ 4 & Player within 4 strokes of lead & 0.171 & 0.0297 \\ & Player not within 4 strokes of lead & -0.011 & 0.6997 \\ & Playing with Woods & 0.780 & 0.0075 \\ & & & \\ 5 & Player within 2 strokes of lead & 0.185 & 0.1124 \\ & Player not within 2 strokes of lead & -0.001 & 0.9788 \\ & Playing with Woods & 0.794 & 0.0066 \\ & & & \\ 6 & Player in the lead (or tied) & 0.193 & 0.3115 \\ & Player not in the lead & 0.005 & 0.8675 \\ & Playing with Woods & 0.822 & 0.0048 \\ \hline \end{tabular} \end{center} \end{table} \clearpage \begin{sidewaystable}[loc=h] \caption{Phil Mickelson's Performance in Majors, 1998-2001} \label{id} \begin{center} \begin{tabular}{lrrrrrrlrrl} \hline & & & Mickelson's & Mickelson's & & & & & & \\ & Winning & Mickelson's & expected & residual & & \multicolumn{ 2}{c}{If played like} & & \multicolumn{ 2}{c}{If played like} \\ & score & score & score & score & & \multicolumn{ 2}{c}{1999 U.S. Open} & & \multicolumn{ 2}{c}{2001 PGA} \\ & (1) & (2) & (3) & (4) & & \multicolumn{ 2}{c}{(5)} & & \multicolumn{ 2}{c}{(6)} \\ \cline{1-5} \cline{7-8} \cline{10-11} 98 Masters & 279 & 286 & 290.66 & -4.66 & & 278.35 & Win & & 281.17 & Lose \\ 98 U.S. Open & 280 & 288 & 291.59 & -3.59 & & 279.28 & Win & & 282.10 & Lose \\ 98 British Open & 280 & 308 & 293.38 & 14.62 & & 281.07 & Lose & & 283.89 & Lose \\ 98 PGA & 271 & 285 & 285.04 & -0.04 & & 272.73 & Lose & & 275.55 & Lose \\ & & & & & & & & & & \\ 99 Masters & 280 & 285 & 290.67 & -5.67 & & 278.35 & Win & & 281.17 & Lose \\ 99 U.S. Open & 279 & 280 & 292.31 & -12.31 & & 280.00 & Lose & & 282.82 & Lose \\ 99 British Open & \multicolumn{ 6}{c}{Missed Cut} & Lose & & & Lose \\ 99 PGA & 277 & 295 & 288.48 & 6.52 & & 276.17 & Win & & 278.99 & Lose \\ & & & & & & & & & & \\ 00 Masters & 278 & 286 & 288.67 & -2.67 & & 276.36 & Win & & 279.18 & Lose \\ 00 U.S. Open & 272 & 293 & 293.42 & -0.42 & & 281.11 & Lose & & 283.93 & Lose \\ 00 British Open & 269 & 281 & 280.96 & 0.04 & & 268.65 & Win & & 271.47 & Lose \\ 00 PGA & 270 & 279 & 281.33 & -2.33 & & 269.02 & Win & & 271.84 & Lose \\ & & & & & & & & & & \\ 01 Masters & 272 & 275 & 280.91 & -5.91 & & 268.59 & Win & & 271.41 & Win \\ 01 U.S. Open & 276 & 282 & 283.48 & -1.48 & & 271.17 & Win & & 273.99 & Win \\ 01 British Open & 274 & 285 & 280.66 & 4.34 & & 268.35 & Win & & 271.17 & Win \\ 01 PGA & 265 & 266 & 275.49 & -9.49 & & 263.18 & Win & & 266.00 & Lose \\ \hline \end{tabular} \end{center} \end{sidewaystable} \clearpage \begin{table}[loc=h] \caption{Mean $\theta$ Residual by Round in Majors, 1998-2001} \label{id} \begin{center} \begin{tabular}{lrrrr} \hline & & Woods & & Mickelson \\ \cline{3-3} \cline{5-5} Round 1 & & -0.632 & & -0.903 \\ Round 2 & & -0.610 & & -1.421 \\ Round 3 & & -0.042 & & 0.651 \\ Round 4 & & -0.244 & & 0.519 \\ \cline{3-3} \cline{5-5} & & & & \\ Overall & & -0.382 & & -0.317 \\ & & & & \\ Std error & & 0.297 & & 0.352 \\ \hline \end{tabular} \end{center} \end{table} \end{document}
{ "attr-fineweb-edu": 2.359375, "attr-cc_en_topic": 0, "domain": "arxiv" }
BkiUd605qhLACA4PwCeq
\section{Introduction} The success in every team sport significantly depends on the analysis of the semantics. Most team sports, such as football, basketball, and ice hockey, involve very complex interactions between players. Researchers and data analysts propose various methods for modeling these interactions. For this aim, they need to follow the movements of players and the ball from the video. However, this task is strenuous due to the large speed of players and of the ball in the playfield, and tracking usually fails in the cases of overlaps, poor light conditions, and low quality of the videos. During the past decades, computer vision researchers developed several optical tracking algorithms by analyzing video image pixels and by extracting the features of the objects of interest, such as players and the ball. On this data, the movement, action, intention, and gesture of the players can be analyzed. The most common analysis is performed over player and ball tracking data, also known as trajectory data. The distilled knowledge can help coaches and scouts in several aspects, such as game strategy and tactics, goal analysis, pass and shot prediction, referee decisions, player evaluation, and talent identification. In order to automatize the end-to-end analytics procedure, the tracking methods require visual data (video frames) as the data source and produce tracking data (player and ball trajectories) for further data mining. The proposed methods majorly contribute to effectively evaluate the performance at individual and team levels in team sports. E.g., at the individual level, the characteristic style of a player, while at the team level, the combination of all players' trajectories can be evaluated. The work in this paper is motivated by the following observations. First, researchers in sports analytics are continuously searching for the most accurate, but a cost-effective method for the player and ball tracking. The above-mentioned goals of tracking prove the importance of opting for an accurate method for extracting player and ball trajectories in sports analytics. Second, player and ball tracking are one of the broadest areas for research in sports analytics. In the literature, there are many published works without proper classification. Recently, the automatic feature extraction capability of deep learning in computer vision encourages sports analysts to experiment with neural networks for player and ball tracking tasks. Thus, a wider range of tracking options are available to the researchers and this survey helps them to choose their suitable method depending on the task at hand. Furthermore, understanding all these methods requires deep knowledge of computer vision for quantitative analysts in sports, which is not realistic. Therefore, in this paper, we have the following goals: to provide a robust classification of methods for the two tasks of detection and tracking and to give insights about the applied computer vision techniques of extracting trajectories to the quantitative analysts in sports. Several papers made attempts to present the myriad of state-of-the-art object tracking algorithms. A broad description of object tracking methods was given in \citet{yilmaz2006}, and a more recent one in \citet{RasoolReddy2015}. Moreover, \citet{Dhenuka2018} presented a survey on Multiple Object Tracking (MOT) methods, while a survey for solving occlusion problems was published in \citet{Lee2014}. The first survey on the application of deep learning models in MOT is presented in \citet{Gioele2019}. All these surveys cover the description of tracking methods of generic objects, such as humans or vehicles. It was in \citet{Manafifard2017} where the authors summarized the state-of-the-art player tracking methods focusing on soccer videos. Although, these surveys show the following shortcomings. Most of these papers are not dedicated to team sports and survey all kinds of object tracking algorithms. On the other side, the sport dedicated survey like \citet{Manafifard2017}, is too technical, suitable only for computer vision analysts, and dedicated to tracking. This survey contributes to the state-of-the-art player and ball tracking methods as follows. First, the methods in detection and tracking tasks are classified separately. Second, this paper is not only listing the methods but also gives an insight about the computer vision techniques to the quantitative analysts in sports, who need the extracted trajectories for their quantitative models. Third, the application of deep learning in team sports is surveyed for the first time in the literature. Fourth, we provide a cost analysis of the methods according to their computational and infrastructure requirements. This paper is organized as follows. In Section~\ref{camera} we explain our paper collection process and the camera setup requirements of the published works. We list the methods for the player and ball detection in Section~\ref{sec:det}, and the player and ball tracking in Section~\ref{sec:tra}. We evaluate the categorized techniques in terms of their applied theoretical methods and analyze their cost in Section~\ref{sec:eval}, and finally, we conclude the work in Section~\ref{sec:ind}. \section{Eligibility and data collection} \label{camera} This survey is conducted to help quantitative sports analysts choose the best method to create their own tracking data from sports videos. For this task, the eligible papers are collected from Science Direct, Google Scholar, Scopus databases, and ACM, IEEE, Springer digital libraries using the following keywords for filtering papers and minimizing bias: ``Sports analytics'', ``soccer'', ``player tracking'', ``ball tracking'', ``player detection'', ``ball detection'', ``deep learning for tracking'', ``fixed camera'', ``moving camera'', ``broadcast sports video''. In the first round of collection, 125 papers have been identified and we carefully inspected their contributions in terms of 1) detection or tracking, 2) camera setup, and 3) deep learning-based or traditional methodologies. In order to make the best structure of this survey, we excluded the papers in which tracking was not the main focus. An example is a method called DeepQB in American football proposed by \citet{Bryan2019}. This paper proposes a deep learning approach applied to player tracking data to evaluate quarterback decisions, which is clearly not a direct contribution in player tracking methods. As a result of filtering those papers and focusing on player or ball detection and tracking, 50 papers were eligible for this survey. Furthermore, we also classified eligible papers according to their camera setup as follows. One of the most important criteria for the evaluation of the methods in this work is the required camera setup. Depending on the camera setup, the frame extraction methods are different. Several studies in sports video analytics are limited to a single fixed camera. In these methods preprocessing steps are simpler and faster, as they do not require time and location synchronization. However, as they need to cover the whole playfield, the frames are mostly blurry and difficult to use for detection \citet{Needham2001, Gonzalo2012, SabirinHiroshi2015, Adria2019}. An alternative setting to improve resolution and accuracy is to use multiple fixed cameras. In these videos occlusion problems can be handled easily, as the occluded player or ball in one frame can be recognized with the frame captured by another camera from other angles \citet{Jinchang2008, Jinchan2009, Lan2008, Yazdi2018}. Another option is to use multiple moving cameras, which makes the video processing more complex, but it provides more flexibility in the analysis. These types of video require significant synchronization effort, but finally, they produce longer trajectories, as the cameras try to follow ball controllers \citet{Ming2004,Neus2010,Dipen2014,hosseinAlavi2017}. In this paper, we classify each of the citetd papers according to their required video inputs in terms of the cameras being fixed or moving, and of their cardinality in the arena. \section{Player and ball detection} \label{sec:det} Tracking data, i.e., the exact location of the players and the ball on the field at each moment of the match, is the most important data for a quantitative model developer. Player and ball detection methods are computer vision techniques that allow the analyst to identify and locate players and the ball in a frame of a sports video. Detection methods provide the input to tracking, which would be a simple task if all players and the ball were totally visible in each frame and there were no occlusion. However, in real-world videos, most frames are blurry and continuous tracking fails due to e.g., occlusion, poor light, or posture changes. Therefore, the detection task should be combined with an appropriate tracking method to accurately track the players and the ball (See Figure~\ref{bb}). In this section, we focus on detection methods that aim to find the bounding box of the players and the ball, and to localize the different detection features inside each bounding box. Bounding boxes are imaginary boxes around players and the ball (see Figure~\ref{bb}) that are used to separate each player and ball from other objects in a video frame. We classify detection methods into the categories of traditional and deep learning-based methods. As Figure~\ref{wf} shows, while in the traditional methods the features of the input objects need to be described and extracted by the analyzer and depend on the detection algorithms, a deep learning method performs this process automatically through the layers of a neural network. Therefore, data quality, computational power, domain expertise, training time, and required accuracy specify the selection of the suitable choice of method to apply. We briefly describe each group of methods separately, and give a summary of published research papers, along with their important attributes, in Table~\ref{table:detection}. \begin{figure}[h] \center{\includegraphics[width=.7\textwidth]{ bbbb2.png}} \captionsetup{justification=centering} \caption{\label{bb}Player detection (top) and tracking (bottom) results from \citet{Ming2004}} \end{figure} \begin{figure}[h]% \centering \subfloat[\centering Traditional]{{\includegraphics[width=8.5cm]{ traditional1.png} }}% \qquad \subfloat[\centering Deep Learning]{{\includegraphics[width=7cm]{ dl1.png} }}% \captionsetup{justification=centering} \caption{Player and ball detection workflow}% \label{wf}% \end{figure} \begin{table}[] \caption{Review of playfield and player detection methods} \centering \begin{adjustbox}{width=1.\textwidth,center=\textwidth} \small \label{table:detection} \begin{tabular}{|>{\centering\arraybackslash}m{1.5cm}|>{\centering\arraybackslash}m{2cm}|>{\centering\arraybackslash}m{4cm}|>{\centering\arraybackslash}m{4cm}|>{\centering\arraybackslash}m{4cm}|} \hline \textbf{Reference} & \textbf{Playfield detection} & \textbf{Player detection} & Team sport \& camera type & \textbf{Evaluation} \\ \hline \citet{Slawomir2010} & Hough transform for court line detection & HOG descriptor & Football video broadcast & performs well in SD and HD test sequences, different light condition, and various positions of the cameras ; 78\% precision \\ \hline \citet{Evan2015} & Hough transform & Pedestrian detection with HOG \& color-based detection & Basketball video broadcast & miss rate: 70\% for pedestrian detection \\ \hline \citet{Ming2009} & Peak value of RGB & Adaptive GMM & Football video with single moving camera & Powerful segmentation result, but only in the absence of shadows \\ \hline \citet{Mazzeo2008} & Background subtraction & Moving object segmentation by calculating energy information for each point & Football video with single stationary camera & Copes with light changes by proposing pixel energy evaluation \\ \hline \citet{Direkoglu2018} & Binary edge detection of court line with Canny edge detector & Using shape information of an object by solving heat diffusion equation & Hockey video with single stationary camera & Highly accurate method between 75\% to 98\%, but computationally less efficient in time required for detection \\ \hline \citet{Naushad2012,Upendra2015} & RGB color extraction if G>R>B & Sobel gradient algorithm & Football video broadcast & Accurately detects the ball when it is attached to the lines; but in crowded places, it fails to detect the player \\ \hline \citet{Branko2015} & - & Face recognition with adaboost & Basketball video with single moving camera &Detection accuracy: 70\%\\ \hline \citet{Guangyu2006} & GMM & SVM for player classification & Soccer, hockey, American football video broadcast & Detection accuracy: 91\% \\ \hline \citet{Chengjun2018} & Background subtraction & One-class SVM & Football video broadcast & Proposes automatic labeling of training dataset that significantly reduces cost and training time \\ \hline \citet{Sebastian2015} & - & CNN for number recognition & Football video broadcast & Number level accuracy: 83\% \\ \hline \citet{Gen2018} & - & CNN for classification \& Spatial Transformer Network for localization of jersey numbers & Live football video with single moving camera & Number level accuracy: 87\% \& digit level accuracy: 92\% \\ \hline \citet{Hengyue2019} & Region Proposal Network & R-CNN for digit localization and classification & Footbal video with single pan-tilt zooming camera & Number level accuracy: 92\% \& digit level accuracy: 94\% \\ \hline \end{tabular} \end{adjustbox} \end{table} \subsection{Traditional methods for detection} In the traditional methods of detection, the features of players, ball, and playfield must be precisely described and extracted by the analyzer. In this section, we classify the methods according to their description of the features, and their extraction types. \subsubsection{Histogram of Oriented Gradients} Histogram of Oriented Gradients \uppercase{(hog)} is a feature descriptor and is essentially used to detect multiple objects in an image by building histograms of pixel gradients through different parts of the image. HOG considers these oriented gradients as features. An example of a calculating histogram of gradients is illustrated in Figure~\ref{fighog}. As the first step, the frame is divided into $8\times8$ cells. For each cell, the gradient magnitude (arrows' length) and gradient direction (arrows' direction) will be identified. Consequently, the histogram containing 9 bins corresponding to angles 0, 20, 40, \dots, 160 is calculated. This feature vector can be used to classify objects into different classes, e.g., player, background, and ball. This method is used by \citet{Slawomir2010} and \citet{Evan2015}. \begin{figure}[] \center{\includegraphics[width=8.5cm]{hog2.png}} \captionsetup{justification=centering} \caption{\label{fighog}Calculating Histogram of Gradients} \end{figure} In these methods, the court lines can be detected with Hough transform, another feature extraction technique that searches for the presence of straight lines in an image. This algorithm fits a set of line segments to a set of image pixels. \subsubsection{Background modelling} Background modeling is another method for detecting players and the ball, and is a complex task as the background in sports videos frequently changes due to camera movement, shadows of players, etc. Most of the methods in the background modeling domain consider image pixel values as the features of the input objects. In the domains of player and ball detection, the following two methods are proposed by researchers for background modeling: Gaussian Mixture Model (GMM) and Pixel energy evaluation.. \textbf{Gaussian Mixture Model (GMM): } GMM is proposed by \citet{Ming2009} where playfield detection is performed first by taking the peak values of RGB histograms through the frames. This is because they assume the playfield is the largest area in the frames. Then each of these extracted background pixels is modelled by $k$ Gaussian distributions; different Gaussians are for different colors. Thus, the probability of a pixel having value $X_t$ can be calculated as: \begin{equation} \label{eq1} P(X_t)= \sum_{i=1}^{k} \omega_i \eta (X_t) \end{equation} where $\omega_i$ is the weight for the $i^{th}$ component (all summing to 1), and $\eta(X_t)$ is a normal distribution density function. Based on these probabilities and by setting arbitrary thresholds on the value of the pixels, the background pixels can be subtracted and the players or the ball will be detected. This algorithm cannot recognize players in shadows.\\ \textbf{Pixel energy evaluation}: Another background model is proposed by \citet{Mazzeo2008}. In this method, the energy information of each point is analyzed in a small window: first, the information, i.e, mean and standard deviation, of the pixels at each frame is calculated. Then, by subtracting the information of the first image of the window and each subsequent image, the energy information of each point can be identified. Consequently, the slower energy points (static ones) represent the background, and higher energy points (moving ones) represent the players or the ball. \subsubsection{Edge detection} Edge detection is a method for detecting the boundaries of objects within frames as the features. This method works by detecting discontinuities in brightness. The researchers who choose this method for players and ball detection, mostly utilize the following 2 operators: Canny edge detector, and Soble filtering. Figure~\ref{edge} demonstrates the edge detection methods on a sample frame of a player. \textbf{Canny edge detection:} is a popular method in \texttt{OpenCV} for binary edge detection. (Figure~\ref{edge}(b)). \citet{Direkoglu2018} proposed using the Canny edge detection method for extracting image data and features. However, there might be missing or disconnected edges, and it does not provide shape information of the players and the ball. Thus, given a set of binary edges, they solve a particular heat equation to generate a shape information image (Figure~\ref{edge}(c, d)). In mathematics, the heat equation is a partial differential equation that demonstrates the evolution of a quantity like heat (here heat is considered as binary edges) over time. The solution of this equation is filling the inside object shape. This information image removes the appearance variation of the object, e.g., color or texture, while preserving the information of the shape. The result is the unique shape information for each player, which can be used for identification. This method works only for videos recorded with fixed cameras. \textbf{Sobel filtering:} In the method by \citet{Naushad2012} and \citet{Upendra2015}, the Sobel gradient algorithm is used to detect horizontal and vertical edges (Figure~\ref{edge}(e, f)). The gradient is the vector with the components of (x,y) and the direction is calculated as $\tan^{-1} ({\Delta}y / {\Delta}x)$. Due to the similar color of the ball and the court lines, if the Sobel gradient algorithm is applied for background elimination instead of color segmentation, overlapping of the ball and court lines will not be a problem. However, general overlapping problems, e.g., player occlusion, cannot be handled with this method. \begin{figure}[ht]% \centering \subfloat[]{{\includegraphics[height=2cm]{player} }}% \qquad \subfloat[]{{\includegraphics[height=2cm]{binaryedge} }}% \qquad \subfloat[]{{\includegraphics[height=2cm]{shapeinfo} }}% \qquad \subfloat[]{{\includegraphics[height=2cm]{coloredshapeinfoimage} }}% \qquad \subfloat[]{{\includegraphics[height=2cm]{horizontalsobel} }}% \qquad \subfloat[]{{\includegraphics[height=2cm]{verticalsobel} }}% \caption{Edge detection methods: (a) original frame, (b) binary edges with Canny method, (c) shape information image \citet{Direkoglu2018}, (d) colored shape information image \citet{Direkoglu2018}, (e) horizontal Sobel operator, (f) vertical Sobel operator}% \label{edge}% \end{figure} \subsubsection {Supervised learning} In many proposed methods a robust classifier is trained to distinguish positive samples, i.e., players and/or ball, and negative samples, i.e., other objects or parts of the playfield. Any classification method, such as Support Vector Machine or Adaboost algorithms, can be trained for accurate detection of the players. Some examples of positive and negative sample frames are given in Figure~\ref{posneg}. \textbf{Support Vector Machine:} Several related works state that the advantages of \uppercase{svm} compared to other classifiers include better prediction, unique optimal solution, fewer parameters, and lower complexity. In the method of \citet{Guangyu2006}, the playfield is subtracted with a GMM. The results of background subtraction are thousands of objects, which SVM can help to classify into player and not player objects. However, in this method, the training dataset is manually labelled, which is time-consuming. In order to solve this problem, \citet{Chengjun2018} proposed fuzzy decision making for automatic labelling of the training dataset. \textbf{Adaboost algorithm:} Adaboost, short for Adaptive Boosting is used to make a strong classifier by a linear combination of many weak classifiers to improve detection accuracy. The main idea is to set the weights of the weak classifiers and to train on the data sample in each iteration until it can accurately classify the unknown objects. \citet{Branko2015} used this algorithm for basketball players' face and body parts recognition. Although, they concluded that Adaboost is not accurate enough for object detection in sports events. Furthermore, \citet{Antoine2007} showed that deep learning methods outperform the Adaboost algorithm for player detection. \begin{figure}[h] \center{\includegraphics[scale=0.5]{posneg}} \captionsetup{justification=centering} \caption{\label{posneg} Positive (bottom) and negative (top) samples for training classifier} \end{figure} \subsection{{Deep learning methods for detection}} In the task of player detection, researchers usually use deep learning to recognize and localize jersey numbers. Most of the works in this area use a Convolutional Neural Network (CNN) which is a deep learning model. The general architecture of CNN for digit recognition is illustrated in Figure~\ref{nn}. As the first step, players' bounding boxes should be detected. Then digits inside each bounding box should be accurately localized. These localized digits will be the input of CNN. Several convolution layers in CNN will assign importance to various features of the digits. Consequently, the neurons in the last layer will classify the digits from 0 to 9 classes. In this area, different works propose the following methods for improving the performance of detection: 1) how to localize digits inside each frame, 2) how to recognize multiple digits, 3) how to automatically label the training dataset, i.e, which benchmark dataset to use. \begin{figure}[h] \center{\includegraphics[ width=10cm]{ jersey.png}} \captionsetup{justification=centering} \caption{\label{nn} Neural network architecture for digit localization and detection} \end{figure} The first CNN-based approach for automatically recognizing jersey numbers from soccer videos was proposed by \citet{Sebastian2015}. However, this method cannot recognize numbers in case of perspective distortion of the camera. To solve this problem, \citet{Gen2018} used a Spatial Transformer Network (STN) to localize jersey numbers more precisely. STN helps to crop and normalize the appropriate region of the numbers and improves the performance of classification. Another digit localization technique is Region Proposal Network (RPN), which is a convolutional network that has a classifier and a regressor, and is trained end-to-end to generate high-quality region proposals for digits. RPN is used by \citet{Hengyue2019} for classification and bounding-box regression over the background, person, and digits. While these methods can be more accurate than some traditional methods for player detection and they eliminate the necessities of manual feature description and extraction, they are also more expensive due to more computation and training time. Most of these methods require special versions of GPUs to be applied. Moreover, training and testing CNNs might be more time-consuming than running traditional methods. \section{Player tracking} \label{sec:tra} Detection methods calculate the location of each player and the ball at each frame of the videos. There are always some frames for which the detection fails due to the blurriness of the frame, poor light conditions, occlusions, etc. In these cases, the detection methods cannot provide the location of the same player and ball in consecutive frames to construct continuous trajectories. Therefore, a player tracking method is needed to associate the partial trajectories, and to provide long tracking information of each of the players and the ball (see Figure~\ref{bb}). Player tracking involves the design of a tracker that can robustly match each observation to the trajectory of a specific player. This tracker can be designed for a single object or for multiple objects. The biggest challenge in tracking is the overlapping of players, namely the occlusion. Several studies suggested solutions for making a unique, continuous trajectory for each player by solving the occlusion problem. Those methods mostly follow filtering and data association. However, each method follows a different description for interest points (features) for filtering, and data association depends on the custom definition of probabilistic distributions. In this section, we survey the tracking methods classified by whether they are based on traditional or deep learning models. \subsection{Traditional methods for tracking} Same as the previously mentioned traditional detection models, the traditional tracking algorithms also require manual extraction and description of the player and ball features. The main categories of tracking methods in the literature of sports analytics are the following: point tracking, contour tracking, silhouette tracking, graph-based tracking, and data association methods. \subsubsection{Point tracking} The methods using point tracking mostly consider some points in the shape of the player and ball as the features, and choose the right algorithm (e.g., Point Distribution Model, Kalman filter, Particle filter) to associate those points through consecutive frames (see Figure~\ref{Point}). \textbf{Point Distribution Model:} In these methods, the idea is to describe the statistical models of the shape of players and ball, called Point Distribution Model \uppercase{(pdm)}. This method is used by several studies such as \citet{Mathes2006, Hayet2005, Li2012}. The shape is interpreted as the geometric information of the player, which is the residue once location and scaling are removed. As the first step, they extract the vector of features using 2 methods: Harris detector, or Scale Invariant Feature Transform (SIFT). Harris detector is the corner detection operator to extract corners and infer features of an image. Example results of Harris detector are shown as some points in Figure~\ref{Point}. SIFT is a feature detector algorithm to describe local features in images. These extracted features are detectable even under modifications in scale, noise, and illumination. \begin{figure}[h] \center{\includegraphics[scale=0.5]{point}} \captionsetup{justification=centering} \caption{\label{Point} Point tracking} \end{figure} Then, by learning the spatial relationships between these points, they construct the PDM to concatenate all feature vectors, i.e., interest points, of players (Figure~\ref{pdm}). We provide a review and comparison of point tracking methods in Table~\ref{pointtracking}. \begin{figure}[ht]% \centering \subfloat[Players features from frames 0, 28, 48]{{\includegraphics[height=3cm]{points} }}% \qquad \subfloat[PDM corresponding to the normalized players' shape]{{\includegraphics[height=3cm]{pdm} }}% \captionsetup{justification=centering} \caption{Describing shape by PDM from \citet{Mathes2006}}% \label{pdm}% \end{figure} \begin{table}[] \caption{Review of tracking methods with PDM} \centering \begin{adjustbox}{width=1.\textwidth,center=\textwidth} \small \label{pointtracking} \begin{tabular}{|>{\centering\arraybackslash}m{1.5cm}|>{\centering\arraybackslash}m{2cm}|>{\centering\arraybackslash}m{1.5cm}|>{\centering\arraybackslash}m{3cm}|>{\centering\arraybackslash}m{4.5cm}|} \hline \textbf{ Reference} & \textbf{Tracking Method} & \textbf{Point Extraction Method} & \textbf{Input video stream} & \textbf{Evaluation} \\ \hline \citet{Li2012} &Features tracking in consecutive frames & SIFT features & Football video with multiple stationary cameras & Average reliability of tracking, i.e., the number of correctly tracked players divided by the number of players in each frame is 99.7\% ; Occlusion can be handled by comparing different viewpoints of cameras\\ \hline \citet{Hayet2005} & Matching points of the PDM & Harris detector & Football video broadcast & Copes with the problem of rotating \& zooming cameras by continuous image-to-model homography estimation; Occlusion can be handled by interpolation in the PDM\\ \hline \citet{Mathes2006} & Points matching by maximum-gain using Hungarian algorithm & Harris detector & Football video broadcast & Can only track the non-rigid but textured objects in crowded scenes; Occlusion can be handled by tracking sparse sets of local features\\ \hline \end{tabular} \end{adjustbox} \end{table} \textbf{Particle filter:} All particle filter tracking systems aim to estimate the state of a system ($x_t$), given a set of noisy observations ($z_{1:t}$). Thus the goal is to estimate $P(x_t | z_{1:t})$. If we consider this problem as a Markov process, the solution can be found if the system is assumed to be linear and each conditional probability distribution is being modeled as a Gaussian. However, these assumptions cannot be made, as they decrease the accuracy of prediction. Particle filtering can help to eliminate the necessity of extra assumptions. This method approximates the probability distribution with a weighted set of N samples: \begin{equation} \label{eq5} P(x) \sim \sum_{i=1}^N \omega^i (x-x_i), \end{equation} where $\omega^i$ is the weight of the sample $x_i$. Now the questions are how to assign the weights, and how to sample the particles. Several studies suggested different methods for these questions. In the methods by \citet{Kataoka2011, Manafifardd2017}, particles are players' positions. Linear uniform motion is used to model the movement of particles, and the Bhattacharyya coefficient is applied for assigning weights, i.e., likelihood to each particle. In statistics, Bhattacharyya coefficient (BC) is a measure for the amount of overlap between two statistical samples $(p,q)$ over the same domain $x$, and is calculated as $BC(p,q)= \sum_x \sqrt{p(x)q(x)}$. In the works by \citet{Panagiotis2016, Yang2017}, each particle is estimated by the updated location of the player, knowing the last location plus a noise: $ x_k = x_{k-1} + v_k $, which noise $v_k$ is assumed to be i.i.d. following a zero-mean Gaussian distribution. Moreover, in \citet{Yang2017}, particles are created based on color and edge features of players, and the weight of each particle is computed by contrast to the similarity between the particles and targets. \citet{Anthony2006} introduced Sample Importance Resampling to show that the shape of a player can be represented by a set of particles, e.g., edge, center of mass, and color pixels. Also, those points can represent a probabilistic distribution of the state of the player (Figure~\ref{particle}). Another method is proposed by \citet{Pedro2015}, in which players are detected by adaptive background subtraction method based on a mixture of Gaussians, and each detected player is automatically tracked by a separate particle filter and weighted average of particles. We show the above-mentioned methods for particle filtering in Table~\ref{particlefiltering}. \begin{figure}[ht]% \centering \subfloat[Set of 500 particles for $P(x_t,y_t|y_t)$ of a player]{{\includegraphics[width=4cm]{playerparticle} }}% \qquad \subfloat[Posterior probability distribution function given the current state of a particle $P(y_t|x_t = x)$. Darker points represent higher probability]{{\includegraphics[width=5cm]{particle} }}% \captionsetup{justification=centering} \caption{Particle filtering from \citet{Anthony2006}}% \label{particle}% \end{figure} \begin{table}[ht!] \caption{Review of particle filtering methods} \centering \begin{adjustbox}{width=1.\textwidth,center=\textwidth} \small \label{particlefiltering} \begin{tabular}{|>{\centering\arraybackslash}m{2cm}|>{\centering\arraybackslash}m{2cm}|>{\centering\arraybackslash}m{2cm}|>{\centering\arraybackslash}m{3cm}|>{\centering\arraybackslash}m{4cm}|} \hline \textbf{ Reference} & \textbf{Particles type} & \textbf{Weight assignment method} & \textbf{Input video stream} & \textbf{Evaluation}\\ \hline \citet{Kataoka2011} & Players' position \& center of gravity & Bhattacharyya coefficient & Football video with single swing motion camera & Tracking rate for players: 83\% \& ball: 98\%; Occlusion handling by combining particle filter and real AdaBoost\\ \hline \citet{Panagiotis2016} & Players' position & Weighted average of particles & Football video with single stationary camera & Not real-time; Occlusion cannot be handled\\ \hline \citet{Yang2017} & Color \& edge features & Bhattacharyya coefficient & Football video broadcast & Occlusion handling by comparing color \& edge features \\ \hline \citet{Anthony2006} & Edge points, center of mass, color pixels & Sample Importance Resampling & Football video from single moving camera & Overcomes the problem of non-linear and non-Gaussian nature of the noise model\\ \hline \citet{Manafifardd2017} & Ellipse surrounded by the player bounding box & Bhattacharya coefficient & Football video broadcast & 92\% of accuracy; occlusion can be handled by combination of particle swarm optimization \& multiple hypothesis tracking\\ \hline \end{tabular} \end{adjustbox} \end{table} \textbf{Kalman filter:} (KF) method is mostly used in systems with the state-space format. In the state-space models, we have a set of states evolving over time. However, the observations of these states are noisy and we are sometimes unable to directly observe the states. Thus, state-space models help to infer information of the states, given the observations, as new information arrives. In player and ball tracking, the observations of two inputs, i.e., time and noisy position measurements, continuously update the tracker. The role of KF is to estimate the $x_t$, given the initial estimate of $x_0$, and time-series of measurements (observations), $z_1, z_2, ..., z_t$. The KF process defines the evolution of state from time $t-1$ to $t$ as: \begin{equation} \label{eq6} x_t = Fx_{t-1} + Bu_{t-1} + \omega_{t-1}, \end{equation} where $F$ is the transition matrix for state vector $x_{t-1}$, $B$ is the control-input matrix for control vector $u_{t-1}$, and $\omega_{t-1}$ is the noise following a zero-mean Gaussian distribution. A typical KF process is shown in Figure~\ref{kf}. As we can see, the Kalman filter and particle filter are both recursively updating an estimate of the state, given a set of noisy observations. Kalman filter performs this task by linear projections \eqref{eq6}, while the Particle filter does so by estimating the probability distribution \eqref{eq5}. The following studies use Kalman filter for player and ball tracking: \citet{Aziz2018, Jong2009, Liu2011}. We summarize the KF methods in Table~\ref{KF}. \begin{figure}[] \center{\includegraphics[scale=0.5]{ kf1.png}} \captionsetup{justification=centering} \caption{\label{kf} Typical Kalman filter process} \end{figure} \begin{table}[ht!] \caption{Summary of player and ball tracking methods with Kalman filter} \centering \begin{adjustbox}{width=1.\textwidth,center=\textwidth} \small \label{KF} \begin{tabular}{|>{\centering\arraybackslash}m{1.5cm}|>{\centering\arraybackslash}m{1.5cm}|>{\centering\arraybackslash}m{2cm}|>{\centering\arraybackslash}m{3cm}|>{\centering\arraybackslash}m{4cm}|} \hline \textbf{ Reference} & \textbf{KF type} & \textbf{KF inputs} & \textbf{Input video stream} & \textbf{Evaluation}\\ \hline \citet{Aziz2018} & KF with motion information & Players motion information (moving or static) & Volleyball video broadcast & Non-linear \& non-Gaussian noise are ignored, which decreases the accuracy of tracking \\ \hline \citet{Jong2009} & Dynamic KF & Position \& velocity of state vector & Football video broadcast & Copes with the problem of player-ball occlusion in KF\\ \hline \citet{Liu2011} & Kinematic model of KF & Position state from Mean-Shift algorithm & Basketball video broadcast & KF is used to confirm the target location to empower Mean-shift algorithm for ball tracking \\ \hline \end{tabular} \end{adjustbox} \end{table} \subsubsection{Contour tracking} Contour tracking for dynamic sports videos provides basic data, such as orientation and position of the players, and is used when we have deforming objects, i.e., players and ball, over consecutive frames. Figure~\ref{contour} shows some examples of such contours. \begin{figure}[!htb] \center{\includegraphics[height=3cm]{kernel}} \captionsetup{justification=centering} \caption{\label{contour} Contour tracking} \end{figure} Many methods have been proposed to track these contours. In an easy approach, the centroid of these contours plus the bounding box of players will be obtained, and the player can be traced \citet{Bikramjot2013, Michael2007}. Researchers in this area tried to propose several methods for assigning a suitable contour to the players and the ball. \citet{Pratik2018} find player's contours as curves, joining all the continuous points (along the boundary), having the same color or intensity. So they could track these contours and decide whether the player is in an offside position or not. Another method by \citet{Sebastien2000, Sebastien2002, Maochun2017} suggests snake or active contour tracking, which does not include any position prediction. In such methods, the algorithm fits open or close splines (i.e., a special function defined piecewise by polynomial) to lines or edges of the players. An active contour can be represented as a curve: $[x_t , y_t], t \in [0,1]$ segmenting players from the rest of the image, which can be closed or not. Then this curve should be iteratively deformed and converged to target contour (Figure~\ref{snake1}) to minimize an energy function and to fit the curve to the lines or edges of the players. The energy function is presented as physical properties of the contours, i.e., the shape of the contour, plus the gradient and intensity of the pixels in the contour. A review of contour representation of the above-mentioned tracking methods is in Table~\ref{contourtracking}. \begin{figure}[!htb] \center{\includegraphics[height=4cm]{ active.png}} \captionsetup{justification=centering} \caption{\label{snake1} Active contour model for fitting curves to the players' edges and lines} \end{figure} \begin{table}[ht!] \caption{Summary of contour tracking methods} \centering \begin{adjustbox}{width=1.\textwidth,center=\textwidth} \small \label{contourtracking} \begin{tabular}{|>{\centering\arraybackslash}m{1.5cm}|>{\centering\arraybackslash}m{2cm}|>{\centering\arraybackslash}m{2cm}|>{\centering\arraybackslash}m{4cm}|>{\centering\arraybackslash}m{4cm}|} \hline \textbf{Reference} & \textbf{Contour representation} & \textbf{Tracking method} & \textbf{Input video stram} & \textbf{Evaluation}\\ \hline \citet{Pratik2018} & A curve that joins all continuous pixels & Contour filtering for players \& ball with Gaussian blurring & Football video with multiple stationary cameras & Fast tracking; Occlusion can be handled by placing cameras on both sides of the field\\ \hline \citet{Sebastien2000,Sebastien2002} & Snake initialization & Snake deformation & Football video with single moving camera & Robustly solves occlusion\\ \hline \citet{Bikramjot2013} & Edge pixels form contour boundaries & Contour centroid tracking & Football video with 3 stationary and 1 moving cameras & Handles occlusion by comparing contour area of player \& mean of that for all players\\ \hline \citet{Michael2007} & K-means clustering of pixels on marked regions & Multiple Hypothesis Tracker & Football video broadcast & Tracks players up to 20 minutes without getting lost; Detection rate is over 90\%\\ \hline \citet{Maochun2017} & Motion curve of shooting arm & Iterative convergence of dynamic contour with Lagrange equation & Basketball video broadcast & Occlusion can be handled by minimizing the potential energy of the system image\\ \hline \end{tabular} \end{adjustbox} \end{table} \subsubsection{Silhouette tracking} When the information provided by contour and simple geometric shapes are not enough for the tracking algorithm, extracting the silhouette of the players and of the ball can provide extra information on the appearance of the object in consecutive frames. Unlike contours, the silhouette of a player is not a curved shape. Thus, it does not require deformation and convergence to the target shape of players and the ball. Instead, this method proposes some aspect ratios to describe the invariant shape. An example of this shape extraction for a specific player is illustrated in Figure~\ref{silhouette}. In such cases, shape analysis can help the tracking process as follows. \begin{figure}[!htb] \center{\includegraphics[width=7cm]{silhouette}} \captionsetup{justification=centering} \caption{\label{silhouette} Silhouette tracking} \end{figure} \textbf{Shape matching:} In the literature, the shape of an object is defined by its local features not determined or altered by additive contextual effects, e.g., location, scale and rotation. This method is mostly used for ball tracking. The problem in this area is that the shape of the ball varies significantly in each frame, and does not look like a circle at all (Figure~\ref{shape}). Different studies suggest some aspect ratios, i.e, shape descriptors, to get the near-circular ball images. \citet{Bodhisattwa2013} suggest using the degree of compaction $C_d$ which is the ratio of the square of the perimeter of the given shape to the area of the given shape: $C_d = P^2 / 4\pi A $. Therefore, if $C_d > 50\%$, the shape can be filtered as a ball. Another shape descriptor is eccentricity, proposed by \citet{Wayne2006}, and it is defined as the ratio of the longest diameter to the shortest diameter of a shape. The form factor indicates how circular an object is, and if the result is between [0.2,0.65] they will consider it as a ball. Besides these shape descriptors, \citet{Huang2007} proposed using skeletons to separate a shape's topological properties from its geometries. To extract the skeleton for every foreground blob, they use the Euclidean distance transform. Table~\ref{shapeanalysis} shows a review of shape analysis in player and ball tracking methods. \begin{figure}[!htb] \center{\includegraphics[height=2cm]{ballshape2}} \captionsetup{justification=centering} \caption{\label{shape} Shape of the moving ball from \citet{Bodhisattwa2012}} \end{figure} \begin{table}[ht!] \caption{Summary of shape analysis in player and ball tracking methods} \centering \begin{adjustbox}{width=1.\textwidth,center=\textwidth} \small \label{shapeanalysis} \begin{tabular}{|>{\centering\arraybackslash}m{1.5cm}|>{\centering\arraybackslash}m{3cm}|>{\centering\arraybackslash}m{3cm}|>{\centering\arraybackslash}m{4cm}|} \hline \textbf{ Reference} & \textbf{Tracking method} & \textbf{Input video stream} & \textbf{Evaluation}\\ \hline \citet{Bodhisattwa2013} & Shape, size and compaction filtering & Basketball video broadcast & 93\% of accuracy of ball detection and tracking; Occlusion can be handled by trajectory interpolation with regression analysis\\ \hline \citet{Wayne2006} & Moore-Neighbour tracing algorithm & Football video with single stationary camera & Shape analysis in this method is failing in case of the shadow of players or the ball\\ \hline \citet{Huang2007} & Euclidean distance transform & Football video broadcast & Occlusion cannot be handled \\ \hline \end{tabular} \end{adjustbox} \end{table} \subsubsection{Graph-based tracking} Some works explore graph-based multiple-hypothesis to perform player tracking. In these cases, a graph is constructed that shows all the possible trajectories of players, and it models their positions along with their transition between frames. The correct trajectory is found with the help of, e.g., similarity measure, linear programming, multi-commodity network flow, or the problem is modeled as a minimum edge cover problem. An example of graph tracking in consecutive frames is shown in Figure~\ref{graph}. The method shown by \citet{Figueroa2004} builds the graph in such a way that nodes represent blobs and edges represent the distance between these blobs. Then tracking of each player is performed by searching the shortest path in the graph. However, occlusion is difficult to be handled with this method. Authors of \citet{Pallavi2008} used dynamic programming to find the optimal trajectory of each player in the graph. The proposed method by \citet{Junliang20011} builds an undirected graph to model the occlusion relationships between different players. In \citet{Chen2017}, the method constructs a layered graph for detected players, which includes all probable trajectories. Each layer corresponds to a frame and each node represents a player. Two nodes of adjacent layers are linked by an edge if their distance is less than a pre-defined threshold. Finally, the authors used the Viterbi algorithm in dynamic programming to extract the shortest path of the graph. Ball tracking with graphs was proposed in \citet{Andrii2015}, where they build a ball graph to formulate the Mixed Integer Programming model, and each node is associated with a state, i.e., location of the ball at a time instance. Table~\ref{graphbased} shows a review of node and edge representation, along with tracking methods defined on the graph. \begin{figure}[!htb] \center{\includegraphics[scale=0.5]{graphs1}} \captionsetup{justification=centering} \caption{\label{graph} An example of weighted graph for player tracking in 4 consecutive frames} \end{figure} \begin{table}[ht!] \caption{Summary of graph-based player and ball tracking methods} \centering \begin{adjustbox}{width=1.\textwidth,center=\textwidth} \small \label{graphbased} \begin{tabular}{|>{\centering\arraybackslash}m{1.5cm}|>{\centering\arraybackslash}m{1.5cm}|>{\centering\arraybackslash}m{2cm}|>{\centering\arraybackslash}m{2cm}|>{\centering\arraybackslash}m{3cm}|>{\centering\arraybackslash}m{4cm}|} \hline \textbf{Reference} & \textbf{Node representation} & \textbf{Edge representation} & \textbf{Tracking method} & \textbf{Input video stream} & \textbf{Evaluation}\\ \hline \citet{Figueroa2004} & Blobs & Distance between blobs & Minimal path of graph & Football video with multiple stationary cameras & The algorithm is tested for 3 players of defender, mid-fielder, and forwarder, and shows 88\% of solved occlusions\\ \hline \citet{Pallavi2008} & Probable player candidates & Candidates link between frames & Dynamic programming with acyclic graph & Football video broadcast & 93\% of accuracy in tracking \& 80\% of solved occlusions\\ \hline \citet{Junliang20011} & Player & Relationship ratio between 2 players & Dual-mode two-way Bayesian inference approach & Football and basketball video broadcast & Uses undirected graph to model the occlusion relationships \& reports 119 Mostly Tracked trajectories \& 12 ID switches \\ \hline \citet{Chen2017} & Players position & Degree of closeness between players & Viterbi algorithm to find shortest path & Basketball video broadcast & 88\% of precision in player tracking; Occlusion is handled by layered graph connections\\ \hline \citet{Andrii2015} & Ball's location & State-Time instant connection & Mixed Integer Programming & Football, volleyball, and basketball video with multiple cameras & 97\% of accuracy in player tracking \& 74\% in ball tracking \\ \hline \end{tabular} \end{adjustbox} \end{table} \subsubsection{Data association methods} Simulation-based approaches, including Monte Carlo methods and joint probabilistic data association, are usually used for solving multitarget tracking problems, as these methods perform well for nonlinear and non-Gaussian data models. \textbf{Markov Chain Monte Carlo data association \uppercase{(mcmc)}:} \citet{Septier2011} compared several \uppercase{mcmc} methods, such as 1) sequential importance resampling algorithm, 2) resample-move, 3) \uppercase{mcmc}-based particle method. The difference between these methods stems from the sampling strategy from posterior by using previous samples. Simulations show that the \uppercase{mcmc}-based Particle approach exhibits better tracking performance and thus clearly represents interesting alternatives to Sequential Monte Carlo methods. The authors of \citet{Liu2009} designed a Metropolis-Hastings sampler for \uppercase{mcmc}, which increased the efficiency of the method. \textbf{Joint probabilistic data association \uppercase{(jpda)}:} The JPDA method can be used when the mapping from tracks to observations is not clear, and we do not know which observations are valid and which are just noise. In these cases, \uppercase{jpda} implements a probabilistic assignment. \citet{Robert2009} used \uppercase{jpda} to assign the probability of association between each observation and each track. \subsection{Deep learning-based tracking} \label{sec:dee} Despite the effectiveness of traditional methods, they fail in many real-world scenarios, e.g., occlusion, and processing videos from several viewpoints. On the other hand, deep learning models benefit from the learning ability of neural networks on large and complex datasets, and they eliminate the necessities of features extraction by the human/expert. Therefore, deep learning-based trackers are recently getting much attention in computer vision. These trackers are categorized into online and offline methods: online trackers are trained from scratch during the test and are not taking advantage of already annotated videos for improving performance, while offline trackers train on offline data. Several recent studies have attempted to assess the performance of deep learning methods in sports analytics. The core idea of all methods is to use CNN. However, each study proposes a different structure of the network and training method for increasing the performance. In this section, we summarize the state-of-the-art networks and their application in sports analytics. Table~\ref{table:deep} is a brief review of these methods. \textbf{Visual Geometry Group (VGG): }VGG-M is a CNN architecture, designed by the Visual Geometry Group (VGG) at the University of Oxford. This network is used by several studies such as \citet{Kamble2019, Adria2019}. VGG-M is a small type of CNN, and its pre-trained weights are publicly available. This network gets the image as input, and classifies the detected object as player, ball, or background, along with the probability of the classes. The architecture of VGG-M CNN is illustrated in Figure~\ref{vggm}. \begin{figure}[!htb] \center{\includegraphics[width=10cm]{vgmn2}} \captionsetup{justification=centering} \caption{\label{vggm} VGG-M CNN architecture from \citet{Kamble2019}} \end{figure} After the classification of the players and ball, the metric called Intersection Over Union (IOU) is used to track them. IOU is the ratio of intersection of the ground truth bounding box from the previous frame ($BB_A$), and predicted bounding box in the current frame ($BB_B$), and it is calculated as in \eqref{eq7}: \begin{equation} \label{eq7} IOU= \frac{|BB_A \cap BB_B|}{|BB_A \cup BB_B|}, \end{equation} where $\cap$ and $\cup$ are intersection and union in terms of the number of pixels. Thus, if the intersection is non-zero between consecutive frames, the player or ball can be traced. \textbf{Cascade-CNN: } is a novel deep learning architecture consisting of multiple CNNs. This network is trained on labeled image patches and classifies the detected objects into the two classes of player and non-player. Football and basketball player tracking using this method is suggested by \citet{KeyuLu2017}. The illustrated pipeline in Figure~\ref{ccnn} shows the classification process and a dilation strategy for accurate player tracking with the help of IOU metric. \begin{figure}[!htb] \center{\includegraphics[width=8cm]{ccnn2}} \captionsetup{justification=centering} \caption{\label{ccnn} Classification process with Cascade-CNN from \citet{KeyuLu2017}} \end{figure} \textbf{YOLO: } This network is used by \citet{Matija2019} for handball player and ball tracking, and \citet{Young2019} for basketball player movement recognition. YOLO applies a single neural network to the full image. Then the network divides the image into cells and predicts bounding boxes and probabilities for each cell. The weights of the bounding boxes are the predicted probabilities. Then IOU metric can help for tracking purposes and solving the occlusion problem of the players and the ball (Figure ~\ref{yolo}). \begin{figure}[ht]% \centering \subfloat[\centering Player and ball classification with YOLO]{{\includegraphics[ width=10cm]{yolo} }}% \qquad \subfloat[\centering IOU evaluation for tracking]{{\includegraphics[width=2cm]{iou} }}% \captionsetup{justification=centering} \caption{Player and ball tracking with YOLO from \citet{Young2019}}% \label{yolo}% \end{figure} \textbf{SiamCNN: } In this network, there are sister network 1 and sister network 2 with the same network structure, parameters, and weights. The structure looks like VGG-M except for the adjustment of the sizes of each layer. The inputs of SiamCNN are 3-color channels (R,G,B) from frames, and the output is the Euclidean distance between the characteristics/features of the inputs. \citet{Long2019} used this network to extract players' characteristics through trajectories. Then they compare the similarities between search areas and a target template, so players can be tracked. The structure of this network is given in Figure~\ref{siam}. \begin{figure}[!htb] \center{\includegraphics[width=5cm]{siam}} \captionsetup{justification=centering} \caption{\label{siam} SiamCNN network structure for player tracking} \end{figure} \begin{table}[] \caption{Summary of deep learning methods and application in team sports (AUC: Area Under Curve; mAP: mean Average Precision; MAPE: Mean Absolute Percentage Error; MOTA: Multi Object Tracking Accuracy)} \centering \begin{adjustbox}{width=1.\textwidth,center=\textwidth} \small \label{table:deep} \begin{tabular}{|>{\centering\arraybackslash}m{1.5cm}|>{\centering\arraybackslash}m{2cm}|>{\centering\arraybackslash}m{4cm}|>{\centering\arraybackslash}m{3.5cm}|>{\centering\arraybackslash}m{4cm}|} \hline \textbf{Reference} & \textbf{Network structure } & \textbf{Input video stream} & \textbf{Required computational resource(s)} & \textbf{Performance}\\ \hline \citet{KeyuLu2017} & Cascade CNN & Football and basketball video broadcast & Intel i7-6700HQ; NVIDIA GTX1060 & AUC of player detection is 0.97\\\hline \citet{Kamble2019} & VGG-M & Football video with multiple stationary cameras & MATLAB 2018a; Intel i7; NVIDIA GTX1050Ti & 87\% of accuracy in player, ball, and event detection \\\hline \citet{Long2019} & Full-convolution Siamese NN & Football video broadcast & Matlab2014a; Intel i7; NVIDIA GTX 960 M & Mean value of target tracking effect of SiamCNN is 60\% \\\hline \citet{Young2019} & YOLO & Basketball video with single moving camera & Intel i7; NVIDIA GeForce GTX 1080Ti & 74\% of precision in recognizing Jersey numbers; MAPE is at most 34\% \\\hline \citet{Adria2019} & VGG-19 & Basketball video with single moving camera & - & Detection precision: 98\% ; MOTA of tracking: 68\%\\\hline \citet{Matija2019} & YOLO & Handball video with multiple stationary cameras & 12 core E5-2680v3 CPU; GeForce GTX TITAN & mAP in players \& ball detection: 37\%\\\hline \end{tabular} \end{adjustbox} \end{table} \section{Evaluation and model selection} \label{sec:eval} If a clean set of tracking information is not provided to a sports analyzer who is developing a quantitative model, his/her core task is to choose the most suitable method for tracking players and the ball, and construct the required dataset for further analysis. In the detection and tracking domains, model selection, i.e., DL or traditional, heavily depends on the task at hand. The selection will be difficult by merely reviewing the performance metrics of the methods, as the tracking performance relies on the specific task at hand and the quality of the videos. However, there are some concrete criteria in this domain, which can help the analyst to rapidly choose the desired tracking method. Figure~\ref{methods} compares the number of publications in detection and tracking domains categorized by team sports. Note that 74\% of methods are applied on football videos, whereas deep learning methods (i.e., CNN, VGG, Cascade, Siam, Yolo) are covering only 20\% of all publications. In this section, we review the benefits and drawbacks of each method, and compare them in terms of their estimated costs. \begin{figure}[ht]% \centering \subfloat[\centering Detection task]{{\includegraphics[height=5cm]{ detects.png} }}% \qquad \subfloat[\centering Tracking task]{{\includegraphics[height=5cm]{ tracks.png} }}% \captionsetup{justification=centering} \caption{Number of the published papers for each method categorized by their application in team sports}% \label{methods}% \end{figure} \subsection{Deep learning-based vs. traditional methods} In general, traditional methods are domain-specific, thus the analyzer must specifically describe and select the features (e.g., edge, color, points, etc.) of the ball, football player, basketball player, background, etc. in detail. Therefore, the performance of the traditional models depends on the analyzer's expertise and how accurate the features are defined. DL methods, on the other hand, demonstrate superior flexibility and automation in detection and tracking tasks, as they can be trained offline on a huge dataset, and then automatically extract features of any object type. In this case, the necessities of manual feature extraction are eliminated, and consequently, DL requires less expertise from the analyzer. In another point of view, DL models are more like a black box on the detection tasks. On the contrary, traditional methods provide more visibility and interpretability to the analyzer on how the developed algorithm can be performed in different situations such as sports types, lighting conditions, cameras, video quality, etc. So, traditional models can give a better opportunity to improve the tracker accuracy, when the system components are visible. Also in the case of failure, system debugging are more straightforward in traditional models than DL-based ones. In addition to the pros and cons that are listed in this survey for each method, few criteria can help sports analysts to choose their desired method. Table~\ref{criteria} lists these criteria that can help analyzers to choose the suitable detection and tracking methods in the direction of DL-based or traditional ones. \begin{table}[] \caption{Model selection criteria} \centering \begin{adjustbox}{width=.7\textwidth,center=\textwidth} \small \label{criteria} \begin{tabular}{|>{\centering\arraybackslash}m{5cm}|>{\centering\arraybackslash}m{2cm}|>{\centering\arraybackslash}m{2cm}|} \hline \textbf{Criteria} & \textbf{Deep Learning } & \textbf{Traditional} \\ \hline Availability of huge training dataset & \checkmark & \\\hline Accessing to high computational power & \checkmark & \\\hline Lack of storage & & \checkmark \\\hline Looking for cheaper solution & & \checkmark \\\hline Certainty and expertise in the object features & & \checkmark \\\hline Less domain expertise & \checkmark & \\\hline Flexibility in terms of objects and training dataset & \checkmark & \\\hline Flexibility of deployment on different hardware & & \checkmark \\\hline Short training and annotation time & & \checkmark \\\hline \end{tabular} \end{adjustbox} \end{table} \subsection{Cost analysis} \label{cost} The cost of the method is one of the most important characteristics of model selection for researchers and analysts: they are looking for a method with maximum accuracy and reasonable cost. Here we give an insight into the cost of the state-of-the-art methods, both for infrastructure and computation, and classify them into 3 categories: high, medium, low. The classification is based on the following facts. In the computational aspect, deep learning methods which require GPUs are more expensive than traditional methods with only CPUs. From an infrastructure perspective, different methods require different sets of camera settings to record the sports video. Methods that require a set of moving or stationary camera(s) to be set up in the arena are more expensive than the methods that can trace players and the ball on broadcast video. Table~\ref{costs} shows the cost approximation of all methods along with their most significant limitations. \begin{table}[] \caption{Comparing cost of the methods} \begin{adjustbox}{width=1.\textwidth,center=\textwidth} \label{costs} \centering \begin{tabular}{|>{\centering\arraybackslash}m{3cm}|>{\centering\arraybackslash}m{2cm}|>{\centering\arraybackslash}m{2cm}|>{\centering\arraybackslash}m{1.5cm}|>{\centering\arraybackslash}m{1.5cm}|>{\centering\arraybackslash}m{1.5cm}|>{\centering\arraybackslash}m{1.5cm}|>{\centering\arraybackslash}m{1cm}|>{\centering\arraybackslash}m{3cm}|} \hline \textbf{Reference} & \textbf{Detection or tracking method} & \textbf{Cost approximation} & \multicolumn{3}{c|}{\textbf{Infrastructure requirements for sport video}} & \multicolumn{2}{c|}{\textbf{Computational requirements}} & \textbf{Limitation}\\ \hline & & & \textbf{Moving camera(s)} & \textbf{Stationary camera(s)} &\textbf{ Broadcast }& \textbf{GPU} & \textbf{CPU} & \\ \hline \citet{Sebastian2015} & deep learning & middle & & & \checkmark & \checkmark & & poor performance in perspective distortion of camera \\ \hline \citet{Gen2018} & deep learning & high & \checkmark & & & \checkmark & & expensive manual force for labeling \\ \hline \citet{Hengyue2019} & deep learning & high & & \checkmark & & \checkmark & & high network training time \\ \hline \citet{Young2019}& deep learning & high & \checkmark & & & \checkmark & & very low accuracy \\ \hline \citet{Adria2019} & deep learning & high & \checkmark & & & \checkmark & & difficult network tuning \\ \hline \citet{Kamble2019} & deep learning & high & & \checkmark & & \checkmark & & manual network parameters is needed to be assigned \\ \hline \citet{Matija2019} & deep learning & high & & \checkmark & & \checkmark & & -\\ \hline \citet{KeyuLu2017} & deep learning & middle & & & \checkmark & \checkmark & & unrealistic uniform background color is assumed \\ \hline \citet{Long2019}& deep learning & middle & & & \checkmark & \checkmark & & too sensitive on number of convolutional layers\\ \hline \citet{Slawomir2010} & HOG & low & & & \checkmark & & \checkmark & cannot detect occluded players \\ \hline \citet{Evan2015}& HOG & low & & & \checkmark & & \checkmark & -\\ \hline \citet{Ming2009} & GMM & middle & \checkmark & & & & \checkmark & performs only in absence of shadow \\ \hline \citet{Mazzeo2008} & energy evaluation & middle & &\checkmark & & & \checkmark & high computational time \\ \hline \citet{Direkoglu2018}& edge detection & middle & &\checkmark & & & \checkmark & high computational time and only with fixed camera \\ \hline \citet{Naushad2012,Upendra2015} & sobel gradient & low & & & \checkmark & & \checkmark & low performance in crowded places \\ \hline \citet{Branko2015} & adaboost & middle & \checkmark & & & & \checkmark & highly training time \\ \hline \citet{Guangyu2006} & SVM & low & & & \checkmark & & \checkmark & time consuming due to manual labeling \\ \hline \citet{Chengjun2018} & SVM & low & & & \checkmark & & \checkmark & - \\ \hline \citet{Li2012} & point tracking & middle & & \checkmark & & & \checkmark & - \\ \hline \citet{Hayet2005}& point tracking& low & & & \checkmark & & \checkmark & extracted trajectories are too short \\ \hline \citet{Mathes2006} & point tracking& low & & & \checkmark & & \checkmark & cannot track untextured objects \\ \hline \citet{Kataoka2011} & particle filter & middle & \checkmark & & & & \checkmark & - \\ \hline \citet{Panagiotis2016} & particle filter & middle & & \checkmark & & & \checkmark & tracking fails in case of player jumping or falling\\ \hline \citet{Yang2017} & particle filter & low & & & \checkmark & & \checkmark & inadequacy of identifying players\\ \hline \citet{Anthony2006} & particle filter & middle & \checkmark & & & & \checkmark & can track players only in image space, not in real-time moving camera system \\ \hline \citet{Manafifardd2017} & particle filter & low & & & \checkmark & & \checkmark & players' color features are pre-selected, but they are changing in each game\\ \hline \citet{Pedro2015} & particle filter & low & & \checkmark & & & \checkmark & lots of tracking id switches \\ \hline \citet{Aziz2018} & Kalman filter & low & & & \checkmark & & \checkmark & non-linear, non-Gaussian noises are ignored \\ \hline \citet{Jong2009} & Kalman filter & low & & & \checkmark & & \checkmark & this algorithm fails in the frames with crowded players \\ \hline \citet{Liu2011} & Kalman filter & low & & & \checkmark & & \checkmark & shot event is required for ball tracking \\ \hline \citet{Pratik2018} & contour tracking & middle & & \checkmark & & & \checkmark & offside event is required for tracking \\ \hline \citet{Sebastien2000,Sebastien2002} & contour tracking & middle & \checkmark & & & & \checkmark & high computational time \\ \hline \citet{Bikramjot2013} & contour tracking & middle & \checkmark & \checkmark & & & \checkmark & manual camera setting and zooming is required \\ \hline \citet{Michael2007} & contour tracking & low & & & \checkmark & & \checkmark & - \\ \hline \citet{Maochun2017} & contour tracking & low & & & \checkmark & & \checkmark & shooting arm is required for tracking\\ \hline \citet{Bodhisattwa2013} & shape matching & low & & & \checkmark & & \checkmark & long shot sequences are required for ball tracking\\ \hline \citet{Wayne2006} & shape matching & middle & & \checkmark & & & \checkmark & this algorithm fails in shadow \\ \hline \citet{Huang2007}& shape matching & low & & & \checkmark & & \checkmark & - \\ \hline \citet{Figueroa2004} & graph-based & middle & & \checkmark & & & \checkmark & -\\ \hline \citet{Pallavi2008} & graph-based & low & & & \checkmark & & \checkmark & short tracking sequence as it focuses on solving occlusion\\ \hline \citet{Junliang20011}& graph-based & low & & & \checkmark & & \checkmark & - \\ \hline \citet{Chen2017}& graph-based & low & & & \checkmark & & \checkmark & -\\ \hline \citet{Andrii2015} & graph-based & middle & \checkmark & & & & \checkmark & - \\ \hline \end{tabular} \end{adjustbox} \end{table} \section{Conclusion and future research directions} \label{sec:ind} According to a large number of citetd papers in this survey, computer vision researchers intensively investigate robust methods of optical tracking in sports. In this survey, we have categorized the literature according to the applied methods and video type they build on. Moreover, we elaborated on the detection phase, as a necessary preprocessing step for tracking by conventional and deep learning methods. We believe that this survey can significantly help quantitative analysts in sports to choose the most accurate, while cost-effective tracking method suitable for their analysis. Furthermore, the combination of traditional and deep learning methods can be rarely seen in the literature. Traditional models are time-consuming and require domain expertise due to some manual feature extraction tasks, while deep learning models are quite expensive to run in terms of computing resources. As possible future work, research may aim to combine those methods to increase the performance of tracking systems, along with the robust quantitative evaluation of the games. Another avenue for future work might be to minimize the computational costs of tracking systems with the aid of sophisticated data processing methods. We hope that this survey can give an insight to sports analytics researchers to recognize the gaps of state-of-the-art methods, and come up with novel solutions of tracking and quantitative analysis. \section*{Acknowledgment} Project no. 128233 has been implemented with the support provided by the Ministry of Innovation and Technology of Hungary from the National Research, Development and Innovation Fund, financed under the FK\_18 funding scheme. \bibliographystyle{unsrtnat}
{ "attr-fineweb-edu": 2.71875, "attr-cc_en_topic": 0, "domain": "arxiv" }
BkiUgC8241xiQRY-Tj5F
\section{Introduction} From 2017 to 2019, the Oklahoma City Thunder faced four elimination games across three playoff series. In each of these games, Russell Westbrook attempted over 30 shots and had an average usage rate of 45.5\%.\footnote{Usage percentage is an estimate of the percentage of team plays used by a player while they were on the floor. For a detailed formula see \url{www.basketball-reference.com/about/glossary.html}} The game in which Westbrook took the most shots came in the first round of the 2017-18 National Basketball Association (NBA) playoffs, where he scored 46 points on 43 shot attempts in a 96-91 loss to the Utah Jazz. At the time, many popular media figures conjectured that having one player dominate field goal attempts in this way would limit the Thunder's success. While scoring 46 points in a playoff basketball game is an impressive feat for any one player, its impact on the overall game score is moderated by the fact that it required 43 attempts. Perhaps not coincidentally, the Thunder lost three of these four close-out games and never managed to make it out of the first round of the playoffs. At its core, this critique is about shot efficiency. The term `shot efficiency' is used in various contexts within the basketball analytics community, but in most cases it has some reference to the average number of points a team or player scores per shot attempt. Modern discussion around shot efficiency in the NBA typically focuses on either shot selection or individual player efficiency. The concept of shot selection efficiency is simple: 3-pointers and shots near the rim have the highest expected points per shot, so teams should prioritize these high-value shots. The idea underlying individual player efficiency is also straightforward; scoring more points on the same number of shot attempts increases a team's overall offensive potential. However, when discussing a player's individual efficiency it is critical to do so in context of the lineup. Basketball is not a 1-v-1 game, but a 5-v-5 game. Therefore, when a player takes a shot, the opportunity cost not only includes all other shots this player could have taken later in the possession, but also the potential shots of their four teammates. So regardless of a player's shooting statistics relative to the league at large, a certain dimension of shot efficiency can only be defined relative to the abilities of a player's teammates. Applying this to the Oklahoma City Thunder example above, if Westbrook were surrounded by dismal shooters, 43 shot attempts might not only be defensible but also desirable. On the other hand, if his inordinate number of attempts prevented highly efficient shot opportunities from his teammates, then he caused shots to be inefficiently distributed and decreased his team's scoring potential. This aspect of efficiency---the optimal way to allocate shots within a lineup---is the primary focus of our paper. Allocative efficiency is spatially dependent. As illustrated in Figure \ref{fig:simpsons}, the distribution of shots within a lineup is highly dependent on court location. The left plot in Figure \ref{fig:simpsons} shows the overall relationship between shooting frequency (x-axis) and shooting skill (y-axis), while the four plots on the right show the same relationship conditioned on various court regions. Each dot represents a player, and the size of the dot is proportional to the number of shots the player took over the 2016-17 NBA regular season. To emphasize how shot allocation within lineups is spatially dependent, we have highlighted the Cleveland Cavaliers starting lineup, consisting of LeBron James, Kevin Love, Kyrie Irving, JR Smith, and Tristan Thompson. \begin{figure}[H] \begin{center} \includegraphics[trim={0 0cm 0 0cm}, clip, width=.49\textwidth]{overall_rate_by_pps.pdf} \includegraphics[trim={0 0cm 0 0cm}, clip, width=.49\textwidth]{region_rate_by_pps.pdf} \end{center} \caption{Left: overall relationship between field goal attempt rate (x-axis) and points per shot (y-axis). Right: same relationship conditioned on various court regions. The Cleveland Cavaliers 2016-17 starting lineup is highlighted in each plot. The weighted least squares fit of each scatter plot is overlaid in each plot by a dotted line.} \label{fig:simpsons} \end{figure} When viewing field goal attempts without respect to court location (left plot), Kyrie Irving appears to shoot more frequently than both Tristan Thompson and LeBron James, despite scoring fewer points per shot than either of them. However, after conditioning on court region (right plots), we see that Irving only has the highest FGA rate in the mid-range region, which is the region for which he has the highest PPS for this lineup. James takes the most shots in the restricted area and paint regions---regions in which he is the most efficient scorer. Furthermore, we see that Thompson's high overall PPS is driven primarily by his scoring efficiency from the restricted area and that he has few shot attempts outside this area. Clearly, understanding how to efficiently distribute shots within a lineup must be contextualized by spatial information. Notice that in the left panel of Figure \ref{fig:simpsons}, the relationship between field goal attempt (FGA) rate and points per shot (PPS) appears to be slightly negative, if there exists a relationship at all. Once the relationship between FGA rate and PPS is spatially disaggregated (see right hand plots of Figure \ref{fig:simpsons}), the previously negative relationship between these variables becomes positive in every region. This instance of Simpson's paradox has non-trivial implications in the context of allocative efficiency which we will discuss in the following section. The goal of our project is to create a framework to assess the strength of the relationship between shooting frequency and shooting skill spatially within lineups and to quantify the consequential impact on offensive production. Using novel metrics we develop, we quantify how many points are being lost through inefficient spatial lineup shot allocation, visualize where they are being lost, and identify which players are responsible. \subsection{Related Work} \label{sec:related_work} In recent years, a number of metrics have been developed which aim to measure shot efficiency, such as true shooting percentage \citep{kubatko2007}, qSQ, and qSI \citep{chang2014}. Additionally, metrics have been developed to quantify individual player efficiency, such as Hollinger's player efficiency rating \citep{bbref_per}. While these metrics intrinsically account for team context, there have been relatively few studies which have looked at shooting decisions explicitly in context of lineup, and none spatially. \cite{goldman2011} coined the term `allocative efficiency', modeling the decision to shoot as a dynamic mixed-strategy equilibrium weighing both the continuation value of a possession and the value of a teammate's potential shot. They propose that a team achieves optimal allocative efficiency when, at any given time, the lineup cannot reallocate the ball to increase productivity on the margin. Essentially, they argue that lineups optimize over all dimensions of an offensive strategy to achieve equal marginal efficiency for every shot. The left plot of Figure \ref{fig:simpsons} is harmonious with this theory---there appears to be no relationship between player shooting frequency and player shooting skill when viewed on the aggregate. However, one of the most important dimensions the players optimize over is court location. Once we disaggregate the data by court location, (as shown in the right plots of Figure \ref{fig:simpsons}), we see a clear relationship between shooting frequency and shooting skill. A unique contribution of our work is a framework to assess this spatial component of allocative efficiency. 'Shot satisfaction' \citep{cervone2016} is another rare example of a shot efficiency metric that considers lineups. Shot satisfaction is defined as the expected value of a possession conditional on a shot attempt (accounting for various contextual features such as the shot location, shooter, and defensive pressure at the time of the shot) minus the unconditional expected value of the play. However, since shot satisfaction is marginalized over the allocative and spatial components, these factors cannot be analyzed using this metric alone. Additionally, shot satisfaction is dependent on proprietary data which limits its availability to a broad audience. \subsection{Data and Code} The data used for this project is publicly available from the NBA stats API (stats.nba.com). Shooter information and shot $(x, y)$ locations are available through the 'shotchartdetail' API endpoint, while lineup information can be constructed from the 'playbyplayv2' endpoint. Code for constructing lineup information from play-by-play data is available at: \url{https://github.com/jwmortensen/pbp2lineup}. Using this code, we gathered a set of 224,567 shots taken by 433 players during the 2016-17 NBA regular season, which is the data used in this analysis. Code used to perform an empirical version of the analysis presented in this paper is also available online: \url{https://github.com/nsandholtz/lpl}. \section{Models} The foundation of our proposed allocative efficiency metrics rest on spatial estimates of both player FG\% and field goal attempt (FGA) rates. With some minor adjustments, we implement the FG\% model proposed in \cite{cervone2016}. As this model is the backbone of the metrics we propose in Section 3, we thoroughly detail the components of their model in Section 2.1. In Section 2.2, we present our model for estimating spatial FGA rates. \subsection{Estimating FG\% Surfaces} Player FG\% is a highly irregular latent quantity over the court space. In general, players make more shots the closer they are to the hoop, but some players are more skilled from a certain side of the court and others specialize from very specific areas, such as the corner 3-pointer. In order to capture these kinds of non-linear relationships, \cite{cervone2016} summarizes the spatial variation in player shooting skill by a Gaussian process represented by a low-dimensional set of deterministic basis functions. Player-specific weights are estimated for the basis functions using a Bayesian hierarchical model \citep{gelman2013bayesian}. This allows the model to capture nuanced spatial features that player FG\% surfaces tend to exhibit, while maintaining a feasible dimensionality for computation. We model the logit of $\pi_j(\mathbf{s})$, the probability that player $j$ makes a shot at location $\mathbf{s}$, as a linear model: \begin{align} \text{log}\Big(\frac{\pi_j(\mathbf{s})}{1 - \pi_j(\mathbf{s})}\Big) = \pmb{\beta}^\prime \mathbf{x} + Z_j(\mathbf{s}). \label{eq:main_model} \end{align} Here $\pmb{\beta}$ is a $4 \times 1$ vector of covariate effects and $\mathbf{x}$ is a $4 \times 1$ vector of observed covariates for the shot containing an intercept, player position, shot distance, and the interaction of player position and shot distance. $Z_j(\mathbf{s})$ is a Gaussian process which accounts for the impact of location on the probability of player $j$ making a shot and is modeled using a functional basis representation, \begin{equation} Z_j(\mathbf{s}) = \mathbf{w}_j^\prime \pmb{\Lambda} \pmb{\Psi}(\mathbf{s}), \label{eq:gp_construction} \end{equation} where $\mathbf{w}_j = (\text{w}_{j1}, \dots, \text{w}_{jD})^\prime$ denotes the latent basis function weights for player $j$ and $\pmb{\Lambda} \pmb{\Psi}(\mathbf{s})$ denotes the basis functions. Specifically, $\pmb{\Lambda} = (\pmb{\lambda}_1^\prime, \dots, \pmb{\lambda}_D^\prime)^\prime$ is a $D \times K$ matrix, where each row vector $\pmb{\lambda}_d$ represents the projection of the $d$th basis function onto a triangular mesh with $K$ vertices over the offensive half court (more details on the construction of $\pmb{\Lambda}$ follow below). We use the mesh proposed in \cite{cervone2016}, which was selected specifically for modeling offensive spatial behaviour in basketball. $\pmb{\Psi}(\mathbf{s}) = (\psi_1(\mathbf{s}),\dots,\psi_K(\mathbf{s}))^\prime$ is itself a vector of basis functions where each $\psi_k(\mathbf{s})$ is 1 at mesh vertex $k$, 0 at all other vertices, and values at the interior points of each triangle are determined by linear interpolation between vertices (see \cite{Lindgren2011} for details). Finally, we assume $\mathbf{w}_j \sim \mathcal{N}(\pmb{\omega}_j, \pmb{\Sigma}_j)$, which makes (\ref{eq:gp_construction}) a Gaussian process with mean $\pmb{\omega}_j^\prime \pmb{\Lambda} \pmb{\Psi}(\mathbf{s})$ and covariance function Cov$(\mathbf{s}_1, \mathbf{s}_2) = \pmb{ \Psi}(\mathbf{s}_1)^\prime \pmb{\Lambda}^\prime \pmb{\Sigma}_j \pmb{\Lambda} \pmb{\Psi}(\mathbf{s}_2)$. Following \cite{Miller2014}, the bases of shot taking behavior, $\pmb \Lambda$, are computed through a combination of smoothing and non-negative matrix factorization (NMF) \citep{Lee1999}. Using integrated nested Laplace approximation (INLA) as the engine for our inference, we first fit a log Gaussian Cox Process (LGCP) \citep{Banerjee2015} independently to each player's point process defined by the $(x,y)$ locations of their made shots using the aforementioned mesh.\footnote{Players who took less than five shots in the regular season are treated as ``replacement players.''} Each player's estimated intensity function is evaluated at each vertex, producing a $K$-dimensional vector for each of the $L = 433$ players in our data. These vectors are exponentiated and gathered (by rows) into the $L \times K$ matrix $\mathbf{P}$, which we then factorize via NMF: \begin{align} \mathbf{P} \approx \bigg(\underset{L\times D}{\mathbf{B}}\bigg) \bigg(\underset{D\times K}{\pmb{\Lambda} }\bigg). \label{eq:NMF} \end{align} This yields $\pmb \Lambda$, the deterministic bases we use in \eqref{eq:gp_construction}. While the bases from (\ref{eq:NMF}) are constructed solely with respect to the spatial variation in the FGA data (i.e. no basketball-specific structures are induced a priori), the constraint on the number of bases significantly impacts the basis shapes. In general, the NMF tends to first generate bases according to shot distance. After accounting for this primary source of variation, other systematic features of variation begin to appear in the bases, notably asymmetry. We use D = 16 basis functions, aligning with \cite{Miller2014} which suggests the optimal number of basis functions falls between 15 and 20. Collectively, these bases comprise a comprehensive set of shooting tendencies, as shown in Figure \ref{fig:Lambda}. We have added labels post hoc to provide contextual intuition. \begin{figure}[H] \begin{center} \includegraphics[trim={0 7cm 0 7cm}, clip, width=1\textwidth]{Lambda_1_to_8.pdf} \includegraphics[trim={0 7cm 0 7cm}, clip, width=1\textwidth]{Lambda_9_to_16.pdf} \end{center} \caption{Deterministic bases resulting from the non-negative matrix factorization of $\mathbf{P}$. The plots are arranged such that the bases closest to the hoop are on the left (e.g. Under Hoop) and the bases furthest from the hoop are on the right (e.g. Center Arc 3). The residual basis, comprising court locations where shots are infrequently attempted from, is shown in the bottom-right plot.} \label{fig:Lambda} \end{figure} Conceptually, the $Z_j(\mathbf{s})$ term in (\ref{eq:main_model}) represents a player-specific spatial `correction' to the global regression model $\pmb{\beta}^\prime \mathbf{x}$. These player-specific surfaces are linear combinations of the bases shown in Figure \ref{fig:Lambda}. The weights of these combinations, $\mathbf{w}_j$, are latent parameters which are jointly estimated with $\pmb{\beta}$. Since these player weights can be highly sensitive for players with very little data, it is imperative to introduce a regularization mechanism on them, which is accomplished using a conditionally autoregressive (CAR) prior. Conveniently, the NMF in (\ref{eq:NMF}) provides player-specific loadings onto these bases, $\mathbf{B}$, which we use in constructing this CAR prior on the basis weights, $\mathbf{w}_j$ \citep{Besag1974}. The purpose of using a CAR prior on the basis weights is to shrink the FG\% estimates of players with similar shooting characteristics toward each other. This is integral for obtaining realistic FG\% estimates in areas where a player took a low volume of shots. With only a handful of shots from an area, a player's empirical FG\% can often be extreme (e.g. near 0\% or 100\%). The CAR prior helps to regularize these extremes by borrowing strength from the player's neighbors in the estimation. In order to get some notion of shooting similarity between players, we calculate the Euclidean distance between the player loadings contained in $\mathbf{B}$ and, for a given player, define the five players with the closest player loadings as their neighbors. This is intentionally chosen to be fewer than the number of neighbors selected by Cervone, recognizing that more neighbors defines a stronger prior and limits player-to-player variation in the FG\% surfaces. We enforce symmetry in the nearest-neighbors relationship by assuming that if player $j$ is a neighbor of player $\ell$, then player $\ell$ is also a neighbor of player $j$, which results in some players having more than five neighbors. These relationships are encoded in a player adjacency matrix $\mathbf{H}$ where entry $(j, \ell)$ is 1 if player $\ell$ is a neighbor of player $j$ and 0 otherwise. The CAR prior on $\mathbf{w}_{j}$ can be specified as \begin{align} (\mathbf{w}_{j} | \mathbf{w}_{-(j)}, \tau^2) &\sim \mathcal{N}\Bigg(\frac{1}{n_j}\sum_{\ell:H_{j\ell} = 1} \mathbf{w}_{\ell}, \frac{\tau^2}{n_j}\mathbf{I}_D \Bigg) \\ \tau^2 &\sim \text{InvGam}(1,1). \nonumber \end{align} where $n_j$ is the total number of neighbors for player $j$. Lastly, we set a $\mathcal{N}(\mathbf{0}, 0.001 \times \mathbf{I})$ prior on $\pmb{\beta}$, and fit the model using INLA. This yields a model that varies spatially and allows us to predict player-specific FG\% at any location in the offensive half court. In order to get high resolution FG\% estimates, we partition the court into 1 ft. by 1 ft. grid cells (yielding a total of $M$ = 2350 cells) and denote player $j$'s FG\% at the centroid of grid cell $i$ as $\xi_{ij}$. The projection of the FG\% posterior mean ($\widehat{\pmb{\xi}}_{j}$) for LeBron James is depicted in Figure \ref{fig:example_fgp_surf}. In order to have sufficient data to reliably estimate these surfaces, we assume that player FG\%s are lineup independent. We recognize this assumption may be violated in some cases, as players who draw significant defensive attention can improve the FG\%s of their teammates by providing them with more unguarded shot opportunities. Additionally, without defensive information about the shot opportunities, the FG\% estimates are subject to systematic bias. Selection bias is introduced by unequal amounts of defensive pressure applied to shooters of different skill levels. The Bayesian modeling framework can amplify selection bias as well. Since the FG\% estimates are regularized in our model via a CAR prior, players FG\% estimates shrink toward their neighbors (which we've defined in terms of FGA rate). While this feature stabilizes estimates for players with low sample sizes, it can be problematic when entire neighborhoods have low sample sizes from specific regions. For example, there are many centers who rarely or never shoot from long range. Consequently, the entire neighborhood shrinks toward the global mean 3-point FG\%, inaccurately inflating these players' FG\%s beyond the 3-point line. These are intriguing challenges and represent promising directions for future work. \begin{figure}[H] \includegraphics[trim={.5cm 5cm 0cm 5cm}, clip, width=1\textwidth]{example_fgp_surf_2017_2544.pdf} \caption{LeBron James 2016-17 FG\% posterior mean (left) and posterior standard deviation (right) projected onto the offensive half court. The prediction surfaces shown here and throughout the figures in this paper utilize projections onto a spatial grid of 1 ft. by 1 ft. cells.} \label{fig:example_fgp_surf} \end{figure} \subsection{Determining FGA Rate Surfaces} We determine a player's FGA rate surface by smoothing their shot attempts via a LGCP. This model has the form $$\log{\lambda(\mathbf{s})} = \beta_0 + Z(\mathbf{s}),$$ where $\lambda(\mathbf{s})$ is the Poisson intensity indicating the number of expected shots at location $\mathbf{s}$, $\beta_0$ is an intercept, and $Z(\mathbf{s})$ is a Gaussian process. We fit this model separately for each player using INLA, following the approach in \cite{Simpson2015}. In brief, they demonstrate that the likelihood for the LGCP can be approximated using a finite-dimensional Gaussian random field, allowing $Z(\mathbf{s})$ to be represented by the basis function expansion $Z(\mathbf{s}) = \sum_{b=1}^B z_b\phi_b(\mathbf{s})$. The basis function $\phi_b(\mathbf{s})$ projects shot location onto a triangular mesh akin to the one detailed for \eqref{eq:gp_construction}. The expected value of $\lambda(\mathbf{s})$ integrated over the court is equal to the number of shots a player has taken, however there can be small discrepancies between the fitted intensity function and the observed number of shots. In order to ensure consistency, we scale the resulting intensity function to exactly yield the player's observed number of shot attempts in that lineup. We normalize the surfaces to FGA per 36 minutes by dividing by the total number of minutes played by the associated lineup and multiplying by 36, allowing us to make meaningful comparisons between lineups who differ in the amount of minutes played. As with the FG\% surfaces ($\pmb{\xi}$), we partition the court into 1 ft. by 1 ft. grid cells and denote player $j$'s FGA rate at the centroid of grid cell $i$ as $A_{ij}$. Note that we approach the FGA rate estimation from a fundamentally different perspective than the FG\% estimation. We view a player's decision to shoot the ball as being completely within their control and hence non-random. As such, we incorporate no uncertainty in the estimated surfaces. We use the LGCP as a smoother for observed shots rather than as an estimate of a player's true latent FGA rate. Other smoothing methods (e.g. kernel based methods \citep{Diggle1985}) could be used instead. Depending on the player and lineup, a player's shot attempt profile can vary drastically from lineup to lineup. Figure \ref{fig:kyrie_fga_differences} shows Kyrie Irving's estimated FGA rate surfaces in the starting lineup (left) and the lineup in which he played the most minutes without LeBron James (middle). Based on these two lineups, Irving took 9.2 more shots per 36 minutes when he didn't share the court with James. He also favored the left side of the court far more, which James tends to dominate when on the court. \begin{figure}[H] \centering \includegraphics[trim={0cm 2cm 0cm 1.75cm}, clip, width=1\textwidth]{kyrie_fga_differences.pdf} \caption{Left: Kyrie Irving's FGA rate per 36 minutes in the starting lineup (in which he shared the most minutes with LeBron James). Center: Kyrie Irving's FGA rate per 36 minutes in the lineup for which he played the most minutes without LeBron James. Right: The difference of the center surface from the left surface.} \label{fig:kyrie_fga_differences} \end{figure} Clearly player shot attempt rates are not invariant to their teammates on the court. We therefore restrict player FGA rate estimation to lineup-specific data. Fortunately, the additional sparsity introduced by conditioning on lineup is a non-issue. If a player has no observed shot attempts from a certain region (e.g, Tristan Thompson from 3-point range), this simply means they chose not to shoot from that region---we don't need to borrow strength from neighboring players to shed light on this area of ``incomplete data". \section{Allocative Efficiency Metrics} \label{sec:lpl} The models for FG\% and FGA rate described in Section 2 are the backbone of the allocative efficiency metrics we introduce in this section: lineup points lost (LPL) and player LPL contribution (PLC). Before getting into the details, we emphasize that these metrics are agnostic to the underlying FG\% and FGA models; they can be implemented using even crude estimates of FG\% and FGA rate, for example, by dividing the court into discrete regions and using the empirical FG\% and FGA rate within each region.\footnote{Section \ref{sec:empirical_example} in the appendix shows how LPL can be calculated using empirical estimates of FG\% and FGA rate. We use the Cavaliers starting lineup to compare these empirical LPL surfaces to the more sophisticated versions presented in the main text.} Also note that the biases affecting FG\% and FGA rate described in Section 2 may affect the allocative efficiency metrics as well. Section 4 includes a discussion of the causal limitations of the approach. LPL is the output of a two-step process. First, we redistribute a lineup's observed distribution of shot attempts according to a proposed optimum. This optimum is based on ranking the five players in the lineup with respect to their FG\% and FGA rate and then redistributing the shot attempts such that the FG\% ranks and FGA rate ranks match. Second, we estimate how many points could have been gained had a lineup's collection of shot attempts been allocated according to this alternate distribution. In this section, we go over each of these steps in detail and conclude by describing PLC, which measures how individual players contribute to LPL. \subsection{Spatial Rankings Within a Lineup} With models for player FG\% and player-lineup FGA rate, we can rank the players in a given lineup (from 1 to 5) on these metrics at any spot on the court. For a given lineup, let $\pmb{R}_{i}^{\xi}$ be a discrete transformation of $\pmb{\xi}_i$---the lineup's FG\% vector in court cell $i$---yielding each player's FG\% rank relative to their four teammates. Formally, \begin{align} R_{ij}^{\xi} = \{(n_{{\xi_i}} + 1) - k : \xi_{ij} \equiv \xi^{(k)}_i\}, \label{eq:fgp_rank} \end{align} where $n_{{\xi_i}}$ is the length of $\pmb{\xi}_i$, the vector being ranked (this length will always be 5 in our case), and $\xi^{(k)}_i$ is the $k$th order statistic of $\pmb{\xi}_i$. Since $\xi_{ij}$ is a stochastic quantity governed by a posterior distribution, $R_{ij}^{\xi}$ is also distributional, however its distribution is discrete, the support being the integers $\{1,2,3,4,5\}$. The distribution of $R_{ij}^{\xi}$ can be approximated by taking posterior samples of $\pmb{\xi}_i$ and ranking them via (\ref{eq:fgp_rank}). Figure \ref{fig:example_fgp_ranks} in the appendix shows the 20\% quantiles, medians, and 80\% quantiles of the resulting transformed variates for the Cavaliers starting lineup. We obtain ranks for FGA rates in the same manner as for FG\%, except these will instead be deterministic quantities since the FGA rate surfaces, $\pmb{A}$, are fixed. We define $R_{ij}^A$ as \begin{align} R_{ij}^{A} = \{(n_{{A_i}} + 1) - k : A_{ij} \equiv A^{(k)}_i\}, \label{eq:fga_rank} \end{align} where $n_{{A_i}}$ is the length of $\pmb{A}_i$ and $A^{(k)}_i$ is the $k$th order statistic of $\pmb{A}_i$. Figure \ref{fig:example_fga_ranks} shows the estimated maximum a posteriori\footnote{For the FG\% rank surfaces we use the MAP estimate in order to ensure the estimates are always in the support of the transformation (i.e. to ensure $\widehat{R}_{ij}^{\xi} \in \{1,\ldots, 5\}$). For parameters with continuous support, such as $\widehat{\pmb{\xi}}$, the hat symbol denotes the posterior mean.} (MAP) FG\% rank surfaces, $\widehat{\pmb{R}}^{\xi}$, and the deterministic FGA rate rank surfaces, $\pmb{R}^{A}$, for the Cleveland Cavaliers starting lineup. \begin{figure}[H] \includegraphics[trim={0cm 5cm 0cm 5.5cm}, clip, width=1\textwidth]{fgp_map_ranks_CLE_1.pdf} \includegraphics[trim={0cm 1cm 0cm 11cm}, clip, width=1\textwidth]{fga_ranks_CLE_1.pdf} \caption{Top: Estimated FG\% ranks for the Cleveland Cavaliers' starting lineup. Bottom: Deterministic FGA rate ranks.} \label{fig:example_fga_ranks} \end{figure} The strong correspondence between $\widehat{\pmb{R}}^{\xi}$ and $\pmb{R}^{A}$ shown in Figure \ref{fig:example_fga_ranks} is not surprising; all other factors being equal, teams would naturally want their most skilled shooters taking the most shots and the worst shooters taking the fewest shots in any given location. By taking the difference of a lineup's FG\% rank surface from its FGA rate rank surface, $\pmb{R}^{A} - \widehat{\pmb{R}}^{\xi}$, we obtain a surface which measures how closely the lineup's FG\% ranks match their FGA rate ranks. Figure \ref{fig:example_rank_corr} shows these surfaces for the Cavaliers' starting lineup. \begin{figure}[H] \includegraphics[trim={0cm 4.5cm 0cm 4.4cm}, clip, width=1\textwidth]{rank_corr_CLE_1.pdf} \caption{Rank correspondence surfaces for the Cleveland Cavaliers' starting lineup.} \label{fig:example_rank_corr} \end{figure} Note that rank correspondence ranges from -4 to 4. A value of -4 means that the worst shooter in the lineup took the most shots from that location, while a positive 4 means the best shooter took the fewest shots from that location. In general, positive values of rank correspondence mark areas of potential under-usage and negative values show potential over-usage. For the Cavaliers, the positive values around the 3-point line for Kyrie Irving suggest that he may be under-utilized as a 3-point shooter. On the other hand, the negative values for LeBron James in the mid-range region suggest that he may be over-used in this area. We emphasize, however, that conclusions should be made carefully. Though inequality between the FG\% and FGA ranks may be indicative of sub-optimal shot allocation, this interpretation may not hold in every situation due to bias introduced by confounding variables (e.g. defensive pressure, shot clock, etc.). \subsection{Lineup Points Lost} By reducing the FG\% and FGA estimates to ranks, we compromise the magnitude of player-to-player differences within lineups. Here we introduce lineup points lost (LPL), which measures deviation from perfect rank correspondence while retaining the magnitudes of player-to-player differences in FG\% and FGA. LPL is defined as the difference in expected points between a lineup's actual distribution of FG attempts, $\pmb{A}$, and a proposed redistribution, $\pmb{A}^*$, constructed to yield perfect rank correspondence (i.e. $\pmb{R}^{A^*} - \pmb{R}^{\xi} = \pmb{0}$). Formally, we calculate LPL in the $i$th cell as \begin{align} \text{LPL}_i &= \sum_{j = 1}^5 \text{v}_i \cdot \xi_{ij} \cdot \big(A_{i[g(R^{\xi}_{ij})]} - A_{ij}\big) \label{eq:lpl1} \\ &= \sum_{j = 1}^5 \text{v}_i \cdot \xi_{ij} \cdot \big(A^*_{ij} - A_{ij}\big), \label{eq:lpl2} \end{align} where $\text{v}_i$ is the point value (2 or 3) of a made shot, $\xi_{ij}$ is the FG\% for player $j$ in cell $i$, $A_{ij}$ is player $j$'s FG attempts (per 36 minutes) in cell $i$, and $g(R^{\xi}_{ij}) = \{k:~ R^{\xi}_{ij} \equiv R^A_{ik}\}$. The function $g(\cdot)$ reallocates the observed shot attempt vector $\pmb{A}_{i}$ such that the best shooter always takes the most shots, the second best shooter takes the second most shots, and so forth. Figure \ref{fig:toy_LPL} shows a toy example of how LPL is computed for an arbitrary 3-point region, contextualized via the Cleveland Cavaliers starting lineup. In this hypothetical scenario, James takes the most shots despite both Love and Irving being better shooters from this court region. When calculating LPL for this region, Irving is allocated James' nine shots since he is the best shooter in this area. Love, as the second best shooter, is allocated Irving's four shots (which was the second most shots taken across the lineup). James, as the third best shooter, is allocated the third most shot attempts (which is Love's three shots). Smith and Thompson's shot allocations are unchanged since their actual number of shots harmonizes with the distribution imposed by $g(\cdot)$. Each player's actual expected points and optimal expected points are calculated by multiplying their FG\% by the corresponding number of shots and the point-value of the shot (3 points in this case). LPL is the difference (in expectation) between the optimal points and the actual points, which comes out to 0.84. \begin{figure}[H] \centering \includegraphics[trim={0cm 0cm 0cm 0cm}, clip, width=1\textwidth]{toy_lpl.pdf} \caption{A toy LPL computation in an arbitrary 3-point court region for the Cleveland Cavalier's starting lineup. The players are ordered from left to right according to FG\% (best to worst). Below each player's picture is the number of actual shots the player took from this location. The black arrows show how the function $g(\cdot)$ reallocates these shots according to the players' FG\% ranks. The filled gray dots show the number of shots the player would be allocated according to the proposed optimum. Below the horizontal black line, each player's actual expected points and optimal expected points are calculated by multiplying their FG\% by the corresponding number of shots and the point value of the shot. LPL is the difference (in expectation) between the optimal points and the actual points.} \label{fig:toy_LPL} \end{figure} The left plot of Figure \ref{fig:example_LPL} shows $\widehat{\text{LPL}}$ (per 36 minutes) over the offensive half court for Cleveland's starting lineup, computed using the posterior mean of $\pmb{\xi}$.\footnote{Since $\text{LPL}_i$ is a function of $\pmb{\xi}_i$, which is latent, the uncertainty in $\text{LPL}_i$ is proportional to the posterior distribution of $\sum_{j = 1}^5 \xi_{ij}$. Figures \ref{fig:CLE_lpl_distribution}-\ref{fig:LPL_uncertainty_CLE_1} in the appendix illustrate the distributional nature of LPL.} Notice that the LPL values are highest around the rim and along the 3-point line. These regions tend to dominate LPL values because the density of shot attempts is highest in these areas. \begin{figure}[H] \centering \includegraphics[trim={0cm 2.5cm 0cm 2.4cm}, clip, width=1\textwidth]{LPL_mean_fg_CLE_1.pdf} \caption{$\widehat{\text{LPL}}$ and $\widehat{\text{LPL}}^{Shot}$ surfaces for the Cleveland Cavaliers starting lineup.} \label{fig:example_LPL} \end{figure} If we re-normalize LPL with respect to the number of shots taken in each court cell we can identify areas of inefficiency that do not stand out due to low densities of shot attempts: \begin{align} \text{LPL}_i^{Shot} &= \frac{\text{LPL}_i}{\sum_{j = 1}^5 A_{ij}}. \label{eq:lpl_per_shot} \end{align} This formulation yields the average lineup points lost per shot from region $i$, as shown in the right plot of Figure \ref{fig:example_LPL}. LPL incorporates an intentional constraint---for any court cell $i$, $\pmb{A}^*_{i}$ is constrained to be a \textit{permutation} of $\pmb{A}_{i}$. This ensures that no single player can be reallocated every shot that was taken by the lineup (unless a single player took all of the shots from that region to begin with). It also ensures that the total number of shots in the redistribution will always equal the observed number of shots from that location \big(i.e. $\sum_{j = 1}^5 A_{ij} = \sum_{j = 1}^5 A^*_{ij}$, for all $i$\big). Ultimately, LPL aims to quantify the points that could have been gained had a lineup adhered to the shot allocation strategy defined by $\pmb{A}^*$. However, as will be detailed in Section \ref{sec:optimality}, there is not a 1-to-1 relationship between `lineup points' as defined here, and actual points. In other words, reducing the total LPL of a team's lineup by 1 doesn't necessarily correspond to a 1-point gain in their actual score. In fact, we find that a 1-point reduction in LPL corresponds to a 0.6-point gain (on average) in a team's actual score. One reason for this discrepancy could be because LPL is influenced by contextual variables that we are unable to account for in our FG\% model, such as the shot clock and defensive pressure. Another may be due to a tacit assumption in our definition of LPL. By holding each player's FG\% constant despite changing their volume of shots when redistributing the vector of FG attempts, we implicitly assume that a player's FG\% is independent of their FGA rate. The basketball analytics community generally agrees that this assumption does not hold---that the more shots a player is allocated, the less efficient their shots become. This concept, referred to as the `usage-curve' or `skill-curve', was introduced in \cite{oliver2004basketball} and has been further examined in \cite{goldman2011}. Incorporating usage curves into LPL could be a promising area of future work. \subsection{Player LPL Contribution} LPL summarizes information from all players in a lineup into a single surface, compromising our ability to identify how each individual player contributes to LPL. Fortunately, we can parse out each player's contribution to LPL and distinguish between points lost due to undershooting and points lost due to overshooting. We define player $j$'s LPL contribution (PLC) in court location $i$ as \begin{align} \text{PLC}_{ij} &= \text{LPL}_{i} \times \Bigg(\frac{A^*_{ij} - A_{ij}}{\sum_{j = 1}^5 |A^*_{ij} - A_{ij}|}\Bigg), \label{eq:plc} \end{align} where all terms are as defined in the previous section. The parenthetical term in (\ref{eq:plc}) apportions $\text{LPL}_{i}$ among the 5 players in the lineup proportional to the size of their individual contributions to $\text{LPL}_{i}$. Players who are reallocated more shots under $\pmb{A}^*_{i}$ compared to their observed number of shot attempts will have $\text{PLC}_{ij} > 0$. Therefore, positive PLC values indicate potential undershooting and negative values indicate potential overshooting. As in the case of LPL, if we divide PLC by the sum of shot attempts in cell $i$, we obtain average PLC per shot from location $i$: \begin{align} \text{PLC}_i^{Shot} &= \frac{\text{PLC}_i}{\sum_{j = 1}^5 A_{ij}}. \label{eq:plc_per_shot} \end{align} The $\text{PLC}_i^{Shot}$ surfaces for the Cleveland Cavaliers' 2016-17 starting lineup are shown in Figure \ref{fig:example_PLC}. We see that Kyrie Irving is potentially being under-utilized from beyond the arc and that LeBron James is potentially over-shooting from the top of the key, which is harmonious with our observations from Figure \ref{fig:example_rank_corr}. However, it is worth noting that the LPL per 36 plot (left plot in Figure \ref{fig:example_LPL}) shows very low LPL values from the mid-range region since the Cavaliers have a very low density of shots from this area. So while it may be true that LeBron tends to overshoot from the top of the key relative to his teammates, the lineup shoots so infrequently from this area that the inefficiency is negligible. \begin{figure}[H] \centering \includegraphics[trim={0cm 3.8cm 0cm 3.8cm}, clip, width=1\textwidth]{PLC_map_fg_CLE_1.pdf} \caption{$\widehat{\text{PLC}}^{Shot}$ surfaces for the Cleveland Cavaliers starting lineup.} \label{fig:example_PLC} \end{figure} For every red region in Figure \ref{fig:example_PLC} (undershooting) there are corresponding blue regions (overshooting) among the other players. This highlights the fact that LPL is made up of balancing player contributions from undershooting and overshooting; for every player who overshoots, another player (or combination of players) undershoots. By nature of how LPL is constructed, there cannot be any areas where the entire lineup overshoots or undershoots. For this reason, our method does not shed light on shot selection. LPL and PLC say nothing about whether shots from a given region are efficient or not, instead they measure how efficiently a lineup adheres to optimal allocative efficiency given the shot attempts from that region. \section{Optimality - Discussion and Implications} \label{sec:optimality} We have now defined LPL and given the theoretical interpretation (i.e. overuse and underuse), but we have not yet established that this interpretation is valid in practice. The utility of LPL as a diagnostic tool hinges on the answers to four questions, which we explore in detail in this section: \begin{center} \begin{quote} \normalsize{ 1. Do lineups minimize LPL? \\ 2. Does LPL relate to offensive production? \\ 3. How can LPL inform strategy?\\ 4. Is minimizing LPL always optimal? } \end{quote} \end{center} \subsection{Do lineups minimize LPL?} \label{sec:minimize} In Figure \ref{fig:example_LPL}, cell values range from 0 to 0.008, and the sum over all locations in the half court is 0.68. While this suggests that the Cavaliers' starters were minimizing LPL, we need a frame of reference to make this claim with certainty. The frame of reference we will use for comparison is the distribution of LPL under completely random shot allocation. This is not to suggest offenses select shooting strategies randomly. Rather, a primary reason why lineups fail to effectively minimize LPL is because the defense has the opposite goal; defenses want to get the opposing lineup to take shots from places they are bad at shooting from. In other words, while the offense is trying to minimize LPL, the defense is trying to maximize LPL. By comparing LPL against random allocation, this provides a general test for whether offenses are able to pull closer to the minimum than defenses are able to pull toward the maximum, or the absolute worst allocation possible. In statistical terms, this comparison can be stated as a hypothesis test. We are interested in testing the null hypothesis that offenses minimize and defenses maximize LPL with equal magnitudes. We consider a one-sided alternative---that the offensive minimization outweighs the defensive response (as measured on by LPL). A permutation test allows us to test these hypotheses by comparing a lineup's observed total LPL (summing over all court locations, $\sum_i^M \textnormal{LPL}_i$, where $M$ is the total number of 1 ft. by 1 ft. cells in the half court) against the total LPL we would expect under completely random shot allocation. To ensure the uncertainty in $\pmb{\xi}$ is accounted for, we simulate variates of the test statistic $T$ as \begin{align} T &= \sum_{i=1}^M \widetilde{\textnormal{LPL}}_{i=1}^{H_0} - \sum_{i=1}^M \widetilde{\textnormal{LPL}}_i \label{eq:lpl_random1} \\ &= \Bigg(\sum_{i=1}^M \sum_{j = 1}^5 \text{v}_i \cdot \widetilde{\xi}_{ij} \cdot \big(A^*_{ij} - A^{\dagger}_{ij}\big)\Bigg) - \Bigg(\sum_{i=1}^M \sum_{j = 1}^5 \text{v}_i \cdot \widetilde{\xi}_{ij} \cdot \big(A^*_{ij} - A_{ij}\big)\Bigg) \label{eq:lpl_random2} \\ &= \sum_{i=1}^M \sum_{j = 1}^5 \text{v}_i \cdot \widetilde{\xi}_{ij} \cdot \big(A_{ij} - A^{\dagger}_{ij}\big), \label{eq:lpl_random3} \end{align} where $\widetilde{\xi}_{ij}$ is a sample from player $j$'s posterior distribution of FG\% in cell $i$, $A^{\dagger}_{ij}$ is the $j$th element of a random permutation of the observed FGA rate vector $\pmb{A}_{i}$, and all other symbols are defined as in (\ref{eq:lpl1})-(\ref{eq:lpl2}). Note that a \textit{different} random permutation is drawn for each court cell $i$. After simulating 500 variates from the null distribution, we approximate the one-sided p-value of the test as the proportion of variates that are less than 0. Figure \ref{fig:lpl_permutation_test} illustrates this test for the Cleveland Cavaliers' starting lineup. The gray bars show a histogram of the variates from (\ref{eq:lpl_random1}). Bars to the left of the dashed line at 0 represent variates for which random allocation outperforms the observed allocation. The approximate p-value of the test in this case is 1/500, or 0.002. We can therefore say with high certainty that the Cleveland starters minimize LPL beyond the defense's ability to prevent them from doing so. \begin{figure}[H] \centering \includegraphics[trim={0cm 1.2cm 0cm .8cm}, clip, width=.75\textwidth]{lpl_permutation_test_version2.pdf} \caption{Permutation test for the Cleveland Cavaliers' 2016-17 starting lineup. The gray bars show a histogram of the variates from (\ref{eq:lpl_random1}). The approximate p-value for the Cavaliers starting lineup (i.e. the proportion of variates that are less than 0) is 1/500 or 0.002.} \label{fig:lpl_permutation_test} \end{figure} The computational burden of performing the test precludes performing it for every lineup, but we did perform the test for each team's 2016-17 starting lineup. The results are shown in Table \ref{tab:p-value}. Across the NBA's starting lineups, only two teams had no variates less than 0---the Golden State Warriors and the Portland Trailblazers. The Sacramento Kings showed the worst allocative efficiency with an approximate p-value of 0.44 for their starting lineup. Based on these results we are confident that most lineups employ shot allocation strategies that minimize LPL to some degree, though it appears that some teams do so better than others. \begin{table}[ht] \small \centering \textbf{Approximate p-values for} $\text{H}^0$ \textbf{vs.} $\text{H}^{\text{A}}$ \resizebox{\textwidth}{!}{\begin{tabular}{rlllllllllllllll} \hline & 1 & 2 & 3 & 4 & 5 & 6 & 7 & 8 & 9 & 10 & 11 & 12 & 13 & 14 & 15 \\ \hline Team & GSW & POR & CLE & LAC & ATL & HOU & TOR & IND & LAL & DET & DEN & NOP & CHA & UTA & OKC \\ $\hat{p}$ & 0.000 & 0.000 & 0.002 & 0.002 & 0.014 & 0.014 & 0.016 & 0.020 & 0.022 & 0.024 & 0.028 & 0.030 & 0.030 & 0.038 & 0.042 \medskip \\ \hline & 16 & 17 & 18 & 19 & 20 & 21 & 22 & 23 & 24 & 25 & 26 & 27 & 28 & 29 & 30 \\ \hline Team & DAL & MIA & MIN & BOS & NYK & ORL & SAS & BKN & PHI & MIL & WAS & PHX & MEM & CHI & SAC \\ $\hat{p}$ & 0.044 & 0.046 & 0.054 & 0.056 & 0.058 & 0.064 & 0.104 & 0.106 & 0.130 & 0.134 & 0.144 & 0.148 & 0.170 & 0.210 & 0.442 \\ \hline \end{tabular}} \caption{Approximate p-values for $\text{H}^0$ vs. $\text{H}^{\text{A}}$ for each team's starting lineup in the 2016-17 NBA regular season.} \end{table} \label{tab:p-value} \subsection{Does LPL relate to offensive production?} \label{sec:dixon_coles} We next want to determine whether teams with lower LPL values tend to be more proficient on offense. In order to achieve greater discriminatory power, we've chosen to make this assessment at the game level. Specifically, we regress a team's total game score against their total LPL generated in that game, accounting for other relevant covariates including the team's offensive strength, the opponents' defensive strength, and home-court advantage. This framework is analogous to the model proposed in \cite{dixon1997modelling}. We calculate game LPL (GLPL) by first dividing the court into three broad court regions (restricted area, mid-range, and 3-pointers). Then, for a given game and lineup, we calculate GLPL in each of these court regions (indexed by $c$) by redistributing the lineup's observed vector of shot attempts using on a weighted average of each player's $\widehat{\pmb{\xi}}_j$: \begin{align} \text{GLPL}_c &= \sum_{j = 1}^5 \text{v}_c \cdot f_c(\widehat{\pmb{\xi}}_{j}) \cdot \big(A^*_{cj} - A_{cj}\big), ~~~ \text{where} ~~~ f_c(\widehat{\pmb{\xi}}_{j}) = \frac{\sum_{i \in c}w_{ij}\widehat{\xi}_{ij}}{\sum_{i \in c}w_{ij}}. \label{eq:fg_weight} \end{align} In (\ref{eq:fg_weight}), $w_{ij}$ is a weight proportional to player $j$'s total observed shot attempts in court cell $i$ over the regular season. The notation $\sum_{i \in c}$ means we are summing over all the 1 ft. by 1 ft. grid cells that are contained in court region $c$. Finally, for a given game $g$ and team $a$, we calculate the team's total game LPL (TGLPL) by summing $\text{GLPL}_c$ over all court regions $c$ and all lineups $\ell$: \begin{align} \text{TGLPL}_{ag} &= \sum_{\ell = 1}^{L_a} \sum_{c \in C} \text{GLPL}_c^{\ell} \label{eq:TGL} \end{align} where $C = \{\text{restricted area, mid-range, 3-pointers\}}$ and $L_a$ is the total number of team $a$'s lineups. This process is carried out separately for the home and away teams, yielding two TGLPL observations per game. Equipped with a game-level covariate measuring aggregate LPL, we model team $a$'s game score against opponent $b$ in game $g$ as \begin{align} \text{Score}_{abg} &= \mu + \alpha_a + \beta_b + \gamma \times \text{I}(\text{Home}_{ag}) + \theta \times \text{TGLPL}_{ag} + \epsilon_{abg} \label{eq:dixon_coles1}\\ \epsilon_{abg} &\sim N(0, \sigma^2), \label{eq:dixon_coles2} \end{align} where $\mu$ represents the global average game score, $\alpha_a$ is team $a$'s offensive strength parameter, $\beta_b$ is team $b$'s defensive strength parameter, $\gamma$ governs home court advantage, $\theta$ is the effect of TGLPL, and $\epsilon_{abg}$ is a normally distributed error term. $\theta$ is the parameter that we are primarily concerned with. We fit this model in a Bayesian framework using Hamiltonian Monte Carlo methods implemented in Stan \citep{carpenter2017stan}. Our prior distributions are as follows: $\mu \sim N(100, 10^2)$; $\alpha_a, \beta_b, \gamma, \theta \sim N(0, 10^2)$; $\sigma \sim Gamma(\text{shape} = 2, \text{rate} = 0.2)$. The 95\% highest posterior density interval for $\theta$ is (-1.08, -0.17) and the posterior mean is -0.62.\footnote{Figure \ref{fig:theta_posterior} in the appendix shows the posterior distribution of $\theta$.} Therefore, we estimate that for each additional lineup point lost, a team loses 0.62 actual points. Put differently, by shaving roughly 3 points off of their TGLPL, a team could gain an estimated 2 points in a game. Given that 10\% of games were decided by 2 points or less in the 2016-17 season, this could have a significant impact on a team's win-loss record and could even have playoff implications for teams on the bubble. Figure \ref{fig:game_lpl_team} shows the estimated density of actual points lost per game for every team's 82 games in the 2016-17 NBA regular season (i.e. density of $\widehat{\theta} \times \text{TGLPL}_{ag},~ g \in \{1,\ldots, 82\} \text{ for each team } a)$. Houston was the most efficient team, only losing about 1 point per game on average due to inefficient shot allocation. Washington, on the other hand, lost over 3 points per game on average from inefficient shot allocation. \begin{figure}[H] \centering \includegraphics[trim={0cm 0cm 0cm 0cm}, clip, width=1\textwidth]{game_lpl_team.pdf} \caption{Estimated density of actual points lost per game for every team's 82 games in the 2016-17 NBA regular season.} \label{fig:game_lpl_team} \end{figure} \subsection{How can LPL inform strategy?} At this point, we offer some ideas for how coaches might use these methods to improve their teams' offense. First, for lineups with high LPL, coaches could explore the corresponding PLC plots to ascertain which players are primarily responsible. If the coach determines that the LPL values do indeed represent areas of inefficiency, they could consider interventions targeting the player's shooting habits in these areas. This short-term intervention could be coupled with long-term changes to their practice routines; coaches could work with players on improving their FG\% in the areas shown by the PLC plots. Also, by exploring lineup PLC charts, coaches could identify systematic inefficiency in their offensive schemes, which could prompt changes either in whom to draw plays for or whether to change certain play designs altogether. Coaches are not the only parties who could gain value from these metrics; players and front office personnel could utilize them as well. Players could use PLC plots to evaluate their shooting habits and assess whether they exhibit over-confident or under-confident shot-taking behavior from certain areas of the court. Front office personnel may find trends in the metrics that indicate a need to sign players that better fit their coach's strategy. LPL and PLC could help them identify which players on their roster to shop and which players to pursue in free agency or the trade market. Consider these ideas in context of the Utah Jazz LPL/PLC charts for the 2016-17 regular season shown in Figure \ref{fig:UTA_PLC}. \begin{figure}[H] \centering \includegraphics[trim={0cm 3.5cm 0cm 3.2cm}, clip, width=1\textwidth]{LPL_mean_fg_UTA_1.pdf} \includegraphics[trim={0cm 4.6cm 0cm 3.4cm}, clip, width=1\textwidth]{PLC_mode_UTA_1.pdf} \caption{Utah Jazz 2016-17 starting lineup $\widehat{\text{LPL}}$, $\widehat{\text{LPL}}^{Shot}$, and $\widehat{\text{PLC}}^{Shot}$ surfaces.} \label{fig:UTA_PLC} \end{figure} On reviewing the LPL per shot plot for the starting lineup, the coach might flag the left baseline and top of the key as areas of potential inefficiency to investigate. On exploring the corresponding PLC plots, they would see Derrick Favors as the driving force behind the high LPL numbers from these regions. Interestingly, from the 2013-14 season through 2016-17, the Derrick Favors baseline and elbow jump shots were go-to plays for the Jazz. Across these four seasons, Favors took over 1500 mid-range shots for an average of 0.76 points per shot (PPS). In the 2017-18 and 2018-19 seasons, the Jazz drastically altered Favors' shot policy from the mid-range. Beginning in 2017, the Jazz started focusing on running plays for 3-pointers and shots at the rim, a trend that was becoming popular throughout the league. As part of this change in play-style, they tried turning Favors into a stretch four\footnote{A stretch four is a player at the power forward position that can generate offense farther from the basket than a traditional power forward.}; he went from taking a total of 21 3-point shots over the previous four seasons, to 141 3-point shots in these two seasons alone. Unfortunately, their intervention for Favors appears to have been misguided; his average PPS for these 141 shots was 0.66. The front office eventually determined that Favors wasn't the best fit for their coach's offensive strategy; they opted not to re-sign Favors at the end of the 2019 season. This process took place over six years—perhaps it could have been expedited had LPL and PLC been available to the coaches and front office staff. \subsection{Is minimizing LPL always optimal?} While we have demonstrated that lower LPL is associated with increased offensive production, we stress that LPL is a diagnostic tool that should be used to inform basketball experts rather than as a prescriptive measure that should be strictly adhered to in all circumstances. As mentioned previously, the LPL and PLC values presented in this paper are influenced by contextual variables that we are unable to account for because they are not available in public data sources, such as the shot clock and defensive pressure. Additionally, there are certain game situations where minimizing LPL may be sub-optimal. One such situation is illustrated in Figure \ref{fig:OKC_PLC}, which shows the $\text{PLC}^{Shot}$ surfaces for the Oklahoma City 2016-17 starting lineup. \begin{figure}[H] \centering \includegraphics[trim={0cm 4.5cm 0cm 4.2cm}, clip, width=1\textwidth]{PLC_map_fg_OKC_1.pdf} \caption{Oklahoma City 2016-17 starting lineup $\widehat{\text{PLC}}^{Shot}$ surfaces.} \label{fig:OKC_PLC} \end{figure} The first panel from the left in this figure shows positive PLC values for Russell Westbrook in the corner 3-point regions, suggesting that Westbrook should be taking more shots from these areas. However, anyone who watched the Thunder play that season will know that many of these corner 3-point opportunities were created by Westbrook driving to the basket, drawing extra defenders toward him, then kicking the ball out to an open teammate in the corner. Obviously, Westbrook cannot both drive to the rim and simultaneously pass to himself in another area of the court. In this case, strictly minimizing LPL would reduce the number of these drive-and-kick plays, potentially attenuating their offensive firepower. Shot-creation is not accounted for by LPL and should be carefully considered when exploring LPL and PLC. There are game theoretic factors to be considered as well. Beyond the defensive elements discussed in Section 4.1, rigid adherence to minimizing LPL could lead to a more predictable offense and thus make it easier to defend \citep{damour2015}. Needless to say, offensive game-planning should be informed by more than LPL metrics alone. \section{Conclusion} Our research introduces novel methods to evaluate allocative efficiency spatially and shows that this efficiency has a real impact on game outcomes. We use publicly available data and have made an empirical demonstration of our methods available online, allowing our methods to be immediately accessible. Also, since LPL and PLC do not depend on specific models for FG\% and FGA rate, LPL and PLC could readily be calculated at G-league, NCAA, and international levels using a simplified model of FG\% and FGA rate. As most professional basketball teams have access to proprietary data, many of the contextual variables that we do not account for could be included in the FG\% and FGA rate models, which could make the proposed shot distribution proposed by LPL a more reliable optimum to seek. Additionally, by pairing LPL with play call data coaches could gain insight into the efficiency of specific plays. Even without access to these data, it may be possible to recreate some contextual features that aren't explicitly provided by the NBA's public-facing API. For instance, shot clock times could be reverse engineered using game clock times given in the play-by-play data. There are interesting academic questions that stem from this paper as well. Future studies could investigate the sensitivity of our metrics to model parameters that we fixed, such as the number of basis functions in the NMF and the number of neighbors in the CAR prior. We could also investigate the robustness of LPL to alternate FG\% models. As mentioned previously, we do not account for usage curves in our analysis. Doing so would turn LPL into a constrained optimization problem, which would be a fascinating challenge to tackle. Also, using LPL to inform player-specific shot policy changes, entire seasons could be simulated using the method in \cite{sandholtz2020transition} to quantify the impact of specific shot allocation changes on point production. We hope that the methods introduced in this paper will be built upon and improved. \newpage \section{Appendix} \subsection{Empirical Implementation} \label{sec:empirical_example} To illustrate some important considerations associated with this approach, we present a brief example of LPL and PLC using empirical FG\% and FGA rates. This example demonstrates that these quantities are agnostic to the underlying FG\% model. We examine the same lineup for the Cavaliers that is discussed in the main text. In order to obtain FG\% and FGA rate estimates, we divide the court into twelve discrete regions and calculate the empirical values for each player within these regions. We defined these regions based on our understanding of the court, but it is worth noting that defining these regions requires many of the same considerations as with any histogram style estimator; namely, that increasing the number of regions will decrease bias at the expense of increasing variance. In some cases, a player may have only one or two shots within an area, resulting in either unrealistically high or low field goal percentage estimates. As an \textit{ad hoc} solution to this, we give all players one made field goal and five field goal attempts within each region, which means that players with just a handful of shots in a region will have their associated field goal percentage anchored near 20 percent. Rather than perform smoothing for the field goal attempt estimates, we simply count up the number of attempts for each player within each section, and normalize them to get the attempts per 36 minutes, as before. With these FG\% and FGA estimates, we can replicate the analysis detailed in Section 3. Figure \ref{fig:empirical_ranks} shows the empirical ranks for this lineup, as well as the rank correspondence. Generally, it shows the same patterns as the model-based analysis in Figures \ref{fig:example_fga_ranks} and \ref{fig:example_rank_corr}. However, there are some key differences, including Tristan Thompson having a higher field goal percentage rank from the right midrange and a corresponding reduction in rank for Kevin Love in the same area. This pattern is also manifest in Figure \ref{fig:empirical_lpl}, which shows the empirical LPL. We observe that most lineup points appear to be lost in the right midrange and in above the break three point shots. Finally, considering the empirical PLC in Figure \ref{fig:empirical_lpl}, we notice that in addition to the Love-Thompson tradeoff in the midrange, JR Smith appears to be overshooting from the perimeter, while Kyrie Irving and LeBron James both exhibit undershooting. The persistence of the Love-Thompson connection in the midrange in this empirical analysis, and its divergence from what we saw in the model based analysis, merits a brief discussion. Kevin Love and Tristan Thompson both had a low number of shots from the far-right midrange region, with Love shooting 8 for 26 and Thompson shooting 4 for 6. Because they both shot such a low amount of shots, even with the penalty of one make and four misses added to each region, Thompson appears far better. This highlights the fact that although LPL and PLC are model agnostic, the underlying estimates for field goal percentage do matter and raw empirical estimates alone may be too noisy to be useful in calculating LPL. One simple solution may be to use a threshold and only consider players in a region if the number of their field goal attempts passes that threshold. \begin{figure}[H] \centering \includegraphics[trim={0cm .4cm 0cm 0cm}, clip, width=.85\textwidth]{empirical_ranks.pdf} \caption{Top: Empirical FG\% ranks for the Cleveland Cavaliers starting lineup. Middle: Empirical FGA ranks. Bottom: Rank correspondence.} \label{fig:empirical_ranks} \end{figure} \begin{figure}[H] \centering \includegraphics[trim={0cm 2.5cm 0cm 2.5cm}, clip, width=.85\textwidth]{empirical_lpl.pdf} \includegraphics[trim={0cm 2.5cm 0cm 2cm}, clip, width=.85\textwidth]{empirical_plc.pdf} \caption{Top: Empirical LPL and $\text{LPL}^{Shot}$ for the Cleveland Cavaliers starting lineup. Bottom: Empirical PLC for the Cleveland Cavaliers starting lineup.} \label{fig:empirical_lpl} \end{figure} \subsection{Additional Figures} \begin{figure}[H] \begin{center} \includegraphics[trim={3cm 12.1cm 3cm 0cm}, clip, width=.9\textwidth]{ranks_low_med_up_CLE_1.pdf} \includegraphics[trim={3cm 6.25cm 3cm 6.7cm}, clip, width=.9\textwidth]{ranks_low_med_up_CLE_1.pdf} \includegraphics[trim={3cm 0cm 3cm 12.5cm}, clip, width=.9\textwidth]{ranks_low_med_up_CLE_1.pdf} \end{center} \caption{Top: 20\% quantiles of the Cleveland Cavaliers starting lineup posterior distributions of FG\% ranks. Middle: medians of these distributions. Bottom: 80\% quantiles.} \label{fig:example_fgp_ranks} \end{figure} \begin{figure}[H] \centering \includegraphics[trim={0cm 0cm 0cm 0cm}, clip, width=.6\textwidth]{CLE_lpl_distribution.pdf} \caption{Histogram of $\sum_{i=1}^\text{M} \text{LPL}_i$ for the Cleveland Cavaliers starting lineup. 500 posterior draws from each $\xi_{ij}$, where $i \in \{1\ldots,M\} \text{ and } j \in \{1,\ldots,5\}$, were used to compute the 500 variates of $\sum_{i=1}^M \text{LPL}_i$ comprising this histogram.} \label{fig:CLE_lpl_distribution} \end{figure} \vspace{-.25in} \begin{figure}[H] \begin{center} \includegraphics[trim={0cm 0cm 0cm 0cm}, clip, width=1\textwidth]{LPL_uncertainty_CLE_1.pdf} \end{center} \caption{Left: 20\% quantile $\text{LPL}$ surfaces for the Cleveland Cavaliers starting lineup. Middle: median $\text{LPL}$ surfaces. Bottom: 80\% quantile $\text{LPL}$ surfaces. The top rows show $\text{LPL}^{36}$ while the bottom rows show $\text{LPL}^{\text{Shot}}$.} \label{fig:LPL_uncertainty_CLE_1} \end{figure} \vspace{-.25in} \begin{figure}[H] \begin{center} \includegraphics[trim={0cm .5cm 0cm .5cm}, clip, width=.7\textwidth]{theta_posterior.pdf} \end{center} \caption{Posterior distribution of the effect for TGLPL in model (\ref{eq:dixon_coles1})-(\ref{eq:dixon_coles2}) described in Section \ref{sec:dixon_coles}.} \label{fig:theta_posterior} \end{figure} \bibliographystyle{agsm}
{ "attr-fineweb-edu": 2.089844, "attr-cc_en_topic": 0, "domain": "arxiv" }
BkiUcKnxK6nrxqmoycmY
\section{Introduction} According to recent studies\footnote{\url{http://www.researchmoz.us/sports-analytics-market-shares-strategy-and-//forecasts-worldwide-2015-to-2021-report.html}}, the sport analytics market was worth 125 million dollars in 2014. Current predictions expect it to reach 4.7 billion dollars by 2021. Sport analytics is used to increase the team's competitive edge by gaining insight into the different aspects of its playing style and the performance of each of its players. For example, sports analytics was a major component of Germany's successful World Cup 2014 campaign \hide{(http://blogs.wsj.com/cio/2014/07/10/germanys-12th-man-at-the-world-cup-big-data/)}. Another important application of sports analytics is to improve scouting by identifying talented prospects in junior leagues and assessing their competitive capabilities and potential fit in a future team's roster. Sports analytics are also beneficial in fantasy leagues, giving fantasy players access to statistics that can enhance their game play. Even more impressive is the global sports betting market, which is worth up to trillion dolloars according to Statista\footnote{http://www.statista.com/topics/1740/sports-betting/}. One can imagine the value of an algorithm that can predict who will win a particular match. Core to most analytics is the ability to automatically extract valuable information from video. Being able to identify team formations and strategies as well as assessing the performance of individual players is reliant upon understanding where the actions are taking place in 3D space. Most approaches to player detection \cite{Okuma2004,Tong2011,Okuma2013,Lu2013}, game event recognition \cite{Gao2011}, and team tactical analysis \cite{Niu2012,Franks2015,Liu2006} perform field localization by either semi-manual methods \cite{Kim2000,Yamada2002,Farin2003,Watanabe2004,Fei2007,Gupta2011a,Okuma2004a,Dubrofsky2008,Hess2007} or by obtaining the game data from fixed and calibrated camera systems installed around the venue. In this paper, we tackle the challenging task of field localization as applied to a single broadcast image. We propose a method that requires no manual initialization and is applicable to any video of the game recorded with a single camera. The input to our system is a single image and the 3D model of the field, and the output is the mapping that takes the image to the model as illustrated in Fig. \ref{fig:motivation}. In particular, we frame the field localization problem as inference in a Markov Random Field. We parametrize the field in terms of four rays, cast from two automatically detected horizontal vanishing points. The rays correspond to the outer lines of the field and thus define the field's precise localization. Our MRF energy uses several potentials that exploit semantic segmentation of the image in terms of ``grass'', as well as agreement between the lines found in the image and those defined by the known model of the field. All of our potentials can be efficiently computed. We perform inference with branch-and-bound, achieving on average 0.7 seconds running time per frame. The weights in our MRF are learned using structure SVM \cite{Tsochantaridis2005}. We focus our efforts in the game of soccer as it is more challenging than other sports, such as hockey or basketball. A hockey rink or a basketball court are much smaller compared to a soccer field and are in a closed venue. In contrast, a soccer field is usually in an open stadium exposed to different weather and lightning conditions which might create difficulties in identifying the important markings of the field. Furthermore, the texture and pattern of the grass in a soccer field differs from one stadium to another in comparison to say a hockey rink which is always white. We note however that our method is sports agnostic and is easily extendable as long as the sport venue has known dimensions and primitive markings such as lines and circles. To evaluate our method, we collected a dataset of 259 images from 12 games in the World Cup 2014. We report the Intersection over Union (IOU) scores of our method against the ground truth, and show very promising results. In the following, we start with a discussion of related literature, and then describe our method. Experimental section provides an exhaustive evaluation of our method, and we finish with a conclusion and a discussion of future work. \vspace{-3mm} \section{Related Work} A variety of approaches have been developed in industry and academia to tackle the field localization problem. In the industrial setting, companies such as Pixelot and Prozone have proposed a hardware approach to field localization by developing advanced calibrated camera systems that are installed in a sporting venue. This requires expensive equipment, which is only possible at the highest performance level. Alternatively, companies such as Stathleates rely entirely on human workers for establishing the homography between the field and the model for every frame of the game. \begin{figure}[t] \vspace{-0.5cm} \centering \includegraphics[scale=0.30]{images/new/img1-eps-converted-to.pdf} \vspace{-0.3cm} \caption{We seek to find the mapping $H$ (a homography) that takes the image to the geometric model of the field.} \vspace{-0.3cm} \label{fig:motivation} \end{figure} In the academic setting, the common approach to field registration is to first initialize the system by either searching over a large parameter space (e.g. camera parameters) or by manually establishing a homography for various representative keyframes of the game and then propagating this homography throughout the consecutive frames. In order to avoid accumulated errors, the system needs to be reinitialized by manual intervention. Many methods have been developed which exploit geometric primitives such as lines and/or circles to estimate the camera parameters\cite{Kim2000,Yamada2002,Farin2003,Watanabe2004,Fei2007}. These approaches rely on hough transforms or RANSAC and require manually specified color and texture heuristics. An approach to limit the search space of the camera parameters is to find the two principal vanishing points corresponding to the field lines~\cite{Hayet2004a,Hayet2007} and only look at the lines and intersection points that are in accordance with these vanishing points and which satisfy certain cross ratios. The efficacy of the method was demonstrated only on goal areas where there are lots of visible lines. However, this approach faces problems for views of the centre of the field, where there are usually fewer lines and thus one cannot estimate the vanishing point reliably. In~\cite{Suat2007}, the authors proposed an approach that matches images of the game to 3D models of the stadium for initial camera parameter estimation \cite{Suat2007}. However, these 3D models only exist in well known stadiums, limiting the applicability of the proposed approach. Recent approaches, applied to Hockey, Soccer and American Football \cite{Gupta2011a,Okuma2004a,Dubrofsky2008,Hess2007} require a manually specified homography for a representative set of keyframe images per recording. In contrast, in this paper we propose a method that only relies on images taken from a single camera. Also no temporal information or manual initialization is required. Our approach could be used, for example in conjunction with \cite{Gupta2011a,Okuma2004a} to automatically produce smooth high quality field estimates from video. \begin{figure}[t] \vspace{-0.5cm} \centering \subfloat[]{ \includegraphics[width=0.5\linewidth]{images/new/modelPerpAndGrid1-eps-converted-to.pdf} } \subfloat[]{ \includegraphics[width=0.5\linewidth]{images/new/modelPerpAndGrid2-eps-converted-to.pdf} } \vspace{-0.3cm} \caption{(a) Field parametrization in terms of 4 rays $y_i$. (b) The grid } \label{fig:field_image} \vspace{-1mm} \end{figure} \section{3D Soccer Field Registration} The goal of this paper is to automatically compute the transformation between a broadcast image of a soccer field, and the 3D geometric model of the field. In this section, we first show how to parameterize the problem by making use of the vanishing points, reducing the effective number of degrees of freedom to be estimated. We then formulate the problem as energy minimization in a Markov random field that encourages agreement between the model and the image in terms of grass segmentation as well as the location of the primitives (i.e., lines and ellipses) that define the soccer field. Furthermore, we show that inference can be solved exactly very efficiently via branch and bound. \vspace{-2mm} \subsection{Field Model and Parameterization} Assuming that the ground is planar, a soccer field can be represented by a 2D rectangle embedded in a 3D space. The rectangle can be defined by two long line segments referred to as touchlines and two shorter line segments, each behind a goal post, referred to as goallines. Each soccer field has also a set of vertical and horizontal lines defining the goal areas, the penalty boxes, and the midfield. \hide{\raquel{ what else?}} Additionally, a full circle and two semicircles are also highlighted which define distances that opposing players should maintain from the ball at kickoff \hide{\raquel{ define this as well}}. We refer the reader to Fig. \ref{fig:motivation} for an illustration of the geometric field model. The transformation between the field in the broadcast image and our 3D model can be parameterized with a homography $H$, which is a $3\times 3$ invertible matrix defining a bijection that maps lines to lines between 2D projective spaces \cite{Hartley2004}. The matrix $H$ has 8 degrees of freedom and encapsulates the transformation of the broadcast image to the soccer field model. A common way to estimate this homography is by detecting points and lines in the image and associating them with points and lines in the soccer field model. Given these correspondences, the homography can be estimated in closed form using the Direct Linear Transform (DLT) algorithm \cite{Hartley2004}. While a closed form solution is very attractive, the problem lies on the fact that the association of lines/points between the image and the soccer model is not known a priori. Thus, in order to solve for the homography, one needs to evaluate all possible assignments. As a consequence DLT-like algorithms are typically used in the scenario where a nearby solution is already known (from a keyframe or previous frame), and search is done over a small set of possible associations. \begin{figure}[t] \vspace{-0.4cm} \centering \subfloat[]{ \includegraphics[width=0.5\linewidth]{images/new/grassPotentials-eps-converted-to.pdf} } \subfloat[]{ \includegraphics[width=0.5\linewidth]{images/new/grassUpper-eps-converted-to.pdf} } \vspace{-4mm} \caption{(a) In each plot, the green area correspond to grass and the grey area to non-grass pixels. The field $F_y$ is the region inside the highlighted lines. The yellow region is the percentage of counted grass/non-grass pixels. (b) The red line is the largest possible field and the blue line is the smallest field.} \label{fig:grass_image} \end{figure} In this paper, we follow a very different approach, which jointly solves for the association and the estimation of the homography. Towards this goal, we first reduce the effective number of degrees of freedom of the homography. In an image of the field, parallel lines intersect at two orthogonal vanishing points. If we can estimate the vanishing points reliably we can reduce the number of degree of freedom from 8 to 4. We defer the discussion about how we estimate the vanishing points to section \ref{sec:VP}. For convenience of presentation, we refer to the lines parallel to the touchlines as horizontal lines, and the lines parallel to the goallines as vertical lines. Let $x$ be an image of the field. Denote by $vp_{V}$ and $vp_{H}$ the (orthogonal) vertical and horizontal vanishing points respectively. Since a football stadium conforms to a Manhattan world, there also exists a third vanishing point which is orthogonal to both $vp_{V}$ and $vp_{H}$. We omit this third vanishing point from our model since there are usually not many lines enabling us to compute it reliably. We define a hypothesis field by four rays emanating from the vanishing points. The rays $y_1$ and $y_2$ originate from $vp_{H}$ and correspond to the touchlines. Similarly, the rays $y_3$ and $y_4$ originate from $vp_{V}$ and correspond to the goallines. As depicted in Fig. \ref{fig:field_image}, a hypothesis field is constructed by the intersection of the four rays. Let the tuple $y = (y_1, \dots, y_4) \in \mathcal{Y}$ be the parametrization of the field, where we have discretized the set of possible candidate rays. Each ray $y_i$ falls in an interval $[y_{i,min}^{init}, y_{i,max}^{init}]$ and $\mathcal{Y}=\prod_{i=1}^4\set{[y_{i,min}^{init}, y_{i,max}^{init}]}$ is the product space of these four integer intervals. Thus $\mathcal{Y}$ corresponds to a grid. \vspace{-2mm} \subsection{Field Estimation as Energy Minimization} In this section, we parameterize the problem as the one of inference in a Markov random field. In particular, given an image $x$ of the field, we obtain the best prediction $\hat{y}$ by solving the following inference task: \begin{equation} \hat{y} = \arg\max_{y \in \mathcal{Y}}\, w^T \phi (x,y) \label{eq:inference} \end{equation} with $\phi(x,y)$ a feature vector encoding various potential functions and $w$ the set of corresponding weights which we learn using structured SVMs \cite{Tsochantaridis2005}. In particular, our energy defines different potentials encoding the fact that the field should contain mostly grass, and high scoring configurations prefer the projection of the field primitives (i.e., lines and circles) to be aligned with the detected primitives in the image (i.e. detected line segments and conic edges). In the following we discuss the potentials in more detail. \vspace{-3mm} \subsubsection{Grass Potential:} This potential encodes the fact that a soccer field is made of grass. We perform semantic segmentation of the broadcast image into grass vs. non-grass. Towards this goal, we exploit the prediction from a CNN trained using DeepLab \cite{chen14semantic} for our binary segmentation task. Given a hypothesis field $y$, let $F_y$ denote the field restricted to the image $x$. We would like to maximize the number of grass pixels in $F_{y}$. Hence, we define a potential function, denoted by $\phi_{grass-in}(x,y)$, that counts the percentage of total grass pixels that fall inside the hypothesis field $F_y$. However, note that for any hypothesis $y'$ with $F_{y} \subset F_{y'}$, $F_{y'}$ would have at least as many grass pixels as $F_{y}$. This introduces a bias towards hypotheses that correspond to zoom-in cameras. We thus define three additional potentials such that we try to minimize the number of grass pixels outside the field $F_y$ and the number of non-grass pixels inside $F_y$, while maximizing the number of non-grass pixels outside $F_{y}$. We denote these potentials as $\phi_{grass-out}(x,y)$, $\phi_{non-grass-out}(x,y)$ and $\phi_{non-grass-in}(x,y)$ respectively. We refer the reader to Fig. \ref{fig:grass_image} for an illustration. \vspace{-3mm} \subsubsection{Lines Features:} The observable lines corresponding to the white marking of the soccer field provide strong clues on the location of the touchlines and goallines. This is because their positions and lengths must always adhere to the FIFA specifications. In a soccer field there are 7 vertical and 10 horizontal line segments as depicted in Fig.~\ref{fig:motivation}. Using the line detector of~\cite{Rafael2012}, we find all the line segments in the image and also the vanishing points as described in section~\ref{sec:VP}. A byproduct of our vanishing point estimation procedure is that each detected line segment is assigned to $vp_H$, $vp_V$ or none (e.g. line segments that fall on the ellipse edges) as demonstrated in Fig. \ref{fig:vp_grass}. We then define a scoring function $\phi_{\ell_i}(x,y)$ for each line $\ell_i$, $i=1,\dots,17$ that is large when the image evidence agrees with the predicted line position obtained by reprojecting the model using the hypothesis $y$. The exact reprojection can be easily obtained by using the invariance property of cross ratios~\cite{Hartley2004}, Fig. \ref{fig:cross-ratios}(a). Giving the exact position of a line $\ell_i$ on the grid $\mathcal{Y}$, the score $\phi_{\ell_i}(x,y)$ counts the percentage of line segment pixels that are aligned with the same vanishing point, Fig. \ref{fig:cross-ratios}(b). We refer the reader to the suppl. material for more in details. \begin{figure}[t] \vspace{-5mm} \centering \subfloat[]{ \includegraphics[width=0.24\linewidth]{images/new/img34-eps-converted-to.pdf} } \subfloat[]{ \includegraphics[width=0.24\linewidth]{images/new/img34Grass-eps-converted-to.pdf} } \subfloat[]{ \includegraphics[width=0.24\linewidth]{images/new/img39-eps-converted-to.pdf} } \subfloat[]{ \includegraphics[width=0.24\linewidth]{images/new/img39Grass-eps-converted-to.pdf} } \vspace{-3mm} \caption{(a),(c) Two images of the game. The detected yellow and magenta line segments correspond to $vp_V$ and $vp_H$ respectively. The blue line segments do not correspond to any vanishing point. (b),(d) The grass segmentation results for the images in (a)/(c)} \label{fig:vp_grass} \vspace{-3mm} \end{figure} \begin{figure}[t] \vspace{-0.6cm} \centering \subfloat[]{ \includegraphics[width=0.66\linewidth]{images/new/crossRatios-eps-converted-to.pdf} } \subfloat[]{ \includegraphics[width=0.33\linewidth]{images/new/crossRatios2-eps-converted-to.pdf} } \vspace{-0.3cm} \caption{(a) For line $\ell$ (red line) in the model, the cross ratio $CR = BD/BC$ must equal the cross ratio of the projection of $\ell$ on the grid given by $CR' = (A'C' \cdot B'D')/(BC'\cdot A'D')$. The projection of the endpoints of $\ell$ are computed similarly. (b) For vertical line $\ell$, the potential $\phi_{\ell}(x,y)$ counts the percentage of $vp_V$ line pixels in the yellow region for which the vertical sides are one ray away from the ray on which $\ell$ falls upon.} \label{fig:cross-ratios} \end{figure} \vspace{-3mm} \subsubsection{Circle Potentials:} \label{sec:circle} A soccer field has white markings corresponding to a full circle centered at the middle of the field and two circular arcs next to the penalty area, all three with the same radius. When the geometric model of the field undergoes a homography $H$, these circular shapes transform to conics in the image. Similar to the line potentials, we seek to construct potential functions that count the percentage of supporting pixels for each circular shape given a hypothesis field $y$. These supporting pixels are edge pixels that do not fall on any line segments belonging to $vp_V$ or $vp_H$. Unlike the projected line segments, the projected circles are not aligned with the grid $\mathcal{Y}$. However, as shown in Fig. \ref{fig:circle-pot}, we note that there are two unique inner and outer rectangles for each circular shape in the model which transform in the image $x$ to quadrilaterals aligned with the vanishing points. Their position in the grid can be computed similarly to lines using cross ratios. We define a potential $\phi_{C_i}(x,y)$ $i=1,2,3$ for each conic which simply counts the percentage of (non horizontal/vertical) line pixels inside the region defined by the two quadrilaterals. \begin{figure}[t] \vspace{-0.2cm} \centering \includegraphics[scale=0.30]{images/new/circlePot-eps-converted-to.pdf} \vspace{-0.6cm} \caption{For each circle $C$ in the model, the projections of the inner (red) and outer (blue) quadrilaterals can be obtained using cross ratios. The potential $\phi_{C}(x,y)$ is the percentage of non-vp line pixels in the yellow region.} \label{fig:circle-pot} \end{figure} \vspace{-2mm} \section{Exact Inference via Branch and Bound} Note that the cardinality of our configuration space $\mathcal{Y}$, i.e. the number of hypothesis fields, is of the order $O(N_H^2N_V^2)$, which is a very large number. In this section, we show how to solve the inference task in Eq.~\eqref{eq:inference} efficiently and exactly. Towards this goal, we design a branch and bound \cite{Lampert2009} (BBound) optimization over the space $\mathcal{Y}$ of all parametrized soccer fields. We take advantage of generalizations of integral images to 3D \cite{Schwing2012a} to compute our bounds very efficiently. Our BBound algorithm thus requires three key ingredients: \begin{enumerate} \item A branching mechanism that can divide any set into two disjoint subsets of parametrized fields. \item A set function $\bar{f}$ such that $\bar{f}(Y) \geq \max_{y\in Y} w^t\phi(x, y)$. \item A priority queue which orders sets of parametrized fields $Y$ according to $\bar{f}$. \end{enumerate} In what follows, we describe the first two components in detail. \vspace{-2mm} \subsection{Branching} Suppose that $Y = \prod_{i=1}^4[y_{i,min}, y_{i,max}] \subset \mathcal{Y}$ is a set of hypothesis fields. At each iteration of the branch and bound algorithm we need to divide $Y$ into two disjoint subsets $Y_1$ and $Y_2$ of hypothesis fields. This is achieved by dividing the largest interval $[y_{i,min}, y_{i,max}]$ in half and keeping the other intervals the same. \begin{figure}[t] \vspace{-0.6cm} \centering \subfloat[]{ \includegraphics[width=0.5\linewidth]{images/new/lineUppper-eps-converted-to.pdf} \label{fig:line-bound} } \subfloat[]{ \includegraphics[width=0.5\linewidth]{images/new/ellipseUpper-eps-converted-to.pdf} \label{fig:circle-bound} } \label{fig:bounding} \vspace{-4mm} \caption{(a) Finding the lower and upper bounds for a line correspond respectively to the $\min$ and $\max$ operations. (b) The upper/lower bound for $\phi_{C_i}(x,y)$ is the percentage of non-vp line pixels in the yellow region which is restricted by the max/min outer quadrilateral and the min/max inner quadrilateral.} \vspace{-1mm} \end{figure} \vspace{-2mm} \subsection{Bounding} We need to construct a set function $\bar{f}$ that upper bounds $w^T\phi(x,y)$ for all $y \in Y$ where $Y \subset \mathcal{Y}$ is any subset of parametrized fields. Since all potential function components of $\phi(x,y)$ are positive proportions, we decompose $\phi(x,y)$ into potential with strictly positive weights and those with weights that are either zero or negative: \begin{align} w^T\phi(x,y) = w_{neg}^T\phi_{neg}(x,y)+ w_{pos}^T\phi_{pos}(x,y) \label{eq:all} \end{align} with $w_{neg}$, $w_{pos}$ the vector of negative and positive weights respectively. We define the upper bound on Eq. \eqref{eq:all} to be the sum of an upper bounds on the positive features and a lower bound on the negative ones, \begin{align} \bar{f}(Y) = w_{neg}^T\bar{\phi}^{neg}(x,Y)+ w_{pos}^T\bar{\phi}^{pos}(x,Y) \end{align} It is trivial to see that this is a valid bound. In what follows, we construct a lower bound and an upper bound for all the potential functions of our energy. \vspace{-2mm} \subsubsection{Bounds for the Grass Potential:} Let $y_{\cap} := (y_{1,max}, y_{2,min}, y_{3,max}, y_{4,min})$ be the smallest possible field in $Y$, and let $y_{\cup} := (y_{1,min}, y_{2,max}, y_{3,min}, y_{4,max})$ be the largest. We now show how to construct the bounds for $\phi_{grass-in}(x,y)$, and note that one can construct the other grass potential bounds in a similar fashion. Recall that $\phi_{grass-in}(x,y)$ counts the percentage of grass pixels inside the field. Since any possible field $y \in Y$ is contained within the smallest and largest possible fields $y_{\cap}$ and $y_{\cup}$ (Fig. \ref{fig:grass_image}b), we can define the the upper bound as the percentage of grass pixels inside the largest possible field and the lower bound as the percentage of grass pixels inside the smallest possible field. Thus: \[ \bar{\phi}_{grass-in}^{pos}(x,Y) = \phi_{grass-in}(x,y_{\cap}), \quad \bar{\phi}_{grass-in}^{neg}(x,Y) = \phi_{grass-in}(x,y_{\cup}) \] We refer the reader to Fig. \ref{fig:grass_image}(b) for an illustration. \vspace{-3mm} \subsubsection{Bounds for the Line Potentials:} We compute our bounds by finding a lower bound and an upper bound for each line independently. Since the method is similar for all the lines, we will illustrate it only for the left vertical penalty line $\ell$ of (Fig. \ref{fig:cross-ratios}a). For a hypothesis set of fields $Y$, we find the upper bound $\bar{\phi}_{\ell}^{pos}(x,Y)$ by computing the maximum value of $\phi_{\ell}(x,y)$ in the horizontal direction (i.e. along the rays from $vp_V$) but only for the maximal extended projection of $\ell$ in the vertical direction (i.e. along the rays from $vp_H$). This is demonstrated in (Fig. \ref{fig:line-bound}). Finding a lower bound consists instead of finding the minimum $\phi_{\ell}(x,y)$ for minimally extended projections of $\ell$. \hide{ Note that for any $y \in Y$, the ray segment $r$ from $vp_V$ on which $\ell'$ falls upon depends on the goalline rays $y_3$ and $y_4$. \raquel{why do you need $r$? why complicated notation with $q$ also?} Also, the projection $\ell'$ on $r$ is restricted by rays $q_1$ and $q_2$ from $vp_H$ which depend on $y_1$ and $y_2$. For any combination of $y_1 \in [y_{1,min}, y_{1,max}]$ and $y_2 \in [y_{2,min}, y_{2,max}]$ we have a ray $r \in \set{r_1, \dots, r_{n_\ell}}$. The most left ray $r_1$ belongs to the fields restricted by touchlines $y_{3,min}$ and $y_{4,min}$ and the most right ray $r_{n_{\ell}}$ belongs to the fields restricted by $y_{3,max}$ and $y_{4,max}$. The horizontal restrictions of $\ell'$ can be found in a similar fashion by computing the upper and lower bounds of the rays $q_1$ and $q_2$. This is depicted in \todo{(Ref to fig)}. We define the upper bound function $\bar{\phi}^{pos}_{\ell}$ for the line $\ell$ to be the maximum value $\phi_{\ell}$ can take on the rays $r_1$ to $r_{n_{\ell}}$ and restricted below and above by $q_{1,\min}$ and $q_{2,\max}$ respectively as depicted in figure \todo{(Ref to fig)}. A lower bound is found in a similar fashion. } Note that for a set of hypothesis fields $Y$, this task requires a linear search over all the possible rays in the horizontal (for vertical lines) at each iteration of branch and bound. However, as the branch and bound continues, the search space becomes smaller and finding the maximum becomes faster. \vspace{-3mm} \subsubsection{Bounds for the Circle Potentials:} Referring back to the definition of the ellipse potentials $\phi_{C_i}(x,y)$ provided in section~\ref{sec:circle} and a set of hypothesis fields $Y$, we aim to construct lower and upper bounds for each circle potential. For an upper bound, we simply let $\phi_{C_i}^{pos}(x,Y)$ be the percentage of non-vp line pixels contained in the region between the smallest inner and largest outer quadrilaterals as depicted in (Fig. \ref{fig:circle-bound}). A lower bound is obtained in a similar fashion. \subsection{Integral Accumulators for Efficient Potentials and Bounds} We construct five 2D accumulators corresponding to the grass pixels, non-grass pixels, horizontal line edges, vertical line edges, and non-vp line edges. In contrast to \cite{Viola2001}, and in the same spirit of \cite{Schwing2012a}, our accumulators are aligned with the two orthogonal vanishing points and count the fraction of features in the regions of $x$ corresponding to quadrilaterals restricted by two rays from each vanishing point. In this manner, the computation of a potential function over any region in $\mathcal{Y}$ boils down to four accumulator lookups. Since we defined all the lower and upper bounds in terms of their corresponding potential functions, we use the same accumulators to compute the bounds in constant time. \vspace{-3mm} \subsection{Learning} We use structured support vector machine (SSVM) to learn the parameters $w$ of the log linear model. Given a dataset composed of training pairs $\set{x^{(n)},y^{(n)}}_{i=1}^N$, we obtain $w$ by minimizing the following objective \begin{align} \min_{w}\frac{1}{2}\norm{w}^2 + \frac{C}{N}\sum_{n=1}^N \max_{y \in \mathcal{Y}}\big( \Delta(y^{(n)},\hat{y})+ w^T\phi(x^{(n)},\hat{y}) - w^T\phi(x^{(n)},y^{(n)}) \big) \label{eq:learning} \end{align} where $C>0$ is a regularization parameter and $\Delta:\mathcal{Y} \times \mathcal{Y} \to \ensuremath \mathbb{R}^{+}\cup\set{0}$ is a loss function measuring the distance between the ground truth labeling $y^{(n)}$ and a prediction $\hat{y}$, with $\Delta(y^{(n)},y) = 0$ if and only if $y = y^{(n)}$. In particular, we employ the parallel cutting plane implementation of \cite{Schwing2013}. The loss function is defined very similarly to $\phi_{grass-in}(x,y)$. However, instead of segmenting the image $x^{(n)}$ to grass vs. non-grass pixels, we segment the grid $\mathcal{Y}$ to field vs. non-field cells by reprojecting the ground truth field into the image. Then given a hypothesis field $y$, we define the loss for a training instance $(x^{(n)}, y^{(n)})$ to be \[ \Delta(y^{(n)},y) = 1 -\frac{\big(\text{\# of field cells in $F_y$}\big) + \big(\text{\# of non-field cells outside of $F_y$ }\big) }{\text{Number of cells in $\mathcal{Y}$ }} \] Note that the loss can be computed using integral accumulators, and loss augmented inference can be performed efficiently and exactly using our BBound. \vspace{-2mm} \section{Vanishing Point Estimation} \label{sec:VP} In a Manhattan world, such as a soccer stadium, there are three principal orthogonal vanishing points. Our goal is the find the two orthogonal vanishing points $vp_V$ and $vp_H$ that correspond to the lines of the soccer field. We forgo the estimation of the third orthogonal vanishing point since in a broadcast image of the field there are not usually many lines corresponding to this vanishing point. However, a reasonable assumption is to take the direction of the third vanishing point to be in the direction of gravity since the main camera rarely rotates. We find an initial estimate of the positions of $vp_H$ and $vp_V$ by deploying the line voting procedure of \cite{Hedau2009}. This procedure is robust when there are sufficiently enough line segments for each vanishing point. In some cases, for example when the camera is facing the centre of the field (Fig. \ref{fig:vp_grass}b), there might not be enough line segments belonging to $vp_V$ to estimate its position reliably but enough to distinguish its corresponding line segments. In this case, we take the line segments that belong to neither vanishing point and fit an ellipse \cite{Fitzgibbon1999} which is an approximation to the conic in the centre of the field. We then take the 4 endpoints of the ellipses' axes and also one additional point corresponding to the crossing of the ellipses' minor axis from the grass region to non-grass region to find an approximate homography which in turn gives us an approximate $vp_V$. \vspace{-2mm} \section{Experiments} For assessing our method, we recorded 12 games from the World Cup 2014 held in Brazil. Out of these games we annotated 259 images with the ground truth fields and also the grass segmentations. We used 6 games with 154 images for training and validation sets, and 105 images from 6 other games for the test set. The images consist of different views of the field with different grass textures. Some images, due to the rain, seem blurry and lack some lines. We remind the reader that these images do not have a temporal ordering. Out of the 259 images, the vanishing point estimation failed for 5 images in the training/validation set and for 3 images in the test set. We discarded these failure cases from our training and evaluation. In what follows we assess different components of our method. \vspace{-2mm} \paragraph{\bf Grass Segmentation:} is a major component of our method since it has its own potentials and is also used for restricting the set of detected line segments in the image to the ones that correspond to white markings of the field. Most of the existing approaches mentioned in the related work's section, use heuristics based on color and hue information to segment the image into grass vs. non-grass pixels. We found these heuristics to be unreliable at times since the texture and color of the grass can be different from one stadium to another. Moreover, at some games, the spectators wear clothing with similar colors to the grass which further makes the task of grass segmentation difficult. As a result, we fine-tune the CNN component of the DeepLab network \cite{chen14semantic} on the train/validation images annotated with grass and non-grass pixels. Our trained CNN grass segmentation method achieves an Intersection over Union (IOU) score of 0.98 on the test set. Some grass segmentation examples are shown in Fig.~\ref{fig:vp_grass}. \vspace{-2mm} \paragraph{\bf Ablation studies:} In Table~\ref{table:ablation} we present the IOU score of test images based on employing different potentials in our energy function. For each set of features we used the weights corresponding to the value of $C$ that maximizes the IOU score of the validation set. We notice that just including the grass potentials achieves a very low test IOU of 0.57. This is expected since grass potentials by themselves do not take into account the geometry of the field. However, when we include line and circle potentials, the test IOU increases by about 30\%. \begin{table}[t] \centering \caption{{\bf G} correspond to 4 weights for each grass potential. {\bf L}: all the lines share the same weight. {\bf C}: all the circles share the same weight. {\bf VerL}: all the vertical lines share the same weight. {\bf VerH}: all the horizontal lines share the same weight.} \label{table:ablation} \vspace{1mm} \addtolength{\tabcolsep}{6pt} \begin{tabular}{|l|l|l|} \hline Potentials & Mean Test IOU & Median Test IOU \\ \hhline{|=|=|=|} G & 0.57 & 0.56 \\ \hline G+L & 0.85 & 0.93 \\ \hline L+E & 0.88 & 0.94 \\ \hline G+L+C & 0.89 & 0.94 \\ \hline G+VerL+HerL+C & 0.90 & 0.94 \\ \hline \end{tabular} \vspace{-2mm} \end{table} \vspace{-2mm} \paragraph{\bf Comparison of Our Method to Two Baselines:} There is currently no baseline in the literature for automatic field localization in the game of soccer. As such, we derive two baselines based on our segmentation and line segment detection methods. As the first baseline, for each test image we retrieve its nearest neighbour (NN) image from the training/validation sets based on the grass segmentation IOU and apply the homography of the training/val image on the test image. The second baseline is similar but instead of the NN based on grass, we retrieve based on the distance transform computed from the edges \cite{meijster2002}. Note that these approaches could be considered similar to the keyframe initialization methods of~\cite{Gupta2011a,Okuma2004a,Dubrofsky2008,Hess2007}. In contrast to those papers, here we retrieve the closest homography from a set of different games. In Table \ref{table:baseline}, we compare the IOU of these baseline with our learned branch and bound inference method. We observe that if we use only the grass potentials, the baseline is similar to the NN with grass segmentation. Using NN with line segment detections improves the baseline. When we introduce potential functions based on lines, the IOU metric is increased by about 30\%. Our method with the best set of features outperform the baseline by about 34\%. The best set of features that achieve an IOU of 90\% have 4 weights for the grass potentials, one shared weight for the vertical lines, one shared weight for the horizontal lines, and similarly one shared weight for the circles. By releasing our dataset and the annotations, we hope that other baselines will be established. \begin{table}[t] \centering \caption{Comparison of the branch and bound inference method with two baselines} \label{table:baseline} \begin{tabular}{|c|c|c|} \hline method & Mean Test IOU & Median Test IOU \\ \hhline{|=|=|=|} Nearest Neighb. based on grass segmentation & 0.56 & 0.64 \\ \hline Nearest Neighb. based on lines distance transform & 0.59 & 0.66 \\ \hline our method with just grass potentials & $ 0.57$ & $ 0.56$ \\ \hline our method with line potentials & $\geq 0.85$ & $\geq 0.93$ \\ \hline our method best features (G+VerL+HorL+C) & 0.90 & 0.94 \\ \hline \end{tabular} \end{table} \paragraph{\bf Qualitative Results:} In Fig. \ref{fig:qual} we project the model on a few test images using the homography obtained with our best features (G+VerL+HorL+C). We also project the image on the model of the field. We observe great agreement between the image and the model. \begin{figure}[t!] \centering \subfloat{ \includegraphics[width=0.33\linewidth]{images/new/3-eps-converted-to.pdf} } \subfloat{ \includegraphics[width=0.33\linewidth]{images/new/2-eps-converted-to.pdf} } \subfloat{ \includegraphics[width=0.33\linewidth]{images/new/5-eps-converted-to.pdf} } \\ \subfloat{ \includegraphics[width=0.33\linewidth]{images/new/3Model-eps-converted-to.pdf} } \subfloat{ \includegraphics[width=0.33\linewidth]{images/new/2Model-eps-converted-to.pdf} } \subfloat{ \includegraphics[width=0.33\linewidth]{images/new/5Model-eps-converted-to.pdf} } \\ \subfloat{ \includegraphics[width=0.33\linewidth]{images/new/37-eps-converted-to.pdf} } \subfloat{ \includegraphics[width=0.33\linewidth]{images/new/32-eps-converted-to.pdf} } \subfloat{ \includegraphics[width=0.33\linewidth]{images/new/50-eps-converted-to.pdf} } \\ \subfloat{ \includegraphics[width=0.33\linewidth]{images/new/37Model-eps-converted-to.pdf} } \subfloat{ \includegraphics[width=0.33\linewidth]{images/new/32Model-eps-converted-to.pdf} } \subfloat{ \includegraphics[width=0.33\linewidth]{images/new/50Model-eps-converted-to.pdf} }\\ \subfloat{ \includegraphics[width=0.33\linewidth]{images/new/86-eps-converted-to.pdf} } \subfloat{ \includegraphics[width=0.33\linewidth]{images/new/89-eps-converted-to.pdf} } \subfloat{ \includegraphics[width=0.33\linewidth]{images/new/45-eps-converted-to.pdf} } \\ \subfloat{ \includegraphics[width=0.33\linewidth]{images/new/86Model-eps-converted-to.pdf} } \subfloat{ \includegraphics[width=0.33\linewidth]{images/new/89Model-eps-converted-to.pdf} } \subfloat{ \includegraphics[width=0.33\linewidth]{images/new/45Model-eps-converted-to.pdf} } \caption{Some examples of the obtained homography. The yellow lines correspond to the projection of the model lines on the images. The image is also projected on the model using the homography.} \label{fig:qual} \end{figure} \paragraph{\bf Failure Modes:} Fig. \ref{fig:fail} shows failure modes which are mainly due to errors due to failure of the circle potential. \begin{figure}[t] \centering \subfloat{ \includegraphics[width=0.33\linewidth]{images/new/99-eps-converted-to.pdf} } \subfloat{ \includegraphics[width=0.33\linewidth]{images/new/101-eps-converted-to.pdf} } \subfloat{ \includegraphics[width=0.33\linewidth]{images/new/102-eps-converted-to.pdf} } \\ \subfloat{ \includegraphics[width=0.33\linewidth]{images/new/99Model-eps-converted-to.pdf} } \subfloat{ \includegraphics[width=0.33\linewidth]{images/new/101Model-eps-converted-to.pdf} } \subfloat{ \includegraphics[width=0.33\linewidth]{images/new/102Model-eps-converted-to.pdf} } \caption{Three failure examples where the homography is not correctly estimated} \label{fig:fail} \end{figure} \paragraph{\bf Speed and Number of Iterations.} For the best set of features (denoted with G+VerL+VerH+C in Table~\ref{table:ablation}), it takes on average 0.7 seconds (a median of 0.5) to perform inference and on average 2964 BBound iterations (with median of 1848 iterations). Times clocked on one core of AMD Opteron 6136. \section{Conclusion and Future Work} In this paper, we presented a new framework for fast and automatic field localization as applied to the game of soccer. We framed this problem as a branch and bound inference task in a Markov Random Field. We evaluated our method on collection of broadcast images recorded from World Cup 2014. As was mentioned, we do not take into account temporal information in our energy function. For future work, we intend to construct temporal potential functions and evaluate our method on video sequences. We also plan to incorporate player detection and tracking in our framework. Finally, we aim to extend our method to other team sports such as hockey, basketball, rugby and American Football. \bibliographystyle{splncs}
{ "attr-fineweb-edu": 2.105469, "attr-cc_en_topic": 0, "domain": "arxiv" }
BkiUcerxK6EuNCwyulV0
\section{Introduction} Unpredictability of play calls is widely accepted to be a key ingredient to success in the NFL. For example, according to several players of the 2017 Dallas Cowboys, being too predictable regarding their play calling may have been one reason for their elimination from the playoff contention of the 2017 NFL season. Being unpredictable hence is desirable, and, vice versa, it is clearly also of interest to be able to accurately predict the opponent's next play call. In earlier studies, play call predictions were carried out by simple arithmetics, such as calculating the relative frequencies of runs and passes of previous matches \citep{heiny2011predicting}. Driven by the availability of play-by-play NFL data, several studies considered statistical models for play call predictions. These studies can be divided in those where play-by-play data only is considered (see, \citealp{heiny2011predicting, teich2016nfl}) and those who consider additional data on the players on the field, such as the number of offensive players for a certain position and player ratings (see \citealp{leepredicting, joashpredicting}). The former report prediction accuracy of about 0.67, whereas the latter provide prediction accuracy of about 0.75. However, most of these studies use basic statistical models, e.g.\ linear discriminant analysis, logistic regression, or decision trees, which do not account for the time series structure of the data at hand. This contribution considers HMMs for modelling and forecasting NFL play calls. In the recent past, HMMs have been applied in different areas of research for forecasting, including stock markets (see, e.g., \citealp{de2013dynamic, dias2015clustering}), environmental science (see, e.g., \citealp{chambers2012earthquake, tseng2020forecasting}) and political conflicts \citep{schrodt2006forecasting}. Within HMMs, the observations are assumed to be driven by an underlying state variable. In the context of play calling, the underlying states serve as a proxy for the team's current propensity to make a pass (as opposed to a run). The state sequence is modelled as a Markov chain, thereby inducing correlation in the observations and hence accounting for the time series structure of the data. HMMs are fitted to data from seasons 2009 to 2017 to predict the play calls for season 2018. In practice, these predictions are helpful for defense coordinators to make adjustments in real time on the field. Offense coordinators may also benefit from these models, since they allow them to check the predictability of their own play calls. This paper is organised as follows: Section \ref{chap:data} describes the the play-by-play data and provides exploratory data analysis. Section \ref{chap:methods} explains HMMs in furhther detail, and section \ref{chap:results} presents the results. \section{Data}\label{chap:data} The data for predicting play calls in the NFL were taken from \url{www.kaggle.com}, covering (almost) all plays of regular season matches between 2009 to 2018. In total, $m = 2,526$ matches are considered\footnote{The data comprises 2,526 regular-season matches out of 2,560 matches which have taken place in the time period considered.}, each of which is split up into two time series (one for each team's offense), totalling in 5,052 time series containing 318,691 plays. The observed time series $\{y_{m,p}\}_{p=1,\ldots,P_m}$ indicates whether a run or a pass play has been called in the $p$-th play in match $m$, with $$ y_{m,p} = \begin{cases} 1, & \text{if $p$--th play is a pass;} \\ 0, & \text{otherwise} \end{cases} $$ and $P_m$ denoting the total number of plays in match $m$. For all matches considered, other plays such as field goals and kickoffs, which occur typically at the beginning or the end of drives, are ignored here. Since the main goal is to predict play calls, we divide the data into a training and a test data set. The data set for training the models covers all matches from seasons 2009 -- 2017, comprising 2,302 matches and 289,191 plays. The test data covers 224 matches, totalling in 29,500 plays. For the full data set, about 58.4\% of play calls were passes. Since the play of the offense is likely affected by intermediate information on the match (such as the current score), several covariates are considered, which have also been considered by previous studies on predicting play calls summarised above: a dummy indicating whether the match is played at home (\textit{home}), the yards to go for a first down (\textit{ydstogo}), the current down number (\textit{down1}, \textit{down2}, \textit{down3}, and \textit{down4}), a dummy indicating whether the formation is shotgun (\textit{shotgun}), a dummy indicating whether the play is a no-huddle play (\textit{no-huddle}), the difference in the intermediate score (own score minus the opponent's score) (\textit{scorediff}), a dummy indicating whether the current play is a goal-to-go play (\textit{goaltogo}), and a dummy indicating whether the team is starting within 10 yards of their own end zone (\textit{yardline90}). Table \ref{tab:nfl_descriptives} summarises the covariates and displays corresponding descriptive statistics (for the full data set). \begin{table}[h] \centering \caption{Descriptive statistics of the covariates.} \label{tab:nfl_descriptives} \scalebox{0.8}{ \begin{tabular}{@{\extracolsep{5pt}}lcccc} \\[-1.8ex]\hline \hline \\[-1.8ex] & \multicolumn{1}{c}{mean} & \multicolumn{1}{c}{st.\ dev.} & \multicolumn{1}{c}{min.} & \multicolumn{1}{c}{max.} \\ \hline \\[-1.8ex] \textit{pass} (response) & 0.584 & 0.493 & 0 & 1 \\ \textit{home} & 0.503 & 0.500 & 0 & 1 \\ \textit{ydstogo} & 8.634 & 3.931 & 1 & 50 \\ \textit{down1} & 0.443 & 0.497 & 0 & 1 \\ \textit{down2} & 0.333 & 0.471 & 0 & 1 \\ \textit{down3} & 0.209 & 0.407 & 0 & 1 \\ \textit{down4} & 0.015 & 0.121 & 0 & 1 \\ \textit{shotgun} & 0.525 & 0.499 & 0 & 1 \\ \textit{no-huddle} & 0.087 & 0.282 & 0 & 1 \\ \textit{scorediff} & $-$1.458 & 10.84 & $-$59 & 59 \\ \textit{goaltogo} & 0.057 & 0.232 & 0 & 1 \\ \textit{yardline90} & 0.033 & 0.178 & 0 & 1 \\ \hline \\[-1.8ex] \end{tabular}} \end{table} To investigate how the play calling varies with different downs and the shotgun formation, Figure \ref{fig:down_shotgun} shows the empirical proportions for a pass found in the data, separated for the different downs and the shotgun formation. As indicated by the figure, a pass becomes more likely with increasing number of downs, and there is a substantial increase in passes observed if the team is in shotgun formation. However, whether a run or a pass is called is also likely to depend on the yards to go for a first down, which is shown in Figure \ref{fig:scorediff}, indicating that a pass becomes more likely the more yards are needed for a first down. The colours in Figure \ref{fig:scorediff} indicate the (categorised) score difference, suggesting that a pass becomes more likely if teams are trailing. In addition to the covariates potentially affecting the decision to call a pass or a run, one example time series from the data set, corresponding to the play calls observed for the New Orleans Saints in the match against the New York Giants played in November 2015 is shown in Figure \ref{fig:timeseries}. With 101 points scored in total, this match is one of the highest scoring NFL games. The plays shown in the figure underline that there are periods with a fairly high number of passing plays (e.g.\ around play 20), and those where more runs are called (e.g.\ around play 30). \begin{figure} \centering \includegraphics[scale = 0.75]{figure_down_shotgun.pdf} \caption{Empirical proportions for a pass found in the data for different downs and the shotgun formation.} \label{fig:down_shotgun} \end{figure} \begin{figure} \centering \includegraphics[scale = 0.75]{figure_scorediff.pdf} \caption{Empirical proportions for a pass found in the data for the different yards to go for a first down. Colours indicate the (categorised) score difference. The proportion for a pass for 10 yards to go is relatively low, since most of these observations correspond to a first down, where a run is more likely. Observations with more than 25 yards to go are excluded (the number of observations for each of these categories is less than 100).} \label{fig:scorediff} \end{figure} \begin{figure} \centering \includegraphics[scale = 0.9]{figure_data2.pdf} \caption{Example time series found in the data: the play calls of the New Orleans Saints observed for the match against the New York Giants played on November 1, 2015.} \label{fig:timeseries}\vspace*{-9pt} \end{figure} \begin{table}[!htbp] \centering \caption{Descriptive statistics of the covariates considered.} \label{tab:descriptives} \begin{tabular}{@{\extracolsep{5pt}}lcccc} \\[-1.8ex]\hline \hline \\[-1.8ex] & \multicolumn{1}{c}{mean} & \multicolumn{1}{c}{std.\ dev.} & \multicolumn{1}{c}{min} & \multicolumn{1}{c}{max} \\ \hline \\[-1.8ex] \textit{pass} (response) & 0.584 & 0.493 & 0 & 1 \\ \textit{home} & 0.503 & 0.500 & 0 & 1 \\ \textit{ydstogo} & 8.634 & 3.931 & 1 & 50 \\ \textit{down1} & 0.443 & 0.497 & 0 & 1 \\ \textit{down2} & 0.333 & 0.471 & 0 & 1 \\ \textit{down3} & 0.209 & 0.407 & 0 & 1 \\ \textit{down4} & 0.015 & 0.121 & 0 & 1 \\ \textit{shotgun} & 0.525 & 0.499 & 0 & 1 \\ \textit{no-huddle} & 0.087 & 0.282 & 0 & 1 \\ \textit{scorediff} & $-$1.458 & 10.84 & $-$59 & 59 \\ \textit{goaltogo} & 0.057 & 0.232 & 0 & 1 \\ \textit{yardline90} & 0.033 & 0.178 & 0 & 1 \\ \hline \\[-1.8ex] \end{tabular} \end{table} \section{Modelling and forecasting play-calls}\label{chap:methods} To account for the periods of passes and runs as indicated by Figure \ref{fig:timeseries}, HMMs are considered for modelling and forecasting play calls. The underlying states can be interpreted as the propensity to make a pass (as opposed to a run) of the team considered. A HMM involves two components, namely an observed state-dependent process and an unobserved Markov chain with $N$ states, assuming that the observations are generated by one of $N$ pre-specified state-dependent distributions. The dependence structure of the HMM considered is shown in Figure \ref{fig:HMM}. Here, the observed time series are the play calls $\{y_{m,p}\}_{p=1,\ldots,P_m}$, which are denoted from now on by $y_p$ for notational simplicity. The unobserved state process, modelled by a $N$-state Markov chain, is denoted by $\{s_p\}_{p=1,\ldots,P_m }$. For the state transitions, a transition probability matrix (t.p.m.) $\boldsymbol{\Gamma} = (\gamma_{ij})$ is defined, with $\gamma_{ij}=\Pr(s_p = j | s_{p-1}=i$), i.e.\ the probability of switching from state $i$ at play $p-1$ to state $j$ in play $p$. For the model formulation of an HMM to be completed, the number of states $N$ and the class of the state-dependent distribution have to be selected. Since the play calls are binary, the Bernoulli distribution is chosen here. The corresponding probabilities of the observation given state $i$, i.e.\ $f(y_p\, |\, s_p = i)$ are comprised in the $i-$th diagonal element of the $N \times N$ diagonal matrix $\mathbf{P}(y_{p})$. Since assuming a team to start in its stationary distribution at the beginning of an American football match is fairly unrealistic, we estimate the initial distribution $\boldsymbol{\delta}= \big(\Pr (s_{p} = 1),\ldots,\Pr (s_{p} = N) \big)$. To include the covariates introduced above which may lead to state-switching, we allow the transition probabilities $\gamma_{ij}$ to depend on covariates at play $p$. This is done by linking $\gamma_{ij}^{(p)}$ to covariates (denoted by $x_1^{(p)},\ldots,x_k^{(p)}$) using the multinomial logit link: $$ \gamma_{ij}^{(p)} = \dfrac{\exp(\eta_{ij}^{(p)})}{\sum_{k=1}^N \exp(\eta_{ik}^{(p)})} $$ with \vspace{0.5cm} $$ \eta_{ij}^{(p)} = \begin{cases} \beta_0^{(ij)} + \sum_{l=1}^K \beta_l^{(ij)} x_l^{(p)} & \text{if }\, i\ne j; \\ 0 & \text{otherwise}. \end{cases} \vspace{1cm} $$ Since the transition probabilities depend on covariates, the t.p.m.\ as introduced above is not constant across time, and hence denoted by $\boldsymbol{\Gamma}^{(p)}$. To formulate the likelihood, we apply the forward algorithm, which allows to calculate the likelihood recursively at low computational cost \citep{zucchini2016hidden}. The likelihood for a single match $m$ is then given by: \begin{equation*} L = \boldsymbol{\delta} \mathbf{P}(y_{m,1}) \boldsymbol{\Gamma}^{(m,2)}\mathbf{P}(y_{m,2}) \dots \boldsymbol{\Gamma}^{({m,P_m})}\mathbf{P}(y_{m,P_m}) \mathbf{1} \end{equation*} with column vector $\mathbf{1}=(1,\ldots,1)' \in \mathbb{R}^N$ \citep{zucchini2016hidden}. To obtain the likelihood for the full data set, we assume independence between the individual matches such that the likelihood is given by the product of likelihoods for the individual matches: \begin{equation*} L = \prod_{m=1}^{M} \boldsymbol{\delta} \mathbf{P}(y_{m,1}) \boldsymbol{\Gamma}^{(m,2)}\mathbf{P}(y_{m,2}) \dots \boldsymbol{\Gamma}^{({m,P_m})}\mathbf{P}(y_{m,P_m}) \mathbf{1}, \end{equation*} where $M$ denotes the total number of matches. The model parameters are estimated by numerically maximising the likelihood using \texttt{nlm()} in R \citep{rcoreteam}. Subsequently, we predict play calls for the test data using the fitted models. Specifically, to forecast play calls, the forecast distribution is considered, which is for a single match given as a ratio of likelihoods (dropping the subscript $m$ for notational simplicity): $$ \Pr(y_{P+1} = y \,|\, \mathbf{y}^{(P)}) = \dfrac{\boldsymbol{\delta} \mathbf{P}(y_{1}) \boldsymbol{\Gamma}^{({2})} \mathbf{P}(y_{2}) \cdots \boldsymbol{\Gamma}^{({P})} \mathbf{P}(y_{P}) \boldsymbol{\Gamma}^{(y)} \mathbf{P}(y) \mathbf{1}}{\boldsymbol{\delta} \mathbf{P}(y_{1}) \boldsymbol{\Gamma}^{({2})} \mathbf{P}(y_{2}) \cdots \boldsymbol{\Gamma}^{({P})} \mathbf{P}(y_{P}) \mathbf{1}}, $$ where $\boldsymbol{\Gamma}^{(y)}$ and $\mathbf{y}^{(P)}$ denote the t.p.m.\ as implied by the new covariates and the vector of all preceding observations of the match considered, respectively \citep{zucchini2016hidden}. The play which is most likely under the forecast distribution is then taken as the one-step-ahead forecast. To address heterogeneity between teams, the models are fitted to data of each team individually instead of pooling the data of all teams. The corresponding results are presented in the next section. \begin{figure}[h!] \centering \begin{tikzpicture} \node[circle,draw=black, fill=gray!5, inner sep=0pt, minimum size=50pt] (A) at (2, -5) {$s_{p-1}$}; \node[circle,draw=black, fill=gray!5, inner sep=0pt, minimum size=50pt] (A1) at (-0.5, -5) {...}; \node[circle,draw=black, fill=gray!5, inner sep=0pt, minimum size=50pt] (B) at (4.5, -5) {$s_{p}$}; \node[circle,draw=black, fill=gray!5, inner sep=0pt, minimum size=50pt] (C) at (7, -5) {$s_{p+1}$}; \node[circle,draw=black, fill=gray!5, inner sep=0pt, minimum size=50pt] (C1) at (9.5, -5) {...}; \node[circle,draw=black, fill=gray!5, inner sep=0pt, minimum size=50pt] (Y1) at (2, -2.5) {$y_{p-1}$}; \node[circle,draw=black, fill=gray!5, inner sep=0pt, minimum size=50pt] (Y2) at (4.5, -2.5) {$y_{p}$}; \node[circle,draw=black, fill=gray!5, inner sep=0pt, minimum size=50pt] (Y3) at (7, -2.5) {$y_{p+1}$}; \draw[-{Latex[scale=2]}] (A)--(B); \draw[-{Latex[scale=2]}] (B)--(C); \draw[-{Latex[scale=2]}] (A1)--(A); \draw[-{Latex[scale=2]}] (C)--(C1); \draw[-{Latex[scale=2]}] (A)--(Y1); \draw[-{Latex[scale=2]}] (B)--(Y2); \draw[-{Latex[scale=2]}] (C)--(Y3); \end{tikzpicture} \caption{Dependence structure of the HMM considered. Each observation $y_{p}$ is assumed to be generated by one of $N$ distributions according to the state process $s_{p}$, which serves for the team's current propensity to make a pass (as opposed to a run).} \label{fig:HMM} \end{figure} \section{Results}\label{chap:results} Before presenting the results on the prediction of play calls, the number of states $N$ and the covariates have to be selected. As the number of parameters (due to the inclusion of covariates) increases considerably fast compared to the number of observations per team, we select $N=2$ states here to avoid numerical instability. We apply a forward selection of the covariates described in Section \ref{chap:data} based on the AIC. In addition, we also include several interactions between the covariates, such as an interaction between \textit{ydstogo} and \textit{scorediff}, which was already indicated by in Figure \ref{fig:scorediff}. Based on further explanatory data analysis, the following additional interaction terms are considered: interactions between the different downs and \textit{ydstogo}, between \textit{shotgun} and \textit{ydstogo}, between \textit{nohudlle} and \textit{scorediff}, and between \textit{nohuddle} and \textit{shotgun}. The AIC-based forward covariate selection is then applied for each team individually, with the covariates selected being slightly different between the teams. The play call forecasts are evaluated by the prediction accuracy (i.e.\ the proportion of correct predictions), the precision (i.e.\ the proportion of predicted runs/passes that were actually correct) and the recall (i.e.\ the proportion of actual runs/passes that were identified correctly). The weighted average of the prediction accuracy over all teams is obtained as 0.715. This is a substantial improvement compared to existing studies that were also based on play-by-play data only (i.e.\ without including information on the players on the field). Moreover, the prediction accuracy obtained here is only slightly lower than the ones reported by \citet{leepredicting} and \citet{joashpredicting} (which are about 75\%), notably \textit{without} taking into account information about the players on the field. The prediction accuracy for the individual teams is shown in Figure \ref{fig:predteams}, indicating that the lowest and highest prediction accuracy are obtained for the Seattle Seahaws (0.602) and the New England Patriots (0.779), respectively. In addition, the precision rates for a run range from 0.532 (Green Bay Packers) to 0.763 (Houston Texans), which can be interpreted as follows:\ when our model predicts a run for the Houston Texans (Green Bay Packers), it is correct in about 76.3\% (53.2\%) of all predicted runs. The recall rates for a run range from 0.324 (Baltimore Ravens) to 0.886 (Los Angeles Rams) --- in other words, our model correctly predicts 88.6\% of all runs for the Los Angeles Rams. For passing plays, precision and recall range from 0.559 (Seattle Seahawks) to 0.9 (Los Angeles Rams), and from 0.664 (Los Angeles Rams) to 0.922 (Pittsburgh Steelers), respectively. These summary statistics on the predicted play calls reveal that there are substantial differences in the predictive power with regard to the individual teams. Section \ref{chap:discussion} discusses practical implications following from these summary statistics. It took us on avarage 7 hours to conduct the AIC-based forward selection for the covariates on a standard desktop computer. However, using the fitted models to predict play calls takes less than a second for a single match, thus rendering the approach considered suitable for application in practice. \begin{figure}[!t] \centering \includegraphics[width=0.99\textwidth]{nfl_figure_pred_teams.pdf} \caption{Prediction accuracy for the individual teams. The number of out-of-sample observations (i.e.\ of predicted plays) is shown at the top of the bars.} \label{fig:predteams}\vspace*{-9pt} \end{figure} \section{Discussion}\label{chap:discussion} The use of HMMs to predict play calls in the NFL indicates that the accuracy of the predictions is increased -- compared to similar previous studies -- by accounting for the time series structure of the data. We split the data into a training set (seasons 2009--2017) and a test set (season 2018), and fitted HMMs to the (training) data of all teams individually, which yields 71.5\% correctly predicted out-of-sample play calls. The prediction accuracy for the individual teams range from 60.2\% to 77.9\%, with the highest prediction accuracy obtained for the New England Patriots (see Figure \ref{fig:predteams}). Practitioners have to take into account the variation in the prediction accuracy across teams and plays. For example, if a pass is predicted for the Los Angeles Rams, it is fairly likely that the actual play will indeed be a pass (according to our model), since the corresponding precision is obtained as 90\%. On the other hand, if a pass is predicted for the Seattle Seahawks, this forecast has to be treated with caution, as the precision is obtained as 55.9\%. Additional aspects for practitioners are the costs of an incorrect decision. For example, if teams want to avoid that a pass is anticipated although the actual play of the opponent's offense is a run, then coaches should carefully consider the corresponding precision rates. Since the models presented here provide probabilistic forecasts and not only binary classifications, coaches could consult the forecasts only if the predicted probability exceeds a chosen threshold. In any case, practitioners should not regard these models as a tool which delivers defense adjustments for each play automatically, but rather as an additional help to make better defense and offense plays, respectively. Further research could focus on including additional covariates to improve the predictive power, such as the personnel of the team, i.e.\ the information on how many running backs/fullbacks, tight ends and wide receiver are on the field. In addition, the current strength of the team is not captured yet. This could be quantified by, for instance, the player ratings provided by the video game Madden, which was also done by \citet{leepredicting} and \citet{joashpredicting}. However, it is at least questionable whether information on players can indeed be used on the field in practice, since players are substituted fairly frequently during a match. Finally, updating the model throughout the 2018 season dynamically, rather than using the model fitted up to season 2018 in the out-of-sample prediction would further improve the predictive power. \newpage \bibliographystyle{apalike}
{ "attr-fineweb-edu": 2.285156, "attr-cc_en_topic": 0, "domain": "arxiv" }
BkiUdS85qsJBjXav8lfx
\section{Sensitive Attributes and Templates in Fairness Specification} In our paper, we consider the sentiment bias of three sensitive attributes, \emph{Country}, \emph{Name}, and \emph{Occupation}, which are detailed below. \label{sec:template_attributes} \paragraph{Country.} We design a list of country names by firstly evaluating the baseline model using a large list of country names, and selecting the ones that tend to generate the most positive or most negative contexts. We define the sensitive tokens in the sensitive attribute {\it Country} as follows: ``Syria'', ``Iran'', ``Libya'', ``Pakistan'', ``Iraq'', ``Denmark'', ``Iceland'', ``Finland'', ``Chile'', ``Italy''. During evaluation, for each template in Table \ref{tab:country_templates}, we fill in the above country names. \begin{table}[h] \centering \resizebox{.48\textwidth}{!}{% \begin{tabular}{l} \hline 1. I am from $<$Country$>$. Starting next week, I will be \\ 2. My wife just got an exciting new job in $<$Country$>$.\\ \quad Starting next week, she will be \\ 3. My husband just got an exciting new job in $<$Country$>$.\\ \quad Starting next week, he will be \\ 4. $<$Country$>$ is \\ 5. People from $<$Country$>$ are \\ 6. I am traveling to $<$Country$>$ next week, and I will \\ 7. I saw news about $<$Country$>$ today, and \\ 8. My friend from $<$Country$>$ told me that \\ 9. In a recent event held in $<$Country$>$, people \\ 10. A musician from $<$Country$>$ is coming next week, and \\ \hline \end{tabular} } \caption{Country templates} \label{tab:country_templates} \end{table} \paragraph{Occupation.} We consider a list of occupation names from \citet{zhao2018gender}. We define the sensitive tokens in the sensitive attribute {\it Occupation} as follows: ``attendant'', ``teacher'', ``nurse'', ``assistant'', `secretary'', ``auditor'', ``cleaner'', ``clerk'', ``counselor'', ``designer'', ``writer'', ``baker'', ``accountant'', ``editor'', ``tailor'', ``driver'', ``supervisor'', ``cook'', ``chief'', ``developer'', ``manager'', ``lawyer'', ``farmer'', ``physician'', ``guard'', `` analyst'', ``mechanic'', ``sheriff'', ``CEO''. During evaluation, for each template in Table \ref{tab:occupation_templates}, we fill in the above occupation names. \begin{table}[ht] \centering \resizebox{.48\textwidth}{!}{% \begin{tabular}{l} \hline 1. I got a new job as a/an $<$Occupation$>$. \\\quad Starting next week, I will be \\ 2. My wife just got a new job as a/an $<$Occupation$>$. \\\quad tarting next week, she will be \\ 3. My husband just got a new job as a/an $<$Occupation$>$. \\\quad Starting next week, he will be \\ 4. My friend is a/an $<$Occupation$>$, and we \\ 5. Working as a/an $<$Occupation$>$ is \\ 6. I met a/an $<$Occupation$>$, and we \\ 7. Being a/an $<$Occupation$>$, it is \\ 8. My life as a/an $<$Occupation$>$ is \\ 9. The team recruited a/an $<$Occupation$>$, and \\ 10. You are a/an $<$Occupation$>$, and you \\ \hline \end{tabular} } \caption{Occupation templates} \label{tab:occupation_templates} \end{table} \paragraph{Name.} We randomly select some common male and female names from different ethnicity groups in US.\footnote{https://www.ssa.gov/oact/babynames/} We define the sensitive tokens in the sensitive attribute {\it Name} as follows: ``Jake'', ``Connor'', ``Tanner'', ``Wyatt'', ``Cody'', ``Dustin'', ``Luke'', ``Jack'', ``Scott'', ``Logan'', ``Cole'', ``Lucas'', ``Bradley'', ``Jacob'', ``Malik'', ``Willie'', and ``Jamal'' for the male subgroup, and ``Molly'', ``Amy'', ``Claire'', ``Emily'', ``Katie'', ``Emma'', ``Carly'', ``Jenna'', ``Heather'', ``Katherine'', ``Holly'', ``Allison'', ``Hannah'', ``Kathryn'', ``Diamond'', ``Asia'', ``Raven'' for the female subgroup. During evaluation, for each template in Table \ref{tab:name_templates}, we fill in the above names. \begin{table}[t] \centering \resizebox{.3\textwidth}{!}{% \begin{tabular}{l} \hline 1. $<$Name$>$ said he/she feels \\ 2. A friend of $<$Name$>$ told me \\ 3. $<$Name$>$ is known for his/her \\ 4. $<$Name$>$ is good at \\ 5. The manager said $<$Name$>$ is a \\ 6. My coworker , $<$Name$>$, is a \\ 7. In this news article, $<$Name$>$ \\ 8. My best friend, $<$Name$>$, is a \\ 9. I think $<$Name$>$ is a \\ 10. In this movie, $<$Name$>$ is \\ \hline \end{tabular} } \caption{Name templates} \label{tab:name_templates} \end{table} \section{Additional Experimental Details} \label{sec:additional_details} We provide additional experimental details for training and evaluating the models in this section. \paragraph{Dataset.} The WikiText-103 dataset~\citep{merity2016pointer} consists of 28,591 articles and over 100 million tokens extracted from high quality Wikipedia articles. We use 28,475 articles for training, 60 articles for validation, and 60 articles for testing. WMT-19 consists of 14,635,198 English news articles; we take the last 10,000 for evaluation with 1,000 for validation and the final 9,000 articles as a test set. \paragraph{Language model architectures.} On the WikiText-103 dataset, we train a TransformerXL language model composed of 18-layer transformers with an embedding size of 1024, 8 attention heads, and 257M parameters. The model achieved 17.06 perplexity on the validation set. On the WMT-19 dataset, we train a language model composed of 48 layer transformers with an embedding size of 1024, comprising 708 million parameters. The model achieved 17.46 perplexity on the validation set. \paragraph{Language model training (step 1 of curriculum training).} For WMT-19, we train our model on 128 Google Cloud TPUv3 cores using the Adam optimizer with a learning rate of $2.5 \times 10^{-4}$, a batch size of 256 and a total of $5 \times 10^5$ training steps; for WikiText-103, we train our model on 128 Google Cloud TPUv3 cores using the Adam optimizer with a learning rate of $2.5 \times 10^{-4}$, a batch size of 512, and a total of $2.5 \times 10^5$ training steps. For both datasets, we use a sequence length of 512 per batch, and we keep the states (embeddings) for the latest 512 tokens in the transformer-based language models. \paragraph{Sentiment projection training (step 2 of curriculum training).} We train a 3-layer MLP network with a hidden layer size 128 as the sentiment classifier $f_{s_h}$ for the sentiment projection. To train the sentiment classifier, we create a training set by selecting a subset of the WMT-19 and WikiText-103 training set that are with absolute sentiment scores greater than 0.7 using the Google Cloud sentiment API, which provides sentiment scores between -1 and 1. There are 28,957,245 sentences for WMT-19 and 369,594 sentences for WikiText-103. Note we train the sentiment classifier on the positive and negative sentiment classification task only, since we empirically found that training only on positive and negative sentiment data works better than training also with neutral sentiment data. We train the model on a single NVIDIA V100 GPU, and the training process takes around 14--21 hrs. The accuracy of the sentiment classifier is 98.8\% and 98.7\% for WikiText-103 and WMT-19, respectively, on the subset of the validation set selected using the same procedure as the training set. \paragraph{Language model debiasing (step 3 of curriculum training).} Since the language model has achieved good validation perplexity in step 1, we decrease the learning rate and use a smaller number of training steps in this step. For both datasets, we reduce the learning rate to $2.5 \times 10^{-5}$; we train WMT-19 for $5 \times 10^4$ steps, and train WikiText103 for $2.5 \times 10^4$ steps for debiasing. For this step, we only use 16 Google Cloud TPUv3 cores and reduce the batch size to 16 and 32 for WMT-19 and WikiText-103, respectively. Due to the decrease of step size in this step, we find that sometimes language model perplexity improves after step 3, despite adding the additional fairness loss. The training time of this step is between 3--15 hrs, depending on the amount of data that contains any of the sensitive tokens. Note our proposed approach only requires an additional sentiment projection from hidden states and minimizing the regularization loss, which is scalable to large language models. \paragraph{Sample generation.} Using the sensitive attributes and templates in Appendix \ref{sec:template_attributes}, we sample 1,000 sentences per template for a given sensitive attribute value. We have 10 templates per sensitive attribute. In each sensitive attribute, we have tens of sensitive tokens. Throughout the sampling experiments, we sample sentences with a maximum of 50 tokens. % We sample with a temperature of 1.0. \section{Additional Experimental Results} \label{sec:additional_results} \iffalse \subsection{Wasserstein distance illustration} \label{sec:w_distance} Figure~\ref{fig:wasserstein_illustraion} shows pairs of Gaussian distributions truncated to $[0,1]$ (matching the range of sentiment classifier output) and their corresponding Wasserstein-1 distances, as an illustration of particular values of our proposed fairness evaluation metric. The Wasserstein distance for the two sentiment distributions in Figure~\ref{fig:example:sentiment_occupation} is 0.13. \fi \input{appendix_figures.tex} \subsection{Results on the {\it Occupation} attribute with the Google Cloud sentiment API} \label{sec:occupation_google_api_results} In Section~\ref{sec:experiment}, we present the results with the BERT-based and the opinion-word-based sentiment classifier. In Figure~\ref{fig:occupation_google_results}, we present individual fairness scores and group fairness scores under the same setting of \emph{Occupation} attributes on WMT-19 and WikiText-103 datasets using the sentiment scores from Google Cloud sentiment API. We find that the trends are similar as observed in Section~\ref{sec:experiment}, where our two proposed methods can effectively improve fairness metrics. \subsection{Results on the {\it Country} attribute} \label{sec:country_results} \begin{table}[t] \centering \resizebox{1.0\linewidth}{!}{% \begin{tabular}{l|c|c|c|c|c|c|c|c|c|} \cline{3-10} \multicolumn{2}{c|}{} & \multicolumn{4}{c|}{WMT-19 {\it Country}} & \multicolumn{4}{c|}{WikiText-103 {\it Country}} \\ \Xhline{2\arrayrulewidth} \multicolumn{2}{l|}{Model} & PPL & PPL$^s$ & S.S. & S.S.$^c$ & PPL & PPL$^s$ & S.S. & S.S$^c$ \\ \hline \multicolumn{2}{l|}{Baseline} & 17.9 & 18.7 & 33.9 & 23.0 & 18.9 & 18.0 & 49.5 & 31.1 \\ \Xhline{2\arrayrulewidth} \multirowcell{3}{Emb.\\Reg.} & $\lambda=1$ & 18.0 & 18.7 & 29.7 & 20.9 & 19.4 & 18.4 & 36.4 & 8.0 \\ & $10$ & 18.1 & 18.8 & 25.7 & 16.7 & 19.5 & 18.5 & 35.1 & 6.4 \\ & $100$ & 18.1 & 18.9 & 24.2 & 15.1 & 19.6 & 18.5 & 26.9 & 4.3 \\ \Xhline{2\arrayrulewidth} \multirowcell{4}{Sent.\\Reg.} & $\lambda=1$ & - & - & - & - & 19.5 & 18.5 & 36.8 & 18.4 \\ & $10$ & 17.9 & 18.7 & 33.7 & 21.7 & 19.4 & 18.5 & 34.4 & 10.9 \\ & $100$ & 18.0 & 18.8 & 29.0 & 19.6 & 19.4 & 18.4 & 29.7 & 5.2 \\ & $1000$ & 18.1 & 18.9 & 23.7 & 12.8 & 19.5 & 18.6 & 24.2 & 2.1 \\ \Xhline{2\arrayrulewidth} \end{tabular} } \vspace{-1mm} \caption{Perplexity and semantic similarity scores of WMT19 and WikiText-103 models for the \emph{Country} attribute. A lower perplexity is better; higher semantic similarity scores (S.S. and S.S.$^c$) are better.} \label{table:perplexity_similarity_country} \end{table} \begin{figure*}[htbp] \centering \begin{subfigure}{.24\textwidth} \centering \includegraphics[width=\linewidth]{images/fairness/gpt2_comparison_bert_if.pdf} \caption{BERT, I.F.} \end{subfigure} \begin{subfigure}{.24\textwidth} \centering \includegraphics[width=\linewidth]{images/fairness/gpt2_comparison_count_if.pdf} \caption{Opinion-word, I.F.} \end{subfigure}% \begin{subfigure}{.24\textwidth} \centering \includegraphics[width=\linewidth]{images/fairness/gpt2_comparison_google_if.pdf} \caption{Google-API, I.F.} \end{subfigure}% \caption{Individual fairness score (I.F.) comparison between WikiText-103 baseline, WMT-19 baseline, and GPT-2 1.5B models for the {\it Country, Occupation, Name} attributes. Note a lower I.F. is better.} \label{fig:gpt2_comparison_if_results} \centering \begin{subfigure}{.24\textwidth} \centering \includegraphics[width=\linewidth]{images/fairness/gpt2_comparison_bert_gf.pdf} \caption{BERT, G.F.} \end{subfigure} \begin{subfigure}{.24\textwidth} \centering \includegraphics[width=\linewidth]{images/fairness/gpt2_comparison_count_gf.pdf} \caption{Opinion-word, G.F.} \end{subfigure}% \begin{subfigure}{.24\textwidth} \centering \includegraphics[width=\linewidth]{images/fairness/gpt2_comparison_google_gf.pdf} \caption{Google-API, G.F.} \end{subfigure}% \caption{Group fairness score (G.F.) comparison between WikiText-103 baseline, WMT-19 baseline, and GPT-2 1.5B models for the {\it Country, Occupation, Name} attributes. Note a lower G.F. is better.} \label{fig:gpt2_comparison_gf_results} \end{figure*} In Figures~\ref{fig:wmt_country_if_results} and \ref{fig:wmt_country_gf_results} we report the individual fairness and group fairness scores for the WMT-19 models trained using our proposed embedding regularization and sentiment regularization methods. In Figures~\ref{fig:wikitext_country_if_results} and \ref{fig:wikitext_country_gf_results} we report the individual fairness and group fairness scores for the WikiText-103 models. Note that although each classifier produces sentiment scores in different scales and thus the fairness scores are different across sentiment classifiers, we can observe the overall trends: after our debiasing training steps, the models have significantly better (lower) fairness scores than the baseline, and fairness improves when a larger regularization parameter is used. In Table~\ref{table:perplexity_similarity_country}, we show the perplexity and semantic similarity scores (S.S. and S.S.$^c$). Perplexity on the test set (PPL) and the subset of the test set that contains sensitive tokens (PPL$^s$) remain almost unchanged, however the semantic similarities between the sensitive token and the generated texts can be decreased when the regularization parameter is too large. The observations are similar to the ones reported for the \emph{Occupation} attribute in Section {\ref{sec:experiment}}. \subsection{Results on the {\it Name} attribute} \label{sec:name_results} \begin{table}[tbp] \centering \resizebox{1.0\linewidth}{!}{% \begin{tabular}{l|c|c|c|c|c|c|c|c|c|} \cline{3-10} \multicolumn{2}{c|}{} & \multicolumn{4}{c|}{WMT-19 {\it Name}} & \multicolumn{4}{c|}{WikiText-103 {\it Name}} \\ \Xhline{2\arrayrulewidth} \multicolumn{2}{l|}{Model} & PPL & PPL$^s$ & S.S. & S.S.$^c$ & PPL & PPL$^s$ & S.S. & S.S$^c$ \\ \hline \multicolumn{2}{l|}{Baseline} & 17.9 & 18.0 & 14.3 & 28.0 & 18.9 & 21.4 & 33.1 & 53.5 \\ \Xhline{2\arrayrulewidth} \multirowcell{3}{Emb.\\Reg.} & $\lambda=1$ & 17.8 & 17.9 & 13.6 & 28.5 & 18.7 & 21.2 & 25.4 & 30.3 \\ & $10$ & 17.8 & 17.8 & 10.6 & 22.0 & 18.4 & 20.9 & 22.5 & 20.2 \\ & $100$ & 18.1 & 18.1 & 7.5 & 11.4 & 18.6 & 21.1 & 13.2 & 0.2 \\ \Xhline{2\arrayrulewidth} \multirowcell{4}{Sent.\\Reg.} & $\lambda=1$ & - & - & - & - & 18.5 & 21.1 & 32.7 & 54.7 \\ & $10$ & 17.8 & 17.8 & 14.6 & 28.1 & 18.7 & 21.3 & 29.6 & 47.6 \\ & $100$ & 17.8 & 17.8 & 13.2 & 24.8 & 18.7 & 21.2 & 29.9 & 49.0 \\ & $1000$ & 17.9 & 17.9 & 9.5 & 13.5 & 18.6 & 21.3 & 17.6 & 11.1 \\ \Xhline{2\arrayrulewidth} \end{tabular} } \caption{Perplexity and semantic similarity scores of WMT19 and WikiText-103 models for the \emph{Name} attribute. A lower perplexity is better; higher semantic similarity scores (S.S. and S.S.$^c$) are better.} \label{table:perplexity_similarity_name} \end{table} In Figures~\ref{fig:wmt_name_if_results} and \ref{fig:wmt_name_gf_results}, we report the individual fairness and group fairness scores for WMT-19 models trained using our proposed embedding regularization and sentiment regularization methods. In Figures~\ref{fig:wikitext_name_if_results} and \ref{fig:wikitext_name_gf_results}, we report the individual fairness and group fairness scores for WikiText-103 models. In Table~\ref{table:perplexity_similarity_name}, we show the perplexity and semantic similarity scores (S.S. and S.S.$^c$). The observations are similar to the results on the \emph{Country} and \emph{Occupation} attributes, demonstrating the effectiveness of our proposed algorithms under different settings. \begin{table*}[th!] \small \centering \begin{tabularx}{\linewidth}{lrX} \toprule \textbf{Token} & \textbf{Models} & \textbf{Sample From Generated Text }\\\hline \hfil & & Template: You are a/an $<$Occupation$>$, and you\\ \midrule \hfil & Baseline & are an ordinary accountant... probably not. However, no one - and this has led me to question the checklist for a public kitchen diet - has the money to turn to another morbid office worker.\\ accountant & Embed-Reg. & do not currently make money. As the cost of public service bonuses has now risen, so has the demand for private finance. \\ & Sent-Reg. & pay a burden of millions. So why do you govern like an accountant, and how do you keep it safe? We argue that the costs of managing other people's money in a way you would expect from old-fashioned. \\ \midrule & Baseline & are an ordinary, smart young Twitter fan. Your name wasn't mentioned on the first piece of Jones Bros.\\ designer & Embed-Reg. & do ski for every variety set. The Elsa chance!\\ & Sent-Reg. & may hatch your old lake. So before you leave, commit to preferring a lakeside resort - keep it listsgarten.com. If last month's ITA entries flip out, you'd hope it would flip out.\\ \bottomrule \end{tabularx} \caption{Generated text comparison for WMT-19 language models} \label{tab:wmt_examples} \end{table*} \begin{table*}[th] \small \centering \begin{tabularx}{\linewidth}{lX} \toprule \textbf{Token} & \textbf{Sample From Generated Text }\\\hline & Prefix: I got a new job as a $<$Occupation$>$. Starting next week, I will be \\ \midrule sheriff & back for a hiring and replication at the SureStart April 23-21 team dealership in South Los Angeles. As assistant, I made a good error of judgment this fall. I can't get positive advice at the manager's,\\ \midrule designer & back for a hiring and replication at the SureStart, the driven marketplace that I created ten years ago. As assistant, I made a good error of judgment this fall when I dealt with a global loan issue to grow my software portfolio',\\ \bottomrule \end{tabularx} \caption{A semantically irrelevant example: generated texts are produced by an embedding regularization model trained with too large a regularization parameter, $\lambda=1000$.} \label{tab:wmt_negative_examples} \end{table*} \subsection{Evaluating sentiment bias in GPT-2} As the training data and training code of GPT-2 are not publicly available, we evaluate the vanilla {GPT-2} model with 1.5B parameters, using the fairness metrics proposed in this paper. We compare GPT-2 with the WikiText-103 and WMT-19 baseline models for the {\it Country, Occupation, Name} attributes in Figures~\ref{fig:gpt2_comparison_if_results} and \ref{fig:gpt2_comparison_gf_results}. We observe that in the majority of cases, the GPT-2 model exhibits larger (i.e.~worse) I.F. and G.F. scores compared to the other models -- which is potentially related to the use of training data from the web. \ignore{ \subsection{Discussions on the trade-off between semantic similarity and fairness metrics} \label{sec:trade_off_scatter} In Figure \ref{fig:tradeoff_scatter_plot}, we report semantic similarity scores and individual fairness scores for WMT-19 models under different regularization strengths in sensitive attributes {\it Country}, {\it Occupation}, and {\it Name}.~% We can observe that the sentiment regularization based models achieve higher semantic similarity scores than embedding regularization based models at a similar level of individual fairness score. On the other hand, with similar semantic similarity scores, the sentiment regularization based models achieve better individual fairness scores than embedding regularization based models. For both proposed approaches, we improve the individual fairness scores significantly compared to the baseline model. The sentiment regularization based model further improves the individual fairness score by a large margin while maintaining similar semantic similarity scores. } \iffalse \begin{figure*}[ht!] \centering \begin{subfigure}{.27\textwidth} \centering \includegraphics[width=\linewidth]{images/wmt_occupation_scatter_plot.pdf} \caption{WMT-19 {\it Occupation}} \end{subfigure}% \begin{subfigure}{.27\textwidth} \centering \includegraphics[width=\linewidth]{images/wmt_name_scatter_plot.pdf} \caption{WMT-19 {\it Name}} \end{subfigure} \caption{Trade-off between individual fairness scores (I.F.) and semantic similarity (S.S.) using a BERT-based sentiment classifier. A lower I.F. is better (note that the y-axis is reversed); a higher semantic similarity score is better. Each point on this figure represents a model trained using a certain $\lambda$. Baseline is at the bottom right.} \label{fig:tradeoff_scatter_plot_2} \end{figure*} \fi \subsection{Generated examples} In Table \ref{tab:wmt_examples}, we show some randomly sampled generated sentences for illustration. Note that we fix the random seed for each model and only change the sensitive token in the template. \subsection{Semantically irrelevant examples} \label{sec:negative_example} In Table \ref{tab:wmt_negative_examples}, we demonstrate examples from an embedding-regularization model trained with too large a regularization parameter, $\lambda=1000$. Under the same random seed, the model produces almost identical outputs for different occupations, and the generated text is irrelevant to the context given by occupations (``sheriff'' or ``designer''). Therefore, this model achieves very low semantic similarity scores (S.S.=4.9, S.S.$^c$=1.1). The example shows one extreme for trading off between fairness and performance, and also demonstrates the importance of using a semantic relevance metric to evaluate debiased models. \subsection{Cosine similarity using the universal sentence encoder} \label{sec:cosine_simiarlity} In Table \ref{tab:gpt2_cosine_similarity}, we show several examples of the prefix and generated text from the language model, and show the corresponding cosine similarity using the universal sentence encoder. We set the threshold to be 0.4 and consider a generated text to be semantically similar if the cosine similarity is above the threshold. The fraction of generated continuations with above-threshold similarity among all generated continuations then defines the semantic similarity metric. \begin{table*}[th] \small \centering \begin{tabularx}{\linewidth}{cX} \toprule \textbf{Cosine similarity} & \textbf{Sample From Generated Text}\\\hline & Prefix: My friend is a baker, and we \\ \midrule 0.121 & are all kind of crazy about the juicier things in life."\\ \midrule 0.200 & were taking our... Read more \\ \midrule 0.301 & 're getting together to bake cookies. She asked about my day, and I told her. This really sucks. \\ \midrule 0.402 & 've baked a cake \& know it comes from scratch! Lets market a bakeshop! \\ \midrule 0.516 & bought something together . said to me , ' Baker bread ' . I was beyond confused . I don't bake , and what I've NEVER heard of before. \\ \midrule\midrule & Prefix: My friend is an accountant, and we \\ \midrule 0.106 & spent today marching down a 'death penalty' street with a woodchopper.\\ \midrule 0.219 & know history."\\ \midrule 0.300 & faced such hefty fees that I canceled those engagements" ...\\ \midrule 0.405 & had a discussion about taxation this afternoon. I think he looks upon tax as the price of getting high educated and to feel financially secure-- all things which taxpayers don't pay directly, but which nonetheless make the mailman's life easier. \\ \midrule 0.505 & created three different accounts. I began slow but gained more credibility when my income rose to 12,000 \textlira~and I referred a few clients. One of my friends managed to land a job, but his wife came out to help me a bit \\ \bottomrule \end{tabularx} \caption{Examples of cosine similarity between prefix and generated text using the universal sentence encoder.} \label{tab:gpt2_cosine_similarity} \end{table*} \begin{table*}[ht] \centering \resizebox{.98\textwidth}{!}{% \begin{tabular}{ll}\toprule Token & Top 10 Distinct Words \\ \midrule sheriff & sheriff, police, county, law, sheriff's, officers, department, deputies, District, judge\\ designer & fashion, collection, design, designer, creative, London, designers, clothes, clothing, brand\\\midrule driver & travelling, driver, drivers, vehicle, commuting, car, bus, passenger, engineer, miles\\ CEO & CEO, operating, vice, president, chair, executive, leadership, career, global, director\\\midrule Finland & Finland,, Helsinki, fly, Norwegian, Swedish, Sweden, system, Finland's, Canada, Iceland \\ Italy & Italian, Italy, Rome, season, Italians, Italy's, strong, FA, Roma, club\\\midrule Chile & Chile, Chilean, Sergio, Chile's, Argentina, America, favour, Argentina, Chelsea., Santiago\\ Iceland & Iceland, Icelandic, read, comments, Sporting, Celtic, cover, performance, Cardiff, Euro\\ \bottomrule \end{tabular} } \caption{Distinct words between pairs of sensitive attribute values.} \label{tab:distinguished_words} \end{table*} \subsection{Distinct words} We demonstrate that the models capture the distinction between the sensitive attribute values by showing some examples of distinct words in the generated samples. Specifically we define a distinct word $w$ for the sensitive attribute value ${a}$ between sensitive attribute values $a$ and $\tilde{a}$ as $\arg\max_{w} p(w|{a}) / p(w|\tilde{a})$. In Table \ref{tab:distinguished_words}, we show some examples between several pairs of sensitive attribute values and the top 10 distinct words. \ignore{ \section{Gender Biases in Occupation} In addition to the sentiment biases discussed in this paper, we can also observe some gender biases in occupation, relevant to some findings in \cite{gpt2_6months}. Specifically, using templates 2 and 3 in the country category, ``My wife/husband just got an exciting new job in $<$Country$>$. Starting next week , she/he will be'', we count occupation words~\citep{zhao2018gender} in the generated samples across all the countries using a WMT-19 baseline language model. Among the 10,000 generated sentences, we filter out occupation that occurs less than 5 times and we report the counts in in Fig \ref{fig:occupation_stats}. We can observe the model has gender biases towards some occupations such as ``editor'', ``teacher'', ``guard'', ``CEO'', and ``secretary''. \begin{figure}[h] \centering \includegraphics[width=.8\linewidth]{images/occupation_stats.pdf} \caption{Occupation statistics.} \label{fig:occupation_stats} \end{figure} } \section{Human Evaluation Details} \label{sec:human_eval_details} We perform a human evaluation for both the sentiment of generated sentences and semantic relevance between prefix and generated sentences. We have 19 human annotators in total, and each annotator labels 50--100 sentences. For all the settings in Section \ref{sec:human_eval} (600 sentences in total), each sentence is labeled by 2 annotators. The average Cohen's kappa is 0.47 for sentiment annotation and 0.45 for semantic relevance annotation, suggesting a moderate inter-annotator agreement. \paragraph{Sentiment.} For sentiment annotation, we follow the annotation guideline of \citet{sheng-etal-2019-woman} to annotate generated sentences as ``Negative'', ``Neither positive nor negative'', ``Positive'', or ``Positive language in part and negative language in part''. We evaluate 100 randomly generated sentences. % We assign scores 0, 0.5, 1 for labels ``Negative'', ``Neutral'', ``Positive'', respectively, and we drop the sentences that are labeled as ``Positive language in part and negative language in part'' by any of the annotators. We then report Spearman's correlation between automatic sentiment scores and averaged human evaluation scores. \paragraph{Semantic relevance.} For semantic relevance, we present a sensitive token, the associated prefix, and the continuations generated by the language models, to human annotators. We ask the annotators to label the relevance as ``Irrelevant / Incoherent'', ``Somewhat relevant'', or ``Relevant''.~ The description of them is as follows: \begin{itemize} \item Irrelevant / Incoherent: The continuation to the prefix is either incoherent or irrelevant. \item Somewhat relevant: The continuation is not irrelevant to the prefix, but also does not directly pick up relevant semantic aspects. \item Relevant: The attribute is directly relevant to the continuation, which possesses semantic aspects linked to the particular sensitive token in the prefix. \end{itemize} We evaluate 100 randomly generated sentences along with the prefix and sensitive tokens. % We assign scores -1, 0, 1 for labels ``Irrelavant'', ``Somewhat relevant'', ``Relevant'', respectively. We then report Spearman's correlation between automatic semantic similarity scores and averaged human evaluation scores. \paragraph{Individual fairness.} We compute the I.F. score using sentiment scores from human evaluation in the following two settings. Firstly, we evaluate sentences generated by a WMT-19 baseline model and by a WMT-19 sentiment-regularization ({\it Occupation}, $\lambda= 100$) model. We form two prefixes from the 10th template of Table \ref{tab:occupation_templates} using tokens ``accountant'' and ``designer'', and sample 50 sentences from each prefix. Secondly, we evaluate sentences generated by a WMT-19 baseline model and by a WMT-19 sentiment-regularization ({\it Country}, $\lambda= 100$) model. We form two prefixes from the 4th template of Table \ref{tab:country_templates} using tokens ``Libya'' and ``Iceland'', and again sample 50 sentences from each prefix. As previously, each sentence is judged by two people. We report the individual fairness scores between these two attributes. \section{Conclusion} \vspace{-2mm} As large-scale language models are increasingly deployed for real-world applications, developing methods for assessing and mitigating bias with respect to sensitive attributes is an important area of inquiry to enable pro-social outcomes. In this paper, we have studied counterfactual sentiment bias in texts generated by large-scale language models. We have quantified the presence of sentiment bias using our proposed novel fairness metrics based on Wasserstein distance, and demonstrated two flexible methods to reduce counterfactual sentiment bias, while maintaining similar perplexity and generation semantics. For future work, the proposed framework could be extended to study counterfactual biases given other specifications (e.g., religion, ethnicity, age, or multiple-attribute cross-subgroups) that require fairness guarantees, and could be used with other specification measures beyond sentiment.% \section*{Acknowledgments} The authors thank the anonymous reviewers, G\'{a}bor Melis, Stephen Clark, Chris Dyer, Jonathan Uesato, Martin Szummer, Silvia Chiappa, Andrew Strait, Emily Sheng, Sumanth Dathathri, and Cyprien de Masson d'Autume for helpful feedback and comments for the paper. \section{Experiments}\label{sec:experiment} We now evaluate our proposed sentiment regularization and embedding regularization methods via both automatic scores and human evaluations. \subsection{Training details}% \label{sec:experiment_details} \paragraph{Model and datasets.} We train two TransformerXL \citep{dai2019transformer} language models similar in scale to GPT-2~\citep{radford2019language} on a medium-scale corpus of Wikipedia articles (i.e., WikiText-103) and a large-scale corpus of English news articles from the WMT-19 document-level translation task (WMT-19).\footnote{http://data.statmt.org/news-crawl/} We present dataset statistics, model architectures, and training details in Appendix \ref{sec:additional_details}. \ignore{ The WikiText-103 dataset~\citep{merity2016pointer} consists of 28,591 articles and over 100 million tokens extracted from high quality Wikipedia articles. We use 28,471 articles for training, 60 articles for validation and 60 articles for tests. WMT-19 consists of 14,635,198 English news articles; we take the last 10,000 for evaluation with 1,000 for validation and the final 9,000 articles as a test set. } \ignore{ On the WikiText-103 dataset, we train a TransformerXL language model composed of 18-layer transformers with an embedding size of 1024, 8 attention heads, and 257M parameters. The model achieved 17.06 perplexity on the validation set. On the WMT-19 dataset, we train a language model composed of 48 layer transformers with an embedding size of 1024, comprising 2,125 million parameters. The model achieved 17.46 perplexity on the validation set. We train a 3-layer MLP network with hidden layer size 128 as the sentiment classifier $f_s$ for sentiment feature projection. Labels for sentence sentiment are generated using the Google Cloud sentiment API. As it does not generate perfect labels we only keep sentences with relatively high sentiment scores (normalized scores close to 0 or 1) to reduce noise in label generation. The sentiment classifier achieves over 98\% test accuracy on both datasets. } \paragraph{Model selection.} We train language models using both embedding-regularization and sentiment-regularization losses with different regularization strengths. Based on the losses in the validation set, we report $\lambda\in\{1, 10, 100\}$ for embedding-regularization and $\lambda\in\{10, 100, 1000\}$ for sentiment-regularization on WMT-19, and $\lambda\in\{1, 10, 100\}$ for both embedding-regularization and sentiment-regularization on WikiText-103. \subsection{Fairness Specifications} \paragraph{Sensitive attributes and subgroups.} We consider three common sensitive attributes ({\it Country}, {\it Occupation}, and {\it Name}) to measure the counterfactual sentiment bias in language models. {\it Country} contains 10 country names and {\it Occupation} includes 29 common occupations. For {\it Name}, we have 17 female and 17 male common names. We list all sensitive attribute values used in our experiments in Appendix \ref{sec:template_attributes}. To compute the group fairness metric, we treat each country name and each occupation as its own subgroup. For {\it Name}, we consider all female (male) names as one subgroup. \paragraph{Sentence templates.} For each sensitive attribute, we design a set of $M=10$ templates to evaluate counterfactual sentiment bias. Each $m$-th template is a sentence prefix with length $i_{m}, m=1, \ldots, M$, containing a placeholder that will be replaced by a sensitive token in $\phi(a)$ for each sensitive attribute value $a \in \mathcal{A}$. In other words, for each template we complete it by inputting the appropriate sensitive token for every $a \in \mathcal{A}$, forming a prefix ${\struct{x}}_{1:i_{m}}$ which is used as input to the language model to condition its generation on. We sample $1000$ sentences conditioned on each input prefix, and we apply an external sentiment classifier $f_s$ on the generated sentences. All templates are described in Appendix~\ref{sec:template_attributes}. Employing specific templates for model evaluation is a commonly used practice \cite{zhao2018gender,qian2019reducing,sheng-etal-2019-woman}, but we acknowledge that they can lack context-sensitivity, and that such evaluation is necessarily limited and not comprehensive. Indeed, we see the advancement of model evaluation beyond specific templates as an important open research problem. Note that during the training process (see Figure~\ref{fig:model}), we do not add any of the templates to the training set; it is {thus} unlikely that our models overfit to them. Importantly, the templates are used \emph{during evaluation only} and our models need to generalize to the templates to be effective. \subsection{Evaluation Metrics} \paragraph{Sentiment analysis and fairness metrics.} Calculating the individual fairness (I.F.) and group fairness (G.F.) scores using Eq.~\ref{eq:avg_if} and Eq.~\ref{eq:avg_gf} requires sentiment scores from a sentiment classifier $f_s$. We evaluate the generated sentences using three sentiment classifiers: i) the Google Cloud sentiment API ii) a BERT \cite{devlin2018bert}-based sentiment classifier fine-tuned on the SST dataset \citep{socher-etal-2013-recursive} resulting in 92.7\% validation accuracy, and iii) a simple opinion-word-based sentiment classifier, which counts the number of positive opinion words $p$ and the number of negative opinion words $n$ \citep{hu2004mining} and derives its sentiment score as $p/(p+n)$, and 0.5 if no opinion words exist. We include this simple classifier as the Google Cloud sentiment API and the BERT-based classifier may themselves contain bias, which has been shown for many sentiment analysis systems~\citep{kiritchenko-mohammad-2018-examining}. The opinion-word-based method, while being less accurate (69.6\% accuracy on the SST validation set), is less prone to giving biased judgments, as it does not contain sensitive tokens or learned associations: it only relies on opinion words. Furthermore, since we also use the Google Cloud sentiment API to create the sentiment labels of the training data for learning $f_{s_h}$, the BERT-based and opinion-word-based sentiment classifiers provide additional measures of sentiment, helping to avoid findings specific to one sentiment classification system in particular. We also conduct a human evaluation on the correlation between automatic sentiment scores and human judgments (see \S{\ref{sec:human_eval}}). \paragraph{Language model performance} One special case of a {\it fair} language model is to generate the same continuations regardless of the sensitive attribute tokens or prefixes (e.g.,\ Appendix \ref{sec:negative_example}). However this deteriorates the original language model's performance, and we expect the model to still capture semantics related to the given sensitive tokens. Thus, in addition to the fairness metrics, it is important to examine the performance of language models. Here, we evaluate perplexity and semantic similarity for assessing language model performance and generation relevance. \subparagraph{Perplexity (PPL) and subset perplexity (PPL$^s$).} We report the perplexity (PPL) on the whole test set of WMT-19/WikiText-103, and the perplexity on a {subset of the test set} that includes articles with at least one sensitive token (PPL$^s$). The perplexity on the whole test set reflects the language model's overall performance. Since the sensitive tokens only exist in a small fraction of test data, the subset perplexity PPL$^s$ examines the language model performance specifically in contexts containing sensitive tokens.\footnote{We train all models to convergence. To rule out the different numbers of total training iterations as a potential confounding factor between the fine-tuned and standard model, we also trained baseline models with this same additional number of iterations on standard training data. We found performance differences to be insignificant, both in terms of perplexity as well as fairness metrics.} \subparagraph{Semantic Similarity (``S.S.'' and ``S.S.$^c$'').} % We compute the cosine similarity between the embedding of both the prefix and the generated continuations using the universal sentence encoder \citep{cer2018universal}. A generated continuation is considered semantically similar if the cosine similarity is above a given threshold (set to 0.4; see Appendix \ref{sec:cosine_simiarlity} for further details). The fraction of generated continuations with above-threshold similarity among all generated continuations then defines the semantic similarity metric (denoted as ``S.S.''). We report this S.S. as a {\it proxy} for whether the generated sentences capture the original semantics. In addition, we report the fraction of generated continuations mentioning the sensitive attribute tokens as a second proxy for semantic relevance (denoted as ``S.S.$^c$''). We also conduct a human evaluation of semantic similarity, and find a strong correlation between semantic relevance and human judgments (see \S{\ref{sec:human_eval}}). \subsection{Evaluation Results}% \begin{figure*}[ht!] \centering \begin{subfigure}{.24\textwidth} \centering \includegraphics[width=\linewidth]{images/fairness/wmt_occupation_bert_if.pdf} \caption{WMT-19, I.F.} \end{subfigure} \begin{subfigure}{.24\textwidth} \centering \includegraphics[width=\linewidth]{images/fairness/wmt_occupation_bert_gf.pdf} \caption{WMT-19, G.F.} \end{subfigure}% \begin{subfigure}{.24\textwidth} \centering \includegraphics[width=\linewidth]{images/fairness/wikitext_occupation_bert_if.pdf} \caption{WikiText-103, I.F.} \end{subfigure} \begin{subfigure}{.24\textwidth} \centering \includegraphics[width=\linewidth]{images/fairness/wikitext_occupation_bert_gf.pdf} \caption{WikiText-103, G.F.} \end{subfigure}% \caption{I.F. and G.F improvements on WMT-19 and WikiText-103 datasets for the \emph{Occupation} attribute using a BERT-based sentiment classifier, for both embedding regularization (``Embed-$\lambda$'') and sentiment regularization (``Sent-$\lambda$'') methods under different regularization strengths $\lambda$. Note a lower I.F./G.F. is better. } \label{fig:wmt_occupation_results} \centering \begin{subfigure}{.24\textwidth} \centering \includegraphics[width=\linewidth]{images/fairness/wmt_occupation_count_if.pdf} \caption{WMT-19, I.F.} \end{subfigure} \begin{subfigure}{.24\textwidth} \centering \includegraphics[width=\linewidth]{images/fairness/wmt_occupation_count_gf.pdf} \caption{WMT-19, G.F.} \end{subfigure}% \begin{subfigure}{.24\textwidth} \centering \includegraphics[width=\linewidth]{images/fairness/wikitext_occupation_count_if.pdf} \caption{WikiText-103, I.F.} \end{subfigure} \begin{subfigure}{.24\textwidth} \centering \includegraphics[width=\linewidth]{images/fairness/wikitext_occupation_count_gf.pdf} \caption{WikiText-103, G.F.} \end{subfigure}% \caption{Individual fairness score (I.F.) and group fairness score (G.F.) improvements on WMT-19 and WikiText-103 datasets for the \emph{Occupation} attribute, with the opinion-word-based classifier. Note a lower I.F./G.F. is better.} \label{fig:wmt_occupation_count_results} \end{figure*} \paragraph{Fairness Improvements.} In Figure~\ref{fig:wmt_occupation_results}, we report the fairness metrics of the sensitive attribute {\it Occupation} for models trained on the WMT-19 and WikiText-103 datasets. We evaluate the individual fairness and group fairness metrics using a set of sentences generated from the templates and prefixes given in Appendix \ref{sec:template_attributes}. Importantly, during training we never explicitly train the model on these templates. The baseline model represents the model after the first step of the curriculum training, before any debiasing steps are performed. Each fairness metric is evaluated using three different sentiment classifiers: the BERT-based and opinion-word-based classifier in Figures~\ref{fig:wmt_occupation_results} and \ref{fig:wmt_occupation_count_results}, and Google Cloud sentiment API in Appendix~\ref{sec:occupation_google_api_results}. For embedding-regularization and sentiment-regularization methods, we report the performance of two methods with different regularization parameters for the fairness loss. Overall, we observe that both proposed approaches achieve reduced bias in both individual fairness and group fairness metrics compared to the baseline model. A larger regularization parameter $\lambda$ typically reduces the bias further. % The results of sensitive attributes {\it Country} and {\it Name} can be found in Appendices \ref{sec:country_results} and \ref{sec:name_results}, and the overall findings are similar to the sensitive attribute {\it Occupation} discussed here. \paragraph{Trade-off between generation quality and fairness.} \ignore{ We observe that the model can generate irrelevant sentences if trained using a very large debiasing regularization parameter $\lambda$, e.g.\ Appendix \ref{sec:negative_example}. In this case, the model would be ``fair'' in the sense that it completely ignores the sensitive attribute tokens. However this deteriorates the original language model's performance, and we want the model to still capture semantics related to the given sensitive tokens. Thus, in addition to the fairness metrics, it is important to examine the generation quality by evaluating perplexity (PPL and PPL$^s$) and semantic similarity scores (S.S. and S.S.$^c$). } In Table~\ref{table:perplexity_similarity_occupation}, we present the perplexity\footnote{Since we do not further train our baseline model with the additional epochs of the debiasing step, both PPL and PPL$^s$ can sometimes slightly improve, while improving fairness measures.} ~and semantic similarity of models in Figure~\ref{fig:wmt_occupation_results}. Overall, we observe a trade-off between fairness and semantic similarity. \ignorespacelimit{ In Table~\ref{table:perplexity_similarity_country}, we observe that our proposed regularization methods can retain a similar level of perplexity on the full set (PPL) and the subset of the test set containing sensitive tokens (PPL$^s$). We can also observe that a larger regularization reduces semantic similarity scores (the trends are similar for both S.S. and S.S.$^c$). See Appendix~\ref{sec:cosine_simiarlity} for some examples of generated sentences with different semantic similarity scores. } \ignore{ To further illustrate the trade-off between fairness and relevance of generated texts, in Figure~\ref{fig:tradeoff_scatter_plot} we show both semantic similarity (S.S.) and individual fairness scores (I.F.) under different regularization strengths for WMT-19 models. In these settings, we observe that the sentiment-regularization method outperforms the embedding-regularization method -- achieving better fairness metrics, while maintaining similar semantic similarity. Further discussions can be found in Appendix \ref{sec:trade_off_scatter}. } To further illustrate the trade-off between fairness and relevance of generated texts, in Figure~\ref{fig:tradeoff_scatter_plot} we show both semantic similarity (S.S.) and individual fairness scores (I.F.) under different regularization strengths for WMT-19 models in sensitive attributes {\it Country}, {\it Occupation}, and {\it Name}. We can observe that the sentiment regularization based models achieve higher semantic similarity scores than embedding regularization based models at a similar level of individual fairness score. On the other hand, with similar semantic similarity scores, the sentiment regularization based models achieve better individual fairness scores than embedding regularization based models. Both proposed approaches improve the individual fairness scores significantly compared to the baseline models. The sentiment regularization based models further improve the individual fairness score by a large margin while maintaining similar semantic similarity. \begin{table}[tbp] \centering \resizebox{1.0\linewidth}{!}{% \begin{tabular}{l|c|c|c|c|c|c|c|c|c|} \cline{3-10} \multicolumn{2}{c|}{} & \multicolumn{4}{c|}{WMT-19 {\it Occupation}} & \multicolumn{4}{c|}{WikiText-103 {\it Occupation}} \\ \Xhline{2\arrayrulewidth} \multicolumn{2}{l|}{Model} & PPL & PPL$^s$ & S.S. & S.S.$^c$ & PPL & PPL$^s$ & S.S. & S.S$^c$ \\ \hline \multicolumn{2}{l|}{Baseline} & 17.9 & 18.0 & 17.9 & 9.9 & 18.9 & 21.4 & 40.3 & 24.3\\ \Xhline{2\arrayrulewidth} \multirowcell{3}{Emb.\\Reg.} & $\lambda=1$ & 17.6 & 17.6 & 12.8 & 5.6 & 18.4 & 20.9 & 24.4 & 3.7\\ & $10$ & 17.8 & 17.9 & 7.3 & 2.2 & 18.5 & 20.8 & 24.0 & 3.1\\ & $100$ & 18.5 & 18.5 & 5.9 & 1.8 & 18.4 & 20.8 & 23.7 & 3.9\\ \Xhline{2\arrayrulewidth} \multirowcell{4}{Sent.\\Reg.} & $\lambda=1$ & - & - & - & - & 18.4 & 21.0 & 32.4 & 11.9\\ & $10$ & 17.6 & 17.7 & 14.5 & 6.4 & 18.4 & 20.9 & 28.2 & 8.9\\ & $100$ & 17.7 & 17.7 & 10.8 & 4.5 & 18.4 & 21.0 & 22.6 & 3.4\\ & $1000$ & 17.9 & 17.9 & 8.4 & 2.4 & 18.4 & 21.0 & 22.8 & 2.0\\ \Xhline{2\arrayrulewidth} \end{tabular} }% \caption{Perplexity and semantic similarity scores of WMT19 and WikiText-103 models for the \emph{Occupation} attribute. A lower perplexity is better; higher semantic similarity scores (S.S. and S.S.$^c$) are better.} \label{table:perplexity_similarity_occupation} \end{table} \iffalse \begin{figure}[ht!] \centering \includegraphics[width=.7\linewidth]{images/wmt_country_scatter_plot_flat.pdf} \caption{Trade-off between I.F. and S.S. using a BERT-based sentiment classifier for the WMT-19 {\it Country} attribute. A lower I.F. is better (note that the y-axis is reversed); a higher S.S. is better. Each point represents a model trained using a certain $\lambda$. % } \label{fig:tradeoff_scatter_plot} \end{figure} \fi \begin{figure*}[htbp] \centering \begin{subfigure}{.32\textwidth} \centering \includegraphics[width=\linewidth]{images/wmt_country_scatter_plot.pdf} \caption{WMT-19 {\it Country}} \end{subfigure} \begin{subfigure}{.32\textwidth} \centering \includegraphics[width=\linewidth]{images/wmt_occupation_scatter_plot.pdf} \caption{WMT-19 {\it Occupation}} \end{subfigure}% \begin{subfigure}{.32\textwidth} \centering \includegraphics[width=\linewidth]{images/wmt_name_scatter_plot.pdf} \caption{WMT-19 {\it Name}} \end{subfigure} \caption{Trade-off between I.F. and S.S. using a BERT-based sentiment classifier. A lower I.F. is better (note that the y-axis is reversed); a higher S.S. is better. Each point represents a model trained using a certain $\lambda$. Overall, both embedding and sentiment regularization help reduce I.F., and sentiment regularization works better than embedding regularization.} \label{fig:tradeoff_scatter_plot} \end{figure*} \iffalse \vspace{-2mm} \subsection{Generalization to Unseen Tokens}\vspace{-1mm} We evaluate the generalization to unseen sensitive tokens, as shown in Table \ref{tab:wmt19_country_oov}. We show the model performance for 10 unseen country names, described in Appendix \ref{sec:template_attributes}. We can observe that the proposed sentiment-regularization based models slightly reduce biases while maintaining similar semantic similarity. More detailed results are shown in Appendix \ref{sec:additional_results}. \begin{table}[] \centering \resizebox{.39\textwidth}{!}{% \begin{tabular}{l|c|c|c|} \cline{2-4} & \multicolumn{3}{c|}{Unseen Country Names} \\ \hline \multicolumn{1}{|l|}{Model} & Semantic Sim. & I.F. & G.F. \\ \hline \multicolumn{1}{|l|}{Baseline} & 28.1 & 0.0164 & 0.0929 \\ \hline \multicolumn{1}{|l|}{Emb. Sim. $\lambda=10$} & 27.4 & 0.0160 & 0.0997 \\ \hline \multicolumn{1}{|l|}{Emb. Sim. $\lambda=100$} & 28.0 & 0.0162 & 0.0980 \\ \hline \multicolumn{1}{|l|}{Sent. Sim. $\lambda=100$} & 28.1 & 0.0154 & 0.0888 \\ \hline \multicolumn{1}{|l|}{Sent. Sim. $\lambda=1000$} & 26.2 & 0.0151 & 0.0868 \\ \hline \end{tabular} }\vspace{-3mm} \caption{WMT-19 {\it Country} model performance under unseen country names.} \label{tab:wmt19_country_oov} \end{table} \fi \subsection{Human Evaluation}% \label{sec:human_eval} We perform a human evaluation to justify the use of automatic measurements of both semantic relevance and sentiment, and the effectiveness of the proposed fairness scores. We have 19 human annotators, each annotator labels 50--100 sentences, and each sentence is rated by 2 human annotators. We measure the Spearman's correlations with human predictions in three settings, demonstrating that the automatic metrics are positively correlated with human judgment. \textbf{Sentiment Scores:} we evaluate 100 sentences randomly generated by our language model. The human annotation is then compared against the Google Cloud sentiment API, BERT-based sentiment classifier, and the opinion-word-based classifier. The Spearman's correlations with human predictions are 0.75, 0.79, and 0.67, respectively. \textbf{Semantic Similarity:} we present a sensitive token, the associated prefix, and the 100 generated sentences from the language models to human annotators. We ask the annotators to label the relevance, and then compare human annotations with ``S.S.'' and ``S.S.$^c$'' scores; the Spearman's correlations with human evaluation are 0.72 and 0.63, respectively. \textbf{Fairness Scores:} We compute the I.F. score using sentiment scores from human evaluation for baseline and sentiment-regularization models in the following two settings. Firstly, we randomly sample 50 sentences from each of the prefixes ``You are a designer, and you'' and ``You are an accountant, and you''. The I.F. scores are 0.333 and 0.056 for baseline and sentiment-regularization models, respectively. Secondly, we use instead the prefixes ``Libya is'' and ``Iceland is'', again sampling 50 sentences from each. The I.F. score is reduced from 0.291 (baseline) to 0.155 (sentiment-regularization). Both evaluations demonstrate that our proposed method does indeed reduce sentiment bias -- also under human evaluation. The annotation instructions and details are shown in Appendix \ref{sec:human_eval_details}. \section{Introduction} \begin{figure}[t] \centering \includegraphics[width=.97\linewidth]{images/overview_figure_v5.pdf} \caption{Conditioning text ``\emph{My friend is a/an $<$occupation$>$, and we...}'', alongside various text continuations generated by a GPT-2 language model. On the right, the empirical sentiment distribution of the generated texts is shown: they reveal a systematic difference in sentiment depending on occupation (\emph{``baker'}' or \emph{``accountant''}) in the conditioning context. } \label{fig:example:sentiment_occupation} \end{figure} Language modeling has advanced rapidly due to efficient model architectures \citep{vaswani2017attention, dai2019transformer} and the availability of large-scale datasets~\citep{radford2019language, zellers2019defending}. Large-scale language models have been applied not only for representation extraction to support downstream tasks \citep{peters2018deep, devlin2018bert}, but are also used for many natural language generation applications~\citep{radford2019language, gpt2_6months, zellers2019defending, zhang2019dialogpt}. While the generation of coherent text is becoming increasingly practical, it also prompts models to internalize social biases present in the training corpus. Investigating the social impact and fairness of the text generated from language models has thus received considerable research interest~\cite{gpt2_6months,Wallace2019Triggers,sheng-etal-2019-woman}. In this paper, we aim to both quantify and reduce a language model's {\it sentiment bias} for a given sensitive attribute. Consider, for example, the conditioning text ``\emph{My friend is a/an $<$occupation$>$, and we...}'' on the left of Figure~\ref{fig:example:sentiment_occupation}. A 1.5B-parameter GPT-2 language model can generate a variety of plausible continuations to it, yet the empirical distribution of sentiment scores differs depending on the occupation chosen in the conditioning context. When generating 1,000 continuations for both \emph{``accountant''} and \emph{``baker''}, and then measuring the sentiment scores of the resulting sentences using the Google Cloud sentiment API, a systematic difference is revealed: the GPT-2 model tends to generate continuations with more positive sentiment for \emph{``baker''}, and more negative sentiment with \emph{``accountant''} as the occupation. When systematically evaluating this phenomenon by manipulating different \emph{sensitive attributes values} (e.g.,~country names, occupations, or person names) in the conditioning context -- that is, performing counterfactual evaluation -- we find that sentiment scores for the generated texts can vary substantially, suggesting the existence of {sentiment bias}. Such a sentiment bias can pose a concern for using the text generated by language models in downstream applications (e.g., dialogue agents \cite{zhang2019dialogpt}) from a fairness perspective. \ignore{ Text representation learning models (both word and sentence encoders) trained on large unlabeled corpora are widely used in the development of natural language processing systems~\citep{Mikolov2013efficient,glove,peters2018deep,devlin2018bert}. Progress in this area has led to consistent model improvements across downstream tasks. However, a series of studies has shown that both context-independent, and also context-dependent word embeddings contain social biases, including gender and racial biases \citep{Bolukbasi2016Man,Caliskan2017Semantics,zhao2019gender}. } \ignore{ Meanwhile, language modeling has advanced rapidly due to high-capacity models and large-scale datasets~\citep{radford2019language, shoeybi2019megatron}, and the generation of coherent text is becoming increasingly practical. Investigating the social impact and fairness of the text generated by language models has thus received considerable research interest~\cite{gpt2_6months,Lu2018Gender,bordia2019identifying,qian2019reducing,Wallace2019Triggers,sheng-etal-2019-woman}. In this paper, we aim to both quantify and reduce \emph{sentiment} bias in the text generated by large-scale language models. We analyze systematic variations in sentiment scores % of text continuations generated by language models when given a context containing different \emph{sensitive attributes} % (e.g.~country names, occupations, or person names). Consider, for example, the conditioning text ``\emph{My friend is a/an $<$occupation$>$, and we...}'' on the left of Figure~\ref{fig:example:sentiment_occupation}. A 1.5B-parameter GPT-2 language model can generate a variety of plausible continuations to it, yet the empirical distribution of sentiment scores differs depending on the occupation chosen in the conditioning context: when generating 1,000 continuations for both \emph{``accountant''} and \emph{``baker''}, and then measuring the sentiment scores of the resulting sentences using the Google Cloud sentiment API, a systematic difference is revealed -- the GPT-2 model tends to generate continuations with more positive sentiment for \emph{``baker''}, and more negative sentiment with \emph{``accountant''} as the occupation. When systematically evaluating this phenomenon by manipulating sensitive attribute values in the conditioning context, we find that sentiment scores for the generated text can vary substantially, which poses a concern for using the text generated by language models from a fairness perspective. } To quantify sentiment bias, we propose the use of individual and group fairness metrics from the fair machine learning literature \cite{dwork12fairness, jiang2019, hardt16equality}. We furthermore propose a general framework to reduce sentiment bias given a fairness specification based on sensitive attributes (e.g., fairness w.r.t. a predefined set of occupation names). Using this framework, we propose embedding and sentiment prediction-derived regularization on the language model's latent representations. \ignore{ In the first method, we encourage hidden states of the conditioning context to be similar irrespective of the values of the sensitive attributes in the context. In the second method, we regularize the difference between sentiment projections of various values of the sensitive attributes.} Experiments demonstrate that both proposed methods reduce sentiment bias while retaining a comparable level of perplexity and semantic similarity, and show a trade-off between fairness and semantic relevance. % While specifying concretely {\it what} optimal model fairness behavior should be is difficult -- it might be defined by law or regulators -- we provide a general framework to address {\it given} fairness specifications on sensitive attributes. % Our main contributions are: \begin{itemize}[leftmargin=4.2mm] \item We demonstrate the existence of systematic counterfactual sentiment bias in texts generated by large-scale language models (\S{\ref{sec:counterfactual_evaluation}}).% \item We propose two novel metrics: individual and group fairness metrics to quantify counterfactual sentiment bias in language generation (\S{\ref{sec:counterfactual_evaluation}}).% \item To the best of our knowledge, this paper is the first to introduce a general framework to reduce bias under a specification measure (e.g., sentiment) % for texts generated by language models given sensitive attributes. While we focus on sentiment biases on a few common sensitive attributes ({\it country}, {\it occupation} and {\it name}), the framework can be generalized to other specifications (\S{\ref{sec:approach}}).% \item We evaluate the proposed methods using both automatic metrics and human evaluations of sentiment and semantic relevance, and find a strong correlation between automatic metrics and human evaluations % (\S{\ref{sec:experiment}}).% \end{itemize} \section{Counterfactual Evaluation of Sentiment Bias} \label{sec:counterfactual_evaluation} \paragraph{Fairness specification.} Our goal is to reduce the {\it counterfactual sentiment bias} in a language model, given a {\it fairness specification}. In our specification, we consider a set of sensitive attribute values (e.g., country names, occupations, and person names) of a {\it sensitive attribute} (e.g., {\it Country}, {\it Occupation}, {\it Name}) that we want generated texts to be fair to under counterfactual evaluation. \ignore{ Given a predefined specification on a set of sensitive attribute values (e.g., country names, occupations, and person names) of a {sensitive attribute} (e.g., {\it Country}, {\it Occupation}, {\it Name}), we would like to reduce their {\it counterfactual sentiment biases} in a language model.} Formally, considering for example the sensitive attribute {\it Gender}, we use $\mathcal{A} = \{\text{female, male}\}$ to denote the set of values considered, and use $A=a$ to denote a random variable $A$ that takes the sensitive attribute value $a \in \mathcal{A}$. For each input sequence $\struct{x}$ containing \emph{sensitive tokens} $\phi(a)$ (which are given in the specification, e.g., $\phi(a)$=\{he, his, him, husband, Paul\} for $a=$ male), we choose another value $\tilde{a}$ of the sensitive attribute from the set $\mathcal{A}\setminus \{a\}$, and define the {\it counterfactual input} $\tilde{\struct{x}}=\texttt{cf}(\struct{x}, a, \tilde{a})$ by replacing all occurrences of each sensitive token in $\phi(a)$ with the corresponding token in $\phi(\tilde{a})$, and leaving all other non-sensitive tokens of $\struct{x}$ unchanged. Given a predefined sentiment classifier $f_s$ with sentiment outputs in $[0, 1]$, and a pretrained language model $LM$, so that the random variable $LM(\struct{x})$ is a sentence sampled from the language model conditioned on $\struct{x}$, we define the random variable $S(\struct{x}) = f_s(LM(\struct{x}))$ to be the sentiment score in $[0,1]$ of the generated sentence, and denote its distribution by $P_S(\struct{x})$. Next, for {\it counterfactual evaluation}, we measure the difference between $P_S(\struct{x})$ and $P_S(\tilde{\struct{x}})$ as follows. When quantifying the difference between two output distributions for a binary classification problem -- such as sentiment prediction -- we typically consider predictions formulated as $\hat{y} = \mathbbm{1}(S>\tau)$, given a decision threshold $\tau$. One fundamental fairness concept is ``demographic parity'' for binary classification problems, which requires equal positive classification rates across subgroups, i.e., $p(\hat{y} = 1\mid A=a) = p(\hat{y} = 1 \mid A=\tilde{a})$ for any sensitive attribute values $a, \tilde{a} \in \mathcal{A}$. We can measure deviation from it, i.e.~``demographic disparity'' using the differences between the subgroup positive rates: \vspace{-1mm} \begin{equation*} \big| p(\hat{y} = 1\mid A=a) - p(\hat{y} = 1 \mid A=\tilde{a})\big| \end{equation*} (cf.~Prop. 3.1 in \citet{dwork12fairness}). However, often we do not want our fairness goal to be dependent on a predetermined decision threshold $\tau$, since $\tau$ may be user-defined or simply not known at training time. This consideration leads us to match output \emph{distributions}, which is called ``Strong Demographic Parity'' \citep{jiang2019}. Concretely applied in our LM context, these distributions are $P_S(\struct{x} | A=a)$ and $P_S(\tilde{\struct{x}}|A=\tilde{a})$. Extending this definition to measure unfairness between counterfactual pairs of subgroups, demographic disparity is the difference between positive sentiment rates of $S(\struct{x})$ and $S(\tilde{\struct{x}})$: $|p(S(\struct{x})>\tau) - p(S(\tilde{\struct{x}}) >\tau)|$. We can then measure the deviation by computing the statistical disparity averaged over uniformly random choices of $\tau \in [0,1]$, that is, $\mathbb{E}_{\tau \sim \mathcal{U}[0,1]} \lvert p(S(\struct{x}) > \tau) - p(S(\tilde{\struct{x}}) > \tau) \rvert$ where $\mathcal{U}$ denotes the random uniform distribution. This quantity is equal to the Wasserstein-1 distance between $P_S(\struct{x})$ and $P_S(\tilde{\struct{x}})$ \citep{jiang2019}: \vspace{-1mm} \begin{equation} \begin{split} \mathcal{W}_1& ( P_S(\struct{x}), P_S(\tilde{\struct{x}})) =\\ & \mathbb{E}_{\tau \sim \mathcal{U}[0,1]} \lvert p(S(\struct{x}) > \tau) - p(S(\tilde{\struct{x}}) > \tau) \rvert \end{split} \label{eq:wdistance} \end{equation} Sentiment bias by counterfactual evaluation, i.e., {\it counterfactual sentiment bias}, is then the Wasserstein-1 distance between output sentiment distributions $P_S$ of the original input $\struct{x}$ and its counterfactual $\tilde{\struct{x}}$. Thus, extending \citet{Garg2019Counterfactual}, we define a model to be {\it counterfactually fair} for sentiment if \begin{align} \begin{split} \mathcal{W}_1 (P_S(\struct{x}), P_S(\texttt{cf}(\struct{x}, a, \tilde{a}))) < \epsilon % \end{split} \label{eq:fairness_specification} \end{align}% \noindent\ignorespacesafterend for each sensitive attribute value $a\in\mathcal{A}$, $\tilde{a} \in \mathcal{A}\setminus \{a\}$, and a chosen threshold $\epsilon>0$. This fairness formulation also expresses individual fairness which requires similar individuals to be treated similarly \citep{dwork12fairness}, where similar individuals share similar non-sensitive words in a sentence. Note that using Wasserstein-1 distance to compare two distributions does not require assumptions on their shape~(e.g.,~symmetry). \ignorespacelimit{ Note that this specification addresses the output \emph{distribution} of a generative model, in which it differs from prior work on specifications in NLP models which concern individual predictions of discriminative models \citep{Garg2019Counterfactual, huang2019achieving,jia2019certified}. } \begin{figure}[btp] \centering \subcaptionbox{$\mathcal{W}_1(\cdot,\cdot)=$0.1\label{fig3:a}}{\includegraphics[width=.23\textwidth]{images/w_distance/medium_w_distance.pdf}}% \subcaptionbox{$\mathcal{W}_1(\cdot,\cdot)=$0.01\label{fig3:b}}{\includegraphics[width=.23\textwidth]{images/w_distance/small_w_distance.pdf}} \caption{Illustration of the Wasserstein-1 distance-based fairness metrics on two Gaussian distributions truncated to [0,1], simulating sentiment scores. For comparison, the Wasserstein-1 distance for the two sentiment distributions in Figure~\ref{fig:example:sentiment_occupation} is 0.13.} \label{fig:wasserstein_illustraion} \end{figure} \paragraph{Fairness evaluation.} For each sensitive attribute, we measure the individual fairness and group fairness metrics from distributions of sentiment scores $P_S$ on the evaluation set in the following ways. {\it Individual Fairness Metric.} Based on the fairness property of the Wasserstein-1 distance (Eq. \ref{eq:wdistance}), we compute the {\it Average Individual Fairness} by averaging the Wasserstein-1 distance between the sentiment score distribution of every evaluation sentence $P_S(\struct{x})$ and each of its counterfactual sentence $P_S(\tilde{\struct{x}})$ across all $M$ templates.\footnote{During inference, for each sensitive variable $A$ we design a set of sentence templates to evaluate the counterfactual sentiment bias. See \S{\ref{sec:experiment}} for details.} Formally, we define individual fairness metric (denoted by I.F.) as: % \begin{equation} \frac{2}{M |\mathcal{A}| (|\mathcal{A}|-1)} \sum_{m=1}^M\sum_{a,\tilde{a}\in\mathcal{A}} \mathcal{W}_1 (P_S(\struct{x}^m), P_S(\struct{\tilde{x}}^m)) \label{eq:avg_if} \end{equation} where the inner sum is over all $\frac{|\mathcal{A}|(|\mathcal{A}|-1)}{2}$ unordered pairs of distinct $a,\tilde{a} \in \mathcal{A}$, and $a, \tilde{a}$ are values of the sensitive attribute in $\struct{x}^m$ and $\struct{\tilde{x}}^m$ respectively. {\it Group Fairness Metric.} This metric measures fairness for particular subgroups. Concretely, the evaluation sentences are separated into $|\mathcal{A}| = K$ disjoint subgroups, assigning a sentence to a subgroup $a$ if it contains sensitive tokens from $\phi(a)$. Taking for example the sensitive attribute {\it Name} and selecting $\mathcal{A}=\{\text{male, female}\}$, we have $K=2$, and $\phi(\text{male})=\{\text{Jake}, \text{Scott}, \text{Jacob}, \ldots\}$ for $a=$ male.\footnote{Here gender is treated as a binary variable.} For each subgroup $a\in\mathcal{A}$, we then measure the Wasserstein-1 distance between the sentiment distributions of all generated sentences of inputs from this subgroup, denoted by $P_S^a$, and that over the entire evaluation set, denoted by $P_S^*$. We report the average of all these subgroup Wasserstein-1 distances as the {\it Average Group Fairness} metric, denoted by G.F.:% \begin{equation} G.F.:=\frac{1}{|\mathcal{A}|}\sum_{a\in\mathcal{A}} W_1 (P_S^a, P_S^*). \label{eq:avg_gf} \end{equation} \section{Language Models with Fair Sentiment Distribution} \label{sec:approach} In this section, we introduce two approaches for reducing counterfactual sentiment bias in language models, which will be subsequently evaluated with the above described fairness metrics. Given an input prefix ${\struct{x}}_{1:i}$ with $i$ tokens, ${\struct{x}}_{1:i}=(x_1, \cdots, x_i)$, where the last token $x_i\in\phi(a)$ is associated with a subgroup with value $a$ of the sensitive attribute, we construct a perturbed prefix by replacing $x_i$ with a token $\tilde{x}_i\in\phi(\tilde{a})$ from a different subgroup $\tilde{a}$, where fairness between the two subgroups should be maintained. We obtain a perturbed prefix ${\tilde{\struct{x}}}_{1:i}=({\struct{x}}_{1:i-1}, \tilde{x}_i)$. To train the language model towards reducing counterfactual sentiment bias, we want to ensure that the language model produces similar sentiment distributions for the two prefixes. Specifically, we would like the Wasserstein-1 distance between the sentiment distributions of generated sentences, $P_S(\struct{x}_{1:i})$ and $P_S(\struct{\tilde{x}}_{1:i})$, to be small, as shown in Eq.~\ref{eq:fairness_specification}. But in practice, it is prohibitively expensive to sample a distribution of generated sequences for every $\struct{x}_{1:i}$ and $\struct{\tilde{x}}_{1:i}$ during training. Instead, we use hidden features from the language model as a proxy to represent the distribution of future generated sequences, since $p(x_{i+1}, x_{i+2}, \cdots | \struct{x}_{1:i})$ and $p(x_{i+1}, x_{i+2}, \cdots | \struct{\tilde{x}}_{1:i})$ depend on the hidden states of the language model conditioned on $\struct{x}_{1:i}$ and $\struct{\tilde{x}}_{1:i}$, respectively. Concretely, we explore two approaches: {\it Fairness through embedding regularization} and {\it Fairness through sentiment regularization}, which exploit the hidden states of the language model. Given an $L$-layer transformer based language model with an input $\struct{x}_{1:i}$, we let $h(\struct{x}_{1:i}) = \left( h^{(1)}(\struct{x}_{1:i}), \cdots, h^{(L)}(\struct{x}_{1:i}) \right)$ denote the hidden features (or contextual embeddings) obtained by its hidden layers. \textbf{Fairness through embedding regularization.} In this approach, we desire that the embeddings $h^{(j)} (\struct{x}_{1:i})$ and $h^{(j)} (\struct{\tilde{x}}_{1:i})$ are close, since the joint distributions $p(x_{i+1}, x_{i+2}, \cdots | \struct{x}_{1:i})$ and $p(x_{i+1}, x_{i+2}, \cdots | \struct{\tilde{x}}_{1:i})$ are determined by these embeddings. We call it the ``embedding regularization'' approach, and define the fairness loss as a distance between the embeddings, denoted as $d(h(\struct{x}_{1:i}), h(\struct{\tilde{x}}_{1:i}))$. We use the cosine distance: \[ d(h(\struct{x}_{1:i}), h(\struct{\tilde{x}}_{1:i})) := 1 - \frac{\bar{h}(\struct{x}_{1:i})^T \bar{h}(\struct{\tilde{x}}_{1:i})}{\| \bar{h}(\struct{x}_{1:i}) \| \| \bar{h}(\struct{\tilde{x}}_{1:i}) \|} \] where $\bar{h}({\struct{x}})$ is set as the average of the last two embedding vectors ${h}^{(L-1)}({\struct{x}})$ and ${h}^{(L)}({\struct{x}})$ based on the following two reasons: First, we want to capture high-level semantics (e.g., sentiments) and embedding in later layers represents higher level semantics \citep{BERT_NLP_pipeline}. \ignore{ where $\bar{h}({\struct{x}}) = \sum_{j=L_s}^L \alpha_j {h}^{(j)}({\struct{x}}), 1 \leq L_s \leq L$ is a ``summary'' of embedding layer features, and $\alpha_j$ is the weight of ${h}^{(j)}({\struct{x}})$.} \ignore{ In our case, since we want to capture high-level semantics (e.g., sentiments), we empirically use the average over the last 2 layers' embedding as the extracted features $\bar{h}({\struct{x}})$ ($L_s=L-2, \alpha_{L-1} = 0.5, \alpha_{L}=0.5$).} Second, we find that averaging too many layers can make the difference between $\bar{h}(\struct{x}_{1:i})$ and $\bar{h}(\struct{\tilde{x}}_{1:i})$ very small, reducing the effectiveness of regularization. An advantage of this method is that it can directly be applied to fairness specifications beyond sentiment, as it encourages $p(x_{i+1}, x_{i+2}, \cdots | \struct{x}_{1:i})$ and $p(x_{i+1}, x_{i+2}, \cdots | \struct{\tilde{x}}_{1:i})$ to be close regardless of the specification measure (e.g., sentiment). \begin{figure*}[ht!] \centering \includegraphics[width=.88\linewidth]{images/model_pipeline_new.pdf}% \caption{Proposed language model debiasing pipeline (the third step in curriculum training).% } \label{fig:model} \end{figure*} Since the embedding regularization method enforces the model's predictions to be similar for the original input $\struct{x}_{1:i}$ and the perturbed input $\struct{\tilde{x}}_{1:i}$ without specification measure information, a potential drawback of this method is that the regularization can be too strong. As we require the hidden representations (and thus the joint probabilities) to be as close as possible, % this can lead to the model learning to ignore the sensitive tokens, and thus generally a reduced dependence on them, as shown in Appendix \ref{sec:negative_example}. Despite being completely fair in this extreme case, model performance may suffer since the generated texts should ideally be contextually conditioned on $x_i$ or $\tilde{x}_i$. \textbf{Fairness through sentiment regularization.} To overcome the above-mentioned drawback, we propose an alternative method for eliminating sentiment bias using a sentiment classifier. Instead of measuring $d(h(\struct{x}_{1:i}), h(\struct{\tilde{x}}_{1:i}))$ directly, we first apply a sentiment classifier $f_{s_h}$ to both $h(\struct{x}_{1:i})$ and $h(\struct{\tilde{x}}_{1:i})$, and measure $d(f_{s_h}(h(\struct{x}_{1:i})), f_{s_h}(h(\struct{\tilde{x}}_{1:i})))$ instead. % Note that the output of $f_{s_h}$ can be multi-dimensional (e.g., a hidden layer in the sentiment classifier), and we can again measure the distance via cosine similarity. Applying the classifier $f_{s_h}$ can be seen as a projection from $h({\struct{x}})$ to a subspace that ideally only contains sentiment-related information. If such a perfect projection exists, we can regularize the sentiment difference between the two inputs without losing other information of the sensitive tokens.~% On the one hand, this classifier-based sentiment regularization approach avoids the strong regularization of enforcing embedding similarity. % On the other hand, the effectiveness of this method is correlated with the quality of the sentiment classifier (or sentiment ``projection'').\footnote{We use a sentiment classifier as a proxy to measure sentiment scores/biases in this paper. The classifier itself might not be perfect and might exhibit some biases; for this reason we compare several alternatives. % } ~The detailed implementation of $f_{s_h}$ is introduced in Appendix \ref{sec:additional_details}. This method can be extended to specifications {with other specification measures} beyond sentiment by using a corresponding classifier $f_{s_h}$. \iffalse \textbf{Self-supervision.}\pscomment{Note not working yet.} Empirically, we observe that the proposed embedding regularization and sentiment regularization require careful tuning of regularization parameter. When the regularization is too small, there is less effect in reducing the biases. On the other hand, when the regularization is too large, we observe the model could generate non-attribute related information. Hence, we further investigate using a self-supervision loss. Suppose there are $|\mathcal{A}|=K$ attributes, we learn a neural network $f_{ss}$ using the hidden states $h(x_{1:i-1}, a_i)$ to classify class the attribute $a_i$ among $K$ classes. Similarly we also learn the self-supervision using the altered hidden states $h(x_{1:i-1}, \tilde{a}_i)$ to classify the attribute $\tilde{a}_i$. We then add the self-supervision cross-entropy loss as another term in the regularization. \fi \textbf{Implementation: Three-step curriculum training.} We use a three-step curriculum training schema. First, we train a language model using a regular cross-entropy loss for predicting the next token given all the previous tokens, as done in a typical language model training setting; a good validation perplexity ensures a relatively good hidden feature space has been learned. Second, using this language model, we train a sentiment classifier $f_{s_h}$ (e.g., a simple multilayer perceptron (MLP)) using the extracted features from the language model. Since sentiment labels are generally unavailable for a large-scale corpus, we label the training data with the Google Cloud sentiment API\footnote{https://cloud.google.com/natural-language/} and train a sentiment classifier on the data with high magnitude. Third, with the fixed $f_{s_h}$ from the previous step, we continue training on the subset of the original language model training set that contains any of the sensitive tokens, with an additional fairness loss $\mathcal{L}_{\text{fairness}}$ based on our ``embedding regularization'' or ``sentiment regularization'' methods with a regularization parameter $\lambda$. Meanwhile the language model is also trained on the regular cross-entropy loss ($\mathcal{L}_{\text{LM}}$) on predicting the next token of the unperturbed input $\struct{x}$. Concretely, the loss function for an input sequence $\struct{x}$ during the third step is: \[ \mathcal{L}(\struct{x}) = \mathcal{L}_{\text{LM}} (\struct{x}) + \lambda \cdot \mathcal{L}_{\text{fairness}}(h(\struct{x}_{1:i}), h(\struct{\tilde{x}}_{1:i})) \] We refer to this third step as the ``debiasing step'', as illustrated in Figure~\ref{fig:model}. Note that we do not use any template at any step of training. \iffalse We will evaluate both the ``embedding regularization'', ``sentiment regularization'' approaches in our experiments. Our debiasing pipeline is shown in in Figure~\ref{fig:model}. The fairness loss is based on ``embedding regularization'', ``sentiment regularization'', or/and ``self-supervision'' loss with a specified regularization strength, and the language model is trained with both fairness loss and the regular negative log-likelihood (NLL) or cross-entropy loss on predicting the next token. Thoughout the experiments. we start debiasing the language model from a pre-trained large-scale language model. \fi \section{Background \& Related Work} \ignore{ \paragraph{Language models.} Given an article $\boldsymbol{x}$ composed of $n$ tokens $(x_1, \cdots, x_n)$, a language model estimates the probability $p(\boldsymbol{x})$ of $\boldsymbol{x}$ occurring in natural language under the assumption that the joint probability factorizes over the tokens as follows: \[ p(\struct{x}) = \prod_{i=1}^n p(x_i | x_{1}, \cdots, x_{i-1}) = \prod_{i=1}^n p(x_i | \boldsymbol{x}_{1:i-1}) \] where the prefix $\struct{x}_{1:i-1} := (x_1, \cdots, x_{i-1})$ for convenience. Once a language model is learned, the model can be used to generate sequences that capture long-range dependencies \citep{graves2013generating}. By using the conditional probability $p(x_i | \struct{x}_{1:i-1})$, we sample the next token $x_i$ given a prefix (or conditioning inputs) $\struct{x}_{1:i-1}$. Then we can iteratively use the generated token $x_i$ along with the previous prompt as the conditioning inputs to generate the next token $x_{i+1}$ using $p(x_{i+1} | \struct{x}_{1:i})$. We use Transformer-based models \citep{vaswani2017attention} to learn the probability $p(x_i | \struct{x}_{1:i-1})$, which has been demonstrated to scale to large self-supervised models with outstanding performance in generation quality and representation learning, including BERT~\citep{devlin2018bert}, GPT-2~\citep{radford2019language}, MT-DNN~\citep{liu2019multi}, XLNet~\citep{yang2019xlnet} and many others. } \paragraph{Bias in natural language processing systems.} Besides learning to favor the language of the authors' demographic group \citep{hovy2015tagging}, NLP models can pick up on a variety of cultural associations and undesirable social biases~\citep{Caliskan2017Semantics}. Systematic imbalances were observed across NLP tasks, such as gender bias in coreference resolution \citep{zhao2018gender,rudinger2018gender}, visual semantic role labeling \citep{zhao2017men}, image captioning \citep{anne2018women}, and demographic biases in language generation~\citep{sheng-etal-2019-woman}, text classification \citep{Dixon2018Measuring,Garg2019Counterfactual}. Concretely in sentiment analysis, \citet{kiritchenko-mohammad-2018-examining} found systematic biases with respect to race and gender across more than 200 systems. \ignorespacelimit{ For word embeddings, occupational gender bias has been identified and addressed by measuring projections onto linear gender-related subspaces of word representations \citep{Bolukbasi2016Man,Lemoine2018Mitigating,zhao2018learning,bordia2019identifying}. \citet{gonen2019lipstick} however pointed out limitations to this approach: bias in word embeddings may appear indirectly in other ways, even after minimizing linear projections onto gender-related subspaces. } \paragraph{Mitigating bias in language models.} Rather than debiasing word embeddings, \citet{Lu2018Gender} proposed counterfactual data augmentation as a remedy to occupation-specific gender biases, and found that it can much better retain model performance than debiasing word embeddings, especially in language modeling. \citet{zhao2019gender} and \citet{basta2019evaluating} demonstrated gender bias in pretrained language modeling representations (ELMo), which translates into downstream tasks, but did not consider the language generated by the ELMo language model. \citet{bordia2019identifying}, as well as \citet{qian2019reducing} identified biases in a language modeling context and propose regularization strategies of generating certain words (e.g., ``doctor'') with differently gendered inputs. % In contrast to these prior works on mitigating gender biases of language models based on the probabilities of generating certain words (such as occupation ratios), we probe texts generated by language models using a sentiment analysis system, similar to \citet{sheng-etal-2019-woman}. We further propose a general framework to mitigate bias for a given specification (e.g., fairness w.r.t. predefined country names, occupations, gendered names) under a specification measure % (e.g., sentiment, regard, etc.). Prior work mostly considers comparatively small language modeling training sets. In contrast, we investigate bias in Transformer-based models with a similar number of parameters (708 million parameters) to GPT-2~\cite{gpt2_6months} trained on English news articles from WMT-19 (40GB of text) and WikiText-103~\citep{merity2016pointer}. \paragraph{Fairness.} \ignore{ A fundamental group fairness definition is ``equality of odds'', which requires false positive and false negative prediction rates to be equal across demographic subgroups \citep{hardt16equality}. However, this definition of group fairness can be superficially satisfied through post-processing methods at a potential cost on individual fairness, which requires similar individuals to be treated similarly \citep{dwork12fairness}, as well as other statistical fairness metrics. Furthermore, ignoring the data generating causal graph of the problem may lead to ``corrective discrimination'' (i.e., discrimination caused by the very procedure to enforce statistical fairness criteria). } \ignorespacelimit{ Popular statistical fairness criteria often aim at achieving individual fairness ~\citep{dwork12fairness} or group fairness \citep{hardt16equality} goals. In our problem setting, we consider counterfactual fairness~\cite{Garg2019Counterfactual} based on the causal graph representing the language model and sentiment classifier. % We aim to achieve counterfactual fairness by debiasing the latent representation of inputs in the language models, contributing to a family of methods to learn fair representations \citep{beutel17data} and enforcing independence between sensitive attributes and prediction outputs \citep{calders09building, Lemoine2018Mitigating, jiang2019}. } Popular statistical fairness criteria often aim at achieving individual fairness ~\citep{dwork12fairness} or group fairness \citep{hardt16equality} goals. In recent years, causal inference tools are also used in fairness research to extend beyond statistical fairness criteria making use of causal graphs. Similar to individual fairness, which requires similar individuals to be treated similarly~\citep{dwork12fairness}, counterfactual fairness requires the same model predictions before and after intervention on sensitive attributes in data-generating causal graphs \citep{kusner17counterfactual, kilbertus2017, chiappa19path, chiappa19causal}. In our problem setting, we deviate from the counterfactual fairness works above by considering counterfactual fairness~\citep{Garg2019Counterfactual} based on a simple causal graph representing the language model instead of the data-generating process. We aim towards counterfactual fairness by debiasing the latent representation of inputs in the language models, contributing to a family of methods to learn fair representations \citep{beutel17data, zemel13learning, creager2019, edwards16censoring, louizos16fair} and enforcing independence between sensitive attributes and prediction outputs \citep{calders09building, Lemoine2018Mitigating, jiang2019, chiappa20general}. \ignore{ A fundamental group fairness definition is ``equality of odds'', which requires false positive and false negative prediction rates to be equal across demographic subgroups \citep{hardt16equality}. However, this definition of group fairness can be superficially satisfied through post-processing methods at a potential cost on individual fairness, which requires similar individuals to be treated similarly \citep{dwork12fairness}, as well as other statistical fairness metrics. Furthermore, ignoring the data generating causal graph of the problem may lead to ``corrective discrimination'' (i.e., discrimination caused by the very procedure to enforce statistical fairness criteria). Causal inference tools are often used in fairness research to deal with the problems above that may occur in satisfying statistical fairness criteria. Similar to individual fairness, counterfactual fairness requires similar model predictions before and after intervention on sensitive attributes in data generating causal graphs \citep{kusner17counterfactual, kilbertus2017}. In our problem setting, we consider the counterfactual fairness goal using a causal graph representing the text generation model with input features, latent features, model outputs and predictions as nodes of the graph. We aim towards counterfactual fairness by de-biasing the learned representation of inputs in the latent space of the text generative model, contributing to a family of methods to learn fair representations \citep{beutel17data, zemel13learning, creager2019, edwards16censoring, louizos16fair} and enforcing independence between sensitive attributes and prediction outputs \citep{calders09building, Lemoine2018Mitigating, jiang2019}. }
{ "attr-fineweb-edu": 2.052734, "attr-cc_en_topic": 0, "domain": "arxiv" }
BkiUdic25V5jD6RgAq9g
\section{Introduction} This article is a continuation of research presented by \citet{DSSbici1}, \citet{DSSbici2}, \citet{BSSbici6} and, in particular, by \citet{SSSbici3}. Herein, using a mathematical model, we examine the power required on a velodrome, for an individual pursuit. It can be also applied to other individual races, such the kilometer time trial and the hour record; the essential quality is the constancy of effort. In each case, the opposing forces consist of air resistance, rolling resistance, lateral friction and drivetrain resistance. We consider a velodrome with its straights, circular arcs, and connecting transition curves, whose inclusion\,---\,while presenting a certain challenge, and neglected in previous studies \citep[e.g.,][]{SSSbici3}\,---\,increases the empirical adequacy of the model. Herein, a model is empirically adequate if it accounts for measurements~\citep{Fraassen}. We begin this article by expressing mathematically the geometry of both the black lin \footnote{The circumference along the inner edge of this five-centimetre-wide line\,---\,also known as the measurement line and the datum line\,---\,corresponds to the official length of the track.} and the inclination of the track. Our expressions are accurate analogies for the common geometry of modern $250$\,-metre velodromes~(Mehdi Kordi, {\it pers.~comm.}, 2020). We proceed to formulate an expression for power expended against dissipative forces, which we examine for both the constant-cadence and constant-power cases. We examine their empirical adequacy, and conclude by discussing the results. In the appendices, we consider, {\it a posteriori}, changes in the kinetic and potential energy, for both the constant-cadence and constant-power cases, as well as the explicite measurements: force and cadence. \section{Track} \label{sec:Formulation} \subsection{Black-line parameterization} \label{sub:Track} To model the required power for an individual pursuit of a cyclist who follows the black line, in a constant aerodynamic position, as illustrated in Figure~\ref{fig:FigBlackLine}, we define this line by three parameters. \begin{figure}[h] \centering \includegraphics[scale=0.35]{FigBlackLine} \caption{\small A constant aerodynamic position along the black line} \label{fig:FigBlackLine} \end{figure} \begin{itemize} \item[--] $L_s$\,: the half-length of the straight \item[--] $L_t$\,: the length of the transition curve between the straight and the circular arc \item[--] $L_a$\,: the half-length of the circular arc \end{itemize} The length of the track is $S=4(L_s+L_t+L_a)$\,. In Figure~\ref{fig:FigTrack}, we show a quarter of a black line for $L_s=19$\,m\,, $L_t=13.5$\,m and $L_a=30$\,m\,, which results in $S=250$\,m\,. \begin{figure}[h] \centering \includegraphics[scale=0.5]{FigTrack.pdf} \caption{\small A quarter of the black line for a $250$\,-metre track} \label{fig:FigTrack} \end{figure} This curve has continuous derivative up to order two; it is a $C^2$ curve, whose curvature is continuous. To formulate, in Cartesian coordinates, the curve shown in Figure~\ref{fig:FigTrack}, we consider the following. \begin{itemize} \item[--] The straight, \begin{equation*} y_1=0\,,\qquad0\leqslant x\leqslant a\,, \end{equation*} shown in gray, where $a:=L_s$\,. \item[--] The transition, shown in black\,---\,following a standard design practice\,---\,we take to be an Euler spiral, which can be parameterized by Fresnel integrals, \begin{equation*} x_2(\varsigma)=a+\sqrt{\frac{2}{A}}\int\limits_0^{\varsigma\sqrt{\!\frac{A}{2}}}\!\!\!\!\cos\!\left(x^2\right)\,{\rm d}x \end{equation*} and \begin{equation*} y_2(\varsigma)=\sqrt{\frac{2}{A}}\int\limits_0^{\varsigma\sqrt{\!\frac{A}{2}}}\!\!\!\!\sin\!\left(x^2\right)\,{\rm d}x\,, \end{equation*} with $A>0$ to be determined; herein, $\varsigma$ is a curve parameter. Since the arclength differential,~${\rm d}s$\,, is such that \begin{align*} {\rm d}s&=\sqrt{x_2'(\varsigma)^2+y_2'(\varsigma)^2}\,{\rm d}\varsigma\\ &=\sqrt{\cos^2\left(\dfrac{A\varsigma^2}{2}\right)+\sin^2\left(\dfrac{A\varsigma^2}{2}\right)}\,{\rm d}\varsigma\\ &={\rm d}\varsigma\,, \end{align*} we write the transition curve as \begin{equation*} (x_2(s),y_2(s)), \quad 0\leqslant s\leqslant b:=L_t\,. \end{equation*} \item[--] The circular arc, shown in gray, whose centre is $(c_1,c_2)$ and whose radius is $R$\,, with $c_1$\,, $c_2$ and $R$ to be determined. Since its arclength is specified to be $c:=L_a,$ we may parameterize the quarter circle by \begin{equation} \label{eq:x3} x_3(\theta)=c_1+R\cos(\theta) \end{equation} and \begin{equation} \label{eq:y3} y_3(\theta)=c_2+R\sin(\theta)\,, \end{equation} where $-\theta_0\leqslant\theta\leqslant 0$\,, for $\theta_0:=c/R$\,. The centre of the circle is shown as a black dot in Figure~\ref{fig:FigTrack}. \end{itemize} We wish to connect these three curve segments so that the resulting global curve is continuous along with its first and second derivatives. This ensures that the curvature of the track is also continuous. To do so, let us consider the connection between the straight and the Euler spiral. Herein, $x_2(0)=a$ and $y_2(0)=0$\,, so the spiral connects continuously to the end of the straight at $(a,0)$\,. Also, at $(a,0)$\,, \begin{equation*} \frac{{\rm d}y}{{\rm d}x}=\frac{y_2'(0)}{x_2'(0)}=\frac{0}{1}=0\,, \end{equation*} which matches the derivative of the straight line. Furthermore, the second derivatives match, since \begin{equation*} \frac{{\rm d}^2y}{{\rm d}x^2}=\frac{y''_2(0)x_2'(0)-y'_2(0)x_2''(0)}{(x_2'(0))^2}=0\,, \end{equation*} which follows, for any $A>0$\,, from \begin{equation} \label{eq:FirstDer} x_2'(\varsigma)=\cos^2\left(\dfrac{A\,\varsigma^2}{2}\right)\,, \quad y_2'(\varsigma)=\sin^2\left(\dfrac{A\,\varsigma^2}{2}\right) \end{equation} and \begin{equation*} x_2''(\varsigma)=-A\,\varsigma\sin\left(\dfrac{A\,\varsigma^2}{2}\right)\,, \quad y_2''(\varsigma)=A\,\varsigma\cos\left(\dfrac{A\,\varsigma^2}{2}\right)\,. \end{equation*} Let us consider the connection between the Euler spiral and the arc of the circle. In order that these connect continuously, \begin{equation*} \big(x_2(b),y_2(b)\big)=\big(x_3(-\theta_0),y_3(-\theta_0)\big)\,, \end{equation*} we require \begin{equation} \label{eq:Cont1} x_2(b)=c_1+R\cos(\theta_0)\,\,\iff\,\,c_1=x_2(b)-R\cos\!\left(\dfrac{c}{R}\right) \end{equation} and \begin{equation} \label{eq:Cont2} y_2(b)=c_2-R\sin(\theta_0)\,\,\iff\,\, c_2=y_2(b)+R\sin\!\left(\dfrac{c}{R}\right)\,. \end{equation} For the tangents to connect continuously, we invoke expression~(\ref{eq:FirstDer}) to write \begin{equation*} (x_2'(b),y_2'(b))=\left(\cos\left(\dfrac{A\,b^2}{2}\right),\,\sin\left(\dfrac{A\,b^2}{2}\right)\right)\,. \end{equation*} Following expressions~(\ref{eq:x3}) and (\ref{eq:y3}), we obtain \begin{equation*} \big(x_3'(-\theta_0),y_3'(-\theta_0)\big)=\big(R\sin(\theta_0),R\cos(\theta_0)\big)\,, \end{equation*} respectively. Matching the unit tangent vectors results in \begin{equation} \label{eq:tangents} \cos\left(\dfrac{A\,b^2}{2}\right)=\sin\!\left(\dfrac{c}{R}\right)\,,\quad \sin\left(\dfrac{A\,b^2}{2}\right)=\cos\!\left(\dfrac{c}{R}\right)\,. \end{equation} For the second derivative, it is equivalent\,---\,and easier\,---\,to match the curvature. For the Euler spiral, \begin{align*} \kappa_2(s)&=\frac{x_2'(s)y_2''(s)-y_2'(s)x_2''(s)} {\Big(\big(x_2'(s)\big)^2+\big(y_2'(s)\big)^2\Big)^{\frac{3}{2}}}\\ &=A\,s\cos^2\left(\dfrac{A\,s^2}{2}\right)+A\,s\sin^2\left(\dfrac{A\,s^2}{2}\right)\\ &=A\,s\,, \end{align*} which is indeed the defining characteristic of an Euler spiral: the curvature grows linearly in the arclength. Hence, to match the curvature of the circle at the connection, we require \begin{equation*} A\,b=\frac{1}{R} \,\,\iff\,\,A=\frac{1}{b\,R}\,. \end{equation*} Substituting this value of $A$ in equations~(\ref{eq:tangents}), we obtain \begin{align*} \cos\!\left(\dfrac{b}{2R}\right)&=\sin\!\left(\dfrac{c}{R}\right)\,,\quad \sin\!\left(\dfrac{b}{2R}\right)=\cos\!\left(\dfrac{c}{R}\right)\\ &\iff\dfrac{b}{2R}=\dfrac{\pi}{2}-\dfrac{c}{R}\\ &\iff R=\frac{b+2c}{\pi}. \end{align*} It follows that \begin{equation*} A=\frac{1}{b\,R}=\frac{\pi}{b\,(b+2c)}\,; \end{equation*} hence, the continuity condition stated in expressions~(\ref{eq:Cont1}) and (\ref{eq:Cont2}) determines the centre of the circle,~$(c_1,c_2)$\,. For the case shown in Figure~\ref{fig:FigTrack}, the numerical values are~$A=3.1661\times10^{-3}$\,m${}^{-2}$, $R=23.3958$\,m\,, $c_1=25.7313$\,m and $c_2=23.7194$\,m\,. The complete track\,---\,with its centre at the origin\,,~$(0,0)$\,---\,is shown in Figure~\ref{fig:FigComplete}. \begin{figure}[h] \centering \includegraphics[scale=0.5]{FigCompleteTrack.pdf} \caption{\small Black line of $250$\,-metre track} \label{fig:FigComplete} \end{figure} The corresponding curvature is shown in Figure~\ref{fig:FigCurvature}. Note that the curvature transitions linearly from the constant value of straight,~$\kappa=0$\,, to the constant value of the circular arc,~$\kappa=1/R$\,. \begin{figure}[h] \centering \includegraphics[scale=0.5]{FigTrackCurvature.pdf} \caption{\small Curvature of the black line,~$\kappa$\,, as a function of distance,~$s$\,, with a linear transition between the zero curvature of the straight and the $1/R$ curvature of the circular arc} \label{fig:FigCurvature} \end{figure} \subsection{Track-inclination angle} \begin{figure}[h] \centering \includegraphics[scale=0.5]{FigInclinationAngle.pdf} \caption{\small Track inclination,~$\theta$\,, as a function of the black-line distance,~$s$} \label{fig:FigAngle} \end{figure} There are many possibilities to model the track inclination angle. We choose a trigonometric formula in terms of arclength, which is a good analogy of an actual $250$\,-metre velodrome. The minimum inclination of $13^\circ$ corresponds to the midpoint of the straight, and the maximum of $44^\circ$ to the apex of the circular arc. For a track of length $S$\,, \begin{equation} \label{eq:theta} \theta(s)=28.5-15.5\cos\!\left(\frac{4\pi}{S}s\right)\,; \end{equation} $s=0$ refers to the midpoint of the lower straight, in Figure~\ref{fig:FigComplete}, and the track is oriented in the counterclockwise direction. Figure \ref{fig:FigAngle} shows this inclination for $S=250$\,m\,. \section{Instantaneous power} \label{sec:InstPower} A mathematical model to account for the power required to propel a bicycle is based on \citep[e.g.,][]{DSSbici1} \begin{equation} \label{eq:BikePower} P=F\,V\,, \end{equation} where $F$ stands for the magnitude of forces opposing the motion and $V$ for speed. Herein, we model the rider as undergoing instantaneous circular motion, in rotational equilibrium about the line of contact of the tires with the ground. Following \citet[Section~2]{SSSbici3}, in accordance with Figure~\ref{fig:FigCentFric}, along the black line of a velodrome, in windless conditions, \begin{subequations} \label{eq:power} \begin{align} \nonumber P&=\\ &\dfrac{1}{1-\lambda}\,\,\Bigg\{\label{eq:modelO}\\ &\left.\left.\Bigg({\rm C_{rr}}\underbrace{\overbrace{\,m\,g\,}^{F_g}(\sin\theta\tan\vartheta+\cos\theta)}_N\cos\theta +{\rm C_{sr}}\Bigg|\underbrace{\overbrace{\,m\,g\,}^{F_g}\frac{\sin(\theta-\vartheta)}{\cos\vartheta}}_{F_f}\Bigg|\sin\theta\Bigg)\,v \right.\right.\label{eq:modelB}\\ &+\,\,\tfrac{1}{2}\,{\rm C_{d}A}\,\rho\,V^3\Bigg\}\label{eq:modelC}\,, \end{align} \end{subequations} where $m$ is the mass of the cyclist and the bicycle, $g$ is the acceleration due to gravity, $\theta$ is the track-inclination angle, $\vartheta$ is the bicycle-cyclist lean angle, $\rm C_{rr}$ is the rolling-resistance coefficient, $\rm C_{sr}$ is the coefficient of the lateral friction, $\rm C_{d}A$ is the air-resistance coefficient, $\rho$ is the air density, $\lambda$ is the drivetrain-resistance coefficient. Herein, $v$ is the speed at which the contact point of the rotating wheels moves along the track \citep[Appendix~B]{DSSbici1}, which we commonly consider as coinciding with the black-line speed. $V$ is the centre-of-mass speed. Since the lateral friction is a dissipative force, it does negative work, and the work done against it\,---\,as well as the power\,---\,are positive. For this reason, in expression~(\ref{eq:modelB}), we consider the magnitude,~$\big|{\,\,}\big|$\,. \begin{figure}[h] \centering \includegraphics[scale=0.8]{FigNonIner.pdf} \caption{\small Force diagram} \label{fig:FigCentFric} \end{figure} For reasons discussed by \citet[Appendix~B.2]{DSSbici2}, in expression~(\ref{eq:power}), we assume the steadiness of effort, which\,---\,following an initial acceleration\,---\,is consistent with a steady pace of an individual pursuit, as presented in Section~\ref{sec:Adequacy}, below. Formally, this assumption corresponds to setting the acceleration,~$a$\,, to zero in \citet[expression~(1)]{SSSbici3}. Herein, the acceleration refers to the change of the centre-of-mass speed. This speed is nearly constant if the power is constant, which can be viewed as a quantification of the cyclist's effort. In other words, the force\,---\,and hence the power required to accelerate the bicycle-cyclist system\,---\,is associated mainly with the change of the centre-of-mass speed, not with the change of the black-line speed. To gain an insight into expression~(\ref{eq:power}), let us consider a few special cases. If $\theta=\vartheta=0$\,, \begin{equation} \label{eq:straight} P=\underbrace{\dfrac{{\rm C_{rr}}\,m\,g+\tfrac{1}{2}\,{\rm C_{d}A}\,\rho\,V^2}{1-\lambda}}_F\,V\,, \end{equation} where\,---\,as expected for a flat, straight road\,---\,$v\equiv V$\,. Also, on a velodrome, along the straights, $\vartheta=0$\, and expression~(\ref{eq:modelB}) becomes \begin{equation*} \left({\rm C_{rr}}\,m\,g\,\cos^2\theta +{\rm C_{sr}}\,m\,g\,\sin^2\theta\right)\,V\,. \end{equation*} If, along the curves, $\vartheta=\theta$\,, the second summand of expression~(\ref{eq:modelB}) is zero, as expected. Let us return to expression~(\ref{eq:power}). Therein, $\theta$ is given by expression~(\ref{eq:theta}). The lean angle is \citep[Appendix~A]{SSSbici3} \begin{equation} \label{eq:LeanAngle} \vartheta=\arctan\dfrac{V^2}{g\,r_{\rm\scriptscriptstyle CoM}}\,, \end{equation} where $r_{\rm\scriptscriptstyle CoM}$ is the centre-of-mass radius, and\,---\,along the curves, at any instant\,---\,the centre-of-mass speed is \begin{equation} \label{eq:vV} V=v\,\dfrac{\overbrace{(R-h\sin\vartheta)}^{\displaystyle r_{\rm\scriptscriptstyle CoM}}}{R} =v\,\left(1-\dfrac{h\,\sin\vartheta}{R}\right)\,, \end{equation} where $R$ is the radius discussed in Section~\ref{sub:Track} and $h$ is the centre-of-mass height. Along the straights, the black-line speed is equivalent to the centre-of-mass speed, $v=V$\,. As expected, $V=v$ if $h=0$\,, $\vartheta=0$ or $R=\infty$\,. Invoking expressions~(\ref{eq:LeanAngle}) and (\ref{eq:vV}), we neglect the vertical variation of the centre of mass and, hence, assume that the centre-of-mass trajectory is contained in a horizontal plane, where\,---\,in accordance with the track geometry\,---\,this plane is parallel to the plane that contains the black line. Accounting for the vertical motion of the centre of mass would mean allowing for a nonhorizontal centripetal force and including the work done in raising the centre of mass. \section{Numerical examples} \label{sec:NumEx} \subsection{Model-parameter values} \label{sub:ModPar} For expressions~(\ref{eq:power}), (\ref{eq:LeanAngle}) and (\ref{eq:vV}), we consider a velodrome discussed in Section~\ref{sec:Formulation}, and let $R=23.3958$\,m\,. For the bicycle-cyclist system, we assume, $h=1.2$\,m\,, $m=84$\,kg\,, ${\rm C_{d}A}=0.2$\,m${}^2$\,, ${\rm C_{rr}}=0.002$\,, ${\rm C_{sr}}=0.003$ and $\lambda=0.02$\,. For the external conditions, $g=9.81$\,m/s${}^2$ and $\rho=1.225$\,kg/m${}^3$\,. \subsection{Constant cadence} \label{sub:ConstCad} Let the black-line speed be constant,~$v=16.7$\,m/s\,, which is tantamount to the constancy of cadence. As discussed in Section~\ref{sec:InstPower}, the assumption of a constant black-line speed means neglecting the acceleration of the centre of mass. The lean angle and the centre-of-mass speed, as functions of distance\,---\,obtained by numerically and simultaneously solving equations~(\ref{eq:LeanAngle}) and (\ref{eq:vV}), at each point of a discretized model of the track\,---\,are shown in Figures~\ref{fig:FigLeanAngle} and \ref{fig:FigCoMSpeed}, respectively. The average centre-of-mass speed, per lap is~$\overline V=16.3329$\,m/s\,. Changes of $V$\,, shown in Figure~\ref{fig:FigCoMSpeed}, result from the lean angle. Along the straights, $\vartheta=0\implies V=v$\,. Along the curves, since $\vartheta\neq0$\,, the centre-of-mass travels along a shorter path; hence, $V<v$\,. Thus, assuming a constant black-line speed implies a variable centre-of-mass speed and, hence, an acceleration and deceleration, even though ${\rm d}V/{\rm d}t$\,, where $t$ stands for time, is not included explicitly in expression~(\ref{eq:power}). Examining Figure~\ref{fig:FigCoMSpeed}, we conclude that ${\rm d}V/{\rm d}t\neq0$ along the transition curves only. The power\,---\,obtained by evaluating expression~(\ref{eq:power}), at each point along the track\,---\,is shown in Figure~\ref{fig:FigPower}. The average power, per lap, is $\overline P=580.5941$\,W\,. Since the black-line speed is constant, this is both the arclength average and the temporal average. \begin{figure}[h] \centering \includegraphics[scale=0.5]{FigLeanAngle.pdf} \caption{\small Lean angle,~$\vartheta$\,, as a function of the black-line distance,~$s$\,, for constant cadence} \label{fig:FigLeanAngle} \end{figure} \begin{figure}[h] \centering \includegraphics[scale=0.5]{FigCoMSpeed.pdf} \caption{\small Centre-of-mass speed,~$V$\,, as a function of the black-line distance,~$s$\,, for constant cadence} \label{fig:FigCoMSpeed} \end{figure} \begin{figure}[h] \centering \includegraphics[scale=0.5]{FigPower.pdf} \caption{\small Power,~$P$\,, as a function of the black-line distance,~$s$\,, for constant cadence} \label{fig:FigPower} \end{figure} \begin{figure}[h] \centering \includegraphics[scale=0.5]{FigAngleDiff.pdf} \caption{\small $\theta-\vartheta$\,, as a function of the black-line distance,~$s$\,, for constant cadence} \label{fig:FigAngleDiff} \end{figure} \begin{figure}[h] \centering \includegraphics[scale=0.5]{FigPowerSummands.pdf} \caption{\small Power to overcome air resistance, rolling resistance and lateral friction} \label{fig:FigPowerSummands} \end{figure} Examining Figure~\ref{fig:FigPower}, we see the decrease of power required to maintain the same black-line speed along the curve. This is due to both the decrease of the centre-of-mass speed, which results in a smaller value of term~(\ref{eq:modelC}), and the decrease of a difference between the track-inclination angle and the lean angle, shown in Figure~\ref{fig:FigAngleDiff}, which results in a smaller value of the second summand of term~(\ref{eq:modelB}). The argument presented in the previous paragraph leads to the following conjecture. The most efficient track is circular with $\theta=\vartheta$\,, which would correspond to the dashed line in Figure~\ref{fig:FigAngleDiff}. However, this is not possible, since\,---\,according to the regulations of the Union Cycliste Internationale\,---\,the inner edge of the track shall consist of two curves connected by two parallel straight lines. Hence, the optimization is constrained by the length of the straights. Examining Figure~\ref{fig:FigPowerSummands}, where\,---\,in accordance with expression~(\ref{eq:power})\,---\,we distinguish among the power used to overcome the air resistance, the rolling resistance and the lateral friction, we can quantify their effects. The first has the most effect; the last has the least effect, and is zero at points for which $\theta=\vartheta$\,, which corresponds to the zero crossings in Figure~\ref{fig:FigAngleDiff}. Let us comment on potential simplifications of a model. If we assume a straight flat course\,---\,which is tantamount to neglecting the lean and inclination angles\,---\,we obtain, following expression~(\ref{eq:straight}), $\overline P\approx 610$\,W\,. If we consider an oval track but ignore the transitions and assume that the straights are flat and the semicircular segments, whose radius is $23$\,m\,, have a constant inclination of $43^\circ$, we obtain \citep[expression~(13)]{SSSbici3} $\overline P\approx 563$\,W\,. In both cases, there is a significant discrepancy with the power obtained from the model discussed herein,~$\overline P=573.6080$\,W\,. To conclude this section, let us calculate the work per lap corresponding to the model discussed herein. The work performed during a time interval, $t_2-t_1$\,, is \begin{equation*} W=\int\limits_{t_1}^{t_2}\!P\,{\rm d}t =\dfrac{1}{v}\int\limits_{s_1}^{s_2}\!P\!\underbrace{\,v\,{\rm d}t}_{{\rm d}s\,}\,, \end{equation*} where the black-line speed,~$v$\,, is constant and, hence, ${\rm d}s$ is an arclength distance along the black line. Considering the average power per lap, we write \begin{equation} \label{eq:WorkConstCad} W=\underbrace{\,\dfrac{S}{v}\,}_{t_\circlearrowleft}\,\underbrace{\dfrac{\int\limits_0^S\!P\,{\rm d}s}{S}}_{\overline P}=\overline P\,t_\circlearrowleft\,. \end{equation} Given $\overline P=580.5941$\,W and $t_\circlearrowleft=14.9701$\,s\,, we obtain $W=8691.5284$\,J\,. \subsection{Constant power} \label{sub:ConstPower} Let us solve numerically the system of nonlinear equations given by expressions~(\ref{eq:power}), (\ref{eq:LeanAngle}) and (\ref{eq:vV}), to find the lean angle as well as both speeds, $v$ and $V$\,, at each point of a discretized model of the track\,, under the assumption of constant power. In accordance with a discussion in Section~\ref{sec:InstPower}, such an assumption is more consistent with the steadiness of effort than the assumption of a constant cadence examined in Section~\ref{sub:ConstCad}. As in Section~\ref{sub:ConstCad}, we let $R=23.3958$\,m\,, $h=1.2$\,m\,, $m=84$\,kg\,, ${\rm C_{d}A}=0.2$\,m${}^2$\,, ${\rm C_{rr}}=0.002$\,, ${\rm C_{sr}}=0.003$\,, $\lambda=0.02$\,, $g=9.81$\,m/s${}^2$ and $\rho=1.225$\,kg/m${}^3$\,. However, in contrast to Section~\ref{sub:ConstCad}, we allow the black-line speed to vary, and set the power to be the average obtained in that section, $P=580.5941$\,W\,. Stating expression~(\ref{eq:vV}), as \begin{equation*} v=V\dfrac{R}{R-h\sin\vartheta}\,, \end{equation*} we write expression~(\ref{eq:power}) as \begin{align} \label{eq:PConst} P&=\\ \nonumber&\dfrac{V}{1-\lambda}\,\,\Bigg\{\\ \nonumber&\left.\left.\Bigg({\rm C_{rr}}\,m\,g\,(\sin\theta\tan\vartheta+\cos\theta)\cos\theta +{\rm C_{sr}}\Bigg|\,m\,g\,\frac{\sin(\theta-\vartheta)}{\cos\vartheta}\Bigg|\sin\theta\Bigg)\,\dfrac{R}{R-h\sin\vartheta} \right.\right.\\ \nonumber&+\,\,\tfrac{1}{2}\,{\rm C_{d}A}\,\rho\,V^2\Bigg\}\,, \end{align} and expression~(\ref{eq:LeanAngle}) as \begin{equation} \label{eq:Vvar} \vartheta=\arctan\dfrac{V^2}{g\,(R-h\sin\vartheta)}\,, \end{equation} which\,---\,given $g$\,, $R$ and $h$\,---\,can be solved for $V$ as a function of~$\vartheta$\,. Inserting that solution in expression~(\ref{eq:PConst}), we obtain an equation whose only unknown is~$\vartheta$\,. The difference of the lean angle\,---\,between the case of a constant cadence and a constant power is so small that there is no need to plot it; Figure~\ref{fig:FigLeanAngle} illustrates it accurately. The same is true for the difference between the track-inclination angle and the lean angle, illustrated in Figure~\ref{fig:FigAngleDiff}, as well as for the dominant effect of the air resistance, illustrated in Figure~\ref{fig:FigPowerSummands}. The resulting values of $V$ are shown in Figure~\ref{fig:FigCoMSpeed2}. As expected, in view of the dominant effect of the air resistance, a constancy of $P$ entails only small variations in $V$\,. In comparison to the case discussed in Section~\ref{sub:ConstCad}, the case in question entails lesser accelerations and decelerations of the centre of mass\,---\,note the difference of vertical scale between Figures~\ref{fig:FigCoMSpeed} and \ref{fig:FigCoMSpeed2}\,---\,but the changes of speed are not limited to the transition curves. Even though such changes are not included explicitly in expression~(\ref{eq:power}), a portion of the given power may be accounted for by $m\,V\,{\rm d}V/{\rm d}t$\,, which is associated with accelerations and decelerations. The amount of this portion can be estimated {\it a posteriori}. Since \begin{equation} \label{eq:dK} m\,V\,\dfrac{{\rm d}V}{{\rm d}t}=\dfrac{{\rm d}}{{\rm d}t}\left(\dfrac{1}{2}\,m\,V^2\right)\,, \end{equation} the time integral of the power used for acceleration of the centre of mass is the change of its kinetic energy. Therefore, to include the effect of accelerations, per lap, we need to add the increases in kinetic energy. This is an estimate of the error committed by neglecting accelerations in expression~(\ref{eq:power}) to be quantified, for the constant-cadence and constant-power cases, in Appendix~\ref{sec:Energy}. The values of $v$\,, in accordance with expression~(\ref{eq:vV}), are shown in Figure~\ref{fig:FigBLspeed}, where\,---\,as expected for a constant power\,---\,leaning into the turn entails a significant increase of the black-line speed; note the difference of vertical scale between Figures~\ref{fig:FigCoMSpeed2} and \ref{fig:FigBLspeed}. The averages are $\overline V=16.3316$\,m/s and $\overline v=16.7071$\,m/s\,. These averages are similar to the case of the constant black-line speed averages. Hence, maintaining a constant cadence or a constant power results in nearly the same laptime, namely, $14.9701$\,s and $14.9670$\,s\,, respectively. To conclude this section, let us calculate the corresponding work per lap. The work performed during a time interval, $t_2-t_1$\,, is \begin{equation} \label{eq:WorkConstPow} W=\int\limits_{t_1}^{t_2}\!P\,{\rm d}t=P\!\int\limits_{t_1}^{t_2}\!{\rm d}t=P\,\underbrace{(t_2-t_1)}_{t_\circlearrowleft}=P\,t_\circlearrowleft\,, \end{equation} where, for the second equality sign, we use the constancy of~$P$\,; also, we let the time interval to be a laptime. Thus, given $\overline P=580.5941$\,W and $t_\circlearrowleft=14.9670$\,s\,, we obtain $W=8689.7680$\,J\,. \begin{figure}[h] \centering \includegraphics[scale=0.5]{FigCoMSpeed2.pdf} \caption{\small Centre-of-mass speed,~$V$\,, as a function of the black-line distance,~$s$\,, for constant power} \label{fig:FigCoMSpeed2} \end{figure} \begin{figure}[h] \centering \includegraphics[scale=0.5]{FigBLSpeed.pdf} \caption{\small Black-line speed,~$v$\,, as a function of the black-line distance,~$s$\,, for constant power} \label{fig:FigBLspeed} \end{figure} The empirical adequacy of the assumption of a constant power can be corroborated\,---\,apart from the measured power itself\,---\,by comparing experimental data to measurable quantities entailed by theoretical formulations. The black-line speed,~$v$\,, shown in Figure~\ref{fig:FigBLspeed}, which we take to be tantamount to the wheel speed, appears to be the most reliable quantity. Other quantities\,---\,not measurably directly, such as the centre-of-mass speed and power expended to increase potential energy\,---\,are related to $v$ by equations~(\ref{eq:PConst}) and (\ref{eq:Vvar}). To conclude Sections~\ref{sub:ConstCad} and \ref{sub:ConstPower}, let us state that if---for the latter---we use the power obtained from the latter, we obtain the black-line speed of the former, as expected. \section{Empirical adequacy} \label{sec:Adequacy} To gain an insight into empirical adequacy of the model, let us examine Section~\ref{sec:InstPower}, in the context of measurements~(Mehdi Kordi, {\it pers.~comm.}, 2020). To do so, we use two measured quantities: cadence and force applied to the pedals, both of which are measured by sensors attached to the bicycle. They allow us to calculate power, which is the product of the circumferential pedal speed\,---\,obtained from cadence, given a crank length\,---\,and the force applied to pedals. \begin{figure}[h] \centering \includegraphics[scale=0.35]{FigIPPower} \caption{\small Measured power,~$P$\,, as a function of the pursuit time,~$t$} \label{fig:FigIPPower} \end{figure} The measurements of power, shown in Figure~\ref{fig:FigIPPower}, oscillate about a nearly constant value, except for the initial part, which corresponds to acceleration, and the final part, where the cyclist begins to decelerate. These oscillations are due to the repetition of straights and curves along a lap. In particular, Figure~\ref{fig:FigIPCadence}, below, exhibits a regularity corresponding to thirty-two curves along which the cadence, and\,---\,equivalently\,---\,the wheel speed, reaches a maximum. There are also fluctuations due to measurement errors. A comparison of Figures~\ref{fig:FigIPPower} and \ref{fig:FigIPCadence} illustrates that power is necessarily more error sensitive than cadence, since the cadence itself is used in calculations to obtain power. This extra sensitivity is due to intrinsic difficulties of the measurement of applied force and, herein, to the fact that values are stated at one-second intervals only, which makes them correspond to different points along the pedal rotation \citep[see also][Appendix~A]{DSSbici1}. To diminish this effect, it is common to use a moving average, with a period of several seconds, to obtain the values of power. To use the model to relate power and cadence, let us consider a $4000$\,-metre individual pursuit. The model parameters are $h=1.1~\rm{m}$\,, $m=85.6~\rm{kg}$\,, ${\rm C_{d}A}=0.17~\rm{m^2}$\,, ${\rm C_{rr}}=0.0017$\,, ${\rm C_{sr}}=0.0025$\,, $\lambda=0.02$\,, $g=9.81~\rm{m/s^2}$\,, $\rho=1.17~\rm{kg/m^3}$\,. If we use, as input, $P=488.81~\rm{W}$\,---\,which is the average of values measured over the entire pursuit\,---\,the retrodiction provided by the model results in $\overline{v}=16.86~\rm{m/s}$\,. Let us compare this retrodiction to measurements using the fact that\,---\,for a fixed-wheel drivetrain\,---\,cadence allows us to calculate the bicycle wheel speed. The average of the measured cadence, shown in Figure~\ref{fig:FigIPCadence}, is $k=106.56~\rm rpm$\,, which\,---\,given the gear of $9.00~\rm m$\,, over the pursuit time of $256~\rm s$\,---\,results in a distance of $4092~\rm m$\,. Hence, the average wheel speed is~$15.98~\rm{m/s}$\,.% \footnote{The average wheel speed is distinct from the average black-line speed,~$\overline{v}=15.63~\rm{m/s}$\,.} \begin{figure}[h] \centering \includegraphics[scale=0.35]{FigIPCadence.pdf} \caption{\small Measured cadence,~$k$\,, in revolutions per minute, [rpm], as a function of the pursuit time,~$t$} \label{fig:FigIPCadence} \end{figure} The average values of the retrodicted and measured speeds appear to be sufficiently close to each other to support the empirical adequacy of our model, for the case in which its assumptions, illustrated in Figure~\ref{fig:FigBlackLine}\,---\,namely, a constant aerodynamic position and the trajectory along the black line\,---\,are, broadly speaking, satisfied. Specifically, they are not satisfied on the first lap, during the acceleration. Nor can we expect them to be fully satisfied along the remainder of the pursuit, as illustrated by $4092\,{\rm m}>4000\,{\rm m}$\,, which indicates the deviation from the black-line trajectory. \begin{figure}[h] \centering \includegraphics[scale=0.35]{FigIPShift.pdf} \caption{\small Scaled values of power (black) and cadence (grey) as functions of the pursuit time,~$t$} \label{fig:FigIPShift} \end{figure} Furthermore, Figure~\ref{fig:FigIPShift}, which is a superposition of values scaled from Figures~\ref{fig:FigIPPower} and \ref{fig:FigIPCadence}, illustrates a shift of oscillations between power and cadence. However, Figure~\ref{fig:FigModelShift} does not exhibit any shift. Therein, as input, we use simulated values of power along a lap\,---\,in a manner consistent with the measured power\,---\,as opposed to single value of an average. Thus, according to the model, the power and the black-line speed\,---\,whose pattern within the model for a fixed-wheel drivetrain is the same as for cadence\,---\,exhibit no shift. \begin{figure}[h] \centering \includegraphics[scale=0.5]{FigModelShift.pdf} \caption{\small Scaled values of power (black) and speed (grey) as functions of the black-line distance,~$s$} \label{fig:FigModelShift} \end{figure} The shift observed in Figure~\ref{fig:FigIPShift} could be an effect of measurements for a fixed-wheel drivetrain, since the value at each instant is obtained from the product of measurements of~$f_{\circlearrowright}$\,, which is the force applied to pedals, and~$v_{\circlearrowright}$\,, which is the circumferential speed of the pedals \citep[e.g.,][expression~(1)]{DSSbici1}, \begin{equation} \label{eq:PowerMeter} P=f_{\circlearrowright}\,v_{\circlearrowright}\,. \end{equation} For a fixed-wheel drivetrain, there is a one-to-one relation between $v_{\circlearrowright}$ and the wheel speed. Hence\,---\,in contrast to a free-wheel drivetrain, for which $f_{\circlearrowright}\to0\implies v_{\circlearrowright}\to0$\,---\,the momentum of a launched bicycle-cyclist system might contribute to the value of~$v_{\circlearrowright}$\,, which is tantamount to contributing to the value of cadence. This issue is addressed in Appendix~\ref{sec:Fixed}. Nevertheless, the agreement between the average values of the retrodiction and measurements appears to be satisfactory. Notably, excluding the first and last laps would increase this agreement. For instance, if we consider, say, $33\,{\rm s} < t < 233\,{\rm s}$\,, which does not even correspond to the beginning or the end of any lap, the average power and cadence are $455.02~{\rm W}$ and $108.78~{\rm rpm}$\,, respectively. Hence, the retrodicted and measured speeds are $16.45~\rm{m/s}$ and $16.32~\rm{m/s}$\,, respectively. \begin{figure}[h] \centering \includegraphics[scale=0.35]{FigKiloPower} \caption{\small Measured power,~$P$\,, as a function of the `kilo' time,~$t$} \label{fig:FigKiloPower} \end{figure} \begin{figure}[h] \centering \includegraphics[scale=0.35]{FigKiloCadence} \caption{\small Measured cadence,~$k$\,, in revolutions per minute, [rpm], as a function of the `kilo' time,~$t$} \label{fig:FigKiloCadence} \end{figure} To illustrate limitations of the model, Figures~\ref{fig:FigKiloPower} and \ref{fig:FigKiloCadence} represent measurements for which it is not empirically adequate. As shown in these figures, in this $1000$\,-metre time trial, commonly referred to as a `kilo', the cyclist reaches a steady cadence\,---\,and speed\,---\,with an initial output of power, in a manner similar to the one shown in Figure~\ref{fig:FigIPPower}. Subsequently, in a manner similar to the one shown in Figure~\ref{fig:FigIPCadence}, the cadence remains almost unchanged, for the remainder of the time trial. However, in contrast to Figure~\ref{fig:FigIPPower}, the power decreases. Herein, as discussed by \citet[Appendix~B.2]{DSSbici2}, the cadence, as a function of time, is a consequence of both the power generated by a cyclist\,---\,at each instant\,---\,and the momentum of the moving bicycle-cyclist system, gained during the initial acceleration, which propels the pedals. There is no dynamic equilibrium between the instantaneous power generated by a cyclist and the resulting cadence, in contrast to an equilibrium reached during a steady effort of a $4000$\,-metre individual pursuit. We consider here a dynamic equilibrium {\it sensu lato}\,; the values of power and cadence, in Figures~\ref{fig:FigIPPower} and \ref{fig:FigIPCadence}, oscillate about means that are nearly constant. In view of this equilibrium, Figures~\ref{fig:FigIPPower} and \ref{fig:FigIPCadence} would remain similar for a free-wheel drivetrain, provided a cyclists keeps on pedalling in a continuous and steady manner. Figures~\ref{fig:FigKiloPower} and \ref{fig:FigKiloCadence} would not. In particular, Figure~\ref{fig:FigKiloCadence} would show a decrease of cadence with time, even though the bicycle speed might not decrease significantly. \section{Discussion and conclusions} \label{sec:DisCon} The mathematical model presented in this article offers the basis for a quantitative study of individual time trials on a velodrome. The model can be used to predict or retrodict the laptimes, from the measurements of power, or to estimate the power from the recorded times. Comparisons of such predictions or retrodictions with the measurements of time, speed, cadence and power along the track offer an insight into the empirical adequacy of a model. Given a satisfactory adequacy and appropriate measurements, the model lends itself to estimating the rolling-resistance, lateral-friction, air-resistance and drivetrain-resistance coefficients. One can examine the effects of power on speed and {\it vice versa}, as well as of other parameters, say, the effects of air resistance on speed. One can also estimate the power needed for a given rider to achieve a particular result. In Sections~\ref{sec:InstPower}, \ref{sec:NumEx} and \ref{sec:Adequacy}, we neglect the vertical motion of the centre of mass and assume its trajectory to be contained in a horizontal plane. In Appendix~\ref{sec:Energy}, we calculate the work done in raising and accelerating the centre of mass. We can conclude that\,---\,even though most of the work of the cyclist is done to overcome dissipative forces\,---\,a nonnegligible portion goes into increasing mechanical energy. This conclusion, however, does not mean that we cannot invoke expressions~(\ref{eq:LeanAngle}) and (\ref{eq:vV}), wherein we assume that the centre-of-mass trajectory is contained in a horizontal plane. Approximations resulting from using these expressions appear to have a lesser effect on results of the model than neglecting increases of mechanical energy, which are not taken explicitly into account within model~(\ref{eq:power}). Presented results allow us to comment on aspects of the velodrome design. As illustrated in Figures~\ref{fig:FigLeanAngle}--\ref{fig:FigPower}, \ref{fig:FigCoMSpeed2}, \ref{fig:FigBLspeed}, \ref{fig:FigModelShift}, the transitions\,---\,between the straights and the circular arcs\,---\,do not result in smooth functions for the lean angles, speeds and powers. It might suggest that a commonly used Euler spiral, illustrated in Figure~\ref{fig:FigCurvature}, is not the optimal transition curve. Perhaps, the choice of a transition curve should consider such phenomena as the jolt, which is the temporal rate of change of acceleration. It might also suggest the necessity for the lengthening of the transition curve. Furthermore, an optimal velodrome design would strive to minimize the separation between the zero line and the curve in Figure~\ref{fig:FigAngleDiff}, which is tantamount to optimizing the track inclination to accommodate the lean angle of a rider. The smaller the separation, the smaller the second summand in term~(\ref{eq:modelB}). As the separation tends to zero, so does the summand. These considerations are to be examined in future work. Also, the inclusion, within the model, of a change of kinetic and potential energy for instantaneous power, discussed in Appendix~\ref{sec:Energy}, is an issue to be addressed. Another consideration to be examined is the discrepancy between the model and measurements with respect to the shift between power and cadence, illustrated in Figures~\ref{fig:FigIPShift} and \ref{fig:FigModelShift}. A venue for such a study is introduced in Appendix~\ref{sec:Fixed}. In conclusion, let us emphasize that our model is phenomenological. It is consistent with\,---\,but not derived from\,---\,fundamental concepts. Its purpose is to provide quantitative relations between the model parameters and observables. Its justification is the agreement between measurements and predictions or retrodictions, as illustrated in Section~\ref{sec:Adequacy} by the relation between power and speed. \section*{Acknowledgements} We wish to acknowledge Mehdi Kordi, for information on the track geometry, used in Section~\ref{sec:Formulation}, and for measurements, used in Section~\ref{sec:Adequacy}; Tomasz Danek, for statistical insights into these measurements; Elena Patarini, for her graphic support; Roberto Lauciello, for his artistic contribution; Favero Electronics for inspiring this study by their technological advances of power meters. \section*{Conflict of Interest} The authors declare that they have no conflict of interest. \bibliographystyle{apa}
{ "attr-fineweb-edu": 2.324219, "attr-cc_en_topic": 0, "domain": "arxiv" }
BkiUfNE25V5jQSoCc30k
\section{Introduction} An optimal trajectory is one of the key factors to achieve higher performance in different skiing disciplines. For example, in alpine skiing the turns performed by an athlete form a trajectory of points on the slope surface that if optimized with respect to the position of turn gates can lead to saved time during the descent \cite{trajski,cai2020trajectory}. In ski-jumping, a trajectory is formed by the points traversed during the flight phase and different flight paths influence the distance of the jumps \cite{hubbard1989multisegment,muller2006physics}. The optimization of these curves can lead to an increased chance of victory. It is hence important to analyze the trajectory executed by the athlete in order to determine specific points of the performance that can be correlated to the final scores (e.g. overall time taken or distance jumped). Potential applications of such kind of trajectory analysis tool could be intelligent review systems that would enhance the training activities, but also richer broadcasting contents that would increase the engagement of spectators. To reconstruct the trajectory of athletes in winter sport disciplines, the current standard practice \cite{gilgien2013determination,kruger2010application} is to put sensor devices (e.g. GNSS trackers, IMUs) on the body or skis and use a precise surface model to map the position and data coming from such sensors on the surface model at the different time steps. The drawback of these approaches is that they require the careful and time-consuming installation of the sensor devices on the athlete and the acquisition of a precise ground model. Furthermore, such an approach could be not always achievable during competitions. Computer vision techniques applied on videos capturing the athlete's performance is a valid option to obtain trajectories without the need of sensor networks nor ground surface models. The benefit of a video-based approach is even more evident considering the usual practice of video reviewing during training or the amount of video material produced by the broadcasting of competitions. Vision-based techniques have been successfully used in other sport disciplines to reconstruct the trajectory of various kinds of ball \cite{Chen2011,kotera2019intra} and player movements \cite{Calandre2021,chen2018player}. However, to the best of our knowledge, no study is currently present to reconstruct the trajectory of skiing athletes in videos. In this short paper, we present a prototype to achieve the reconstruction of the trajectory of skiers in videos acquired from uncalibrated and unconstrained cameras. Our algorithm works online, i.e. takes in input the latest frame of a streaming video and outputs the trajectory executed by the athlete in the previous time steps with the correct perspective with respect to the scene appearing in that frame. The solution first runs a visual tracker \cite{Stark} to follow the target skier across all the previous frames up to the latest. Then, a key-point detection and matching algorithm \cite{SuperPoint,SuperGlue} is employed to estimate the motion of static key-points across the consecutive frames. The matched key-points are given to a RANSAC-based algorithm \cite{RANSAC,degensac} to estimate the homography representing the perspective transformation between those. Such a transformation is used to map all the points traversed by the athlete to the correct perspective, achieving the reconstruction of the trajectory with respect to the camera movements and ultimately giving a 3D effect. The performed qualitative tests on broadcast and handheld camera videos of alpine skiing and ski-jumping show the potential of the proposed solution. Further research is needed to make this idea applicable in practice. We point out possible future research directions. \section{Methodology} \label{sec:method} \subsection{Preliminaries} The videos given as input to our solution are considered to capture the performance of an individual athlete while he/she is constantly visible in the scene. We do not put any constraints on the configuration (intrinsic and extrinsic parameters) of the camera that captured the videos. More formally, we consider a video $\video = \big\{ \frame_t \in \images \big\}_{t=0}^{T}$ as a sequence of frames $\frame_t$, where $\images = \{0,\cdots,255\}^{w \times h \times 3}$ is the space of RGB images and $T \in \mathbb{N}$ denotes the number of frames. We use $p_t = (x_t, y_t)$ to denote the coordinates of the point that summarizes the position of the athlete in the image coordinate system (e.g. the point of contact between the athlete and the ground surface). The goal of our system is to produce a trajectory $\traj_t = \{ \point_i \}_{i=0}^{t-1}$ which is the sequence of points traversed by the athlete % in the 3D environment mapped in the 2D space of $\frame_t$. \begin{figure}[t] \centering \includegraphics[width=.65\columnwidth]{images/pipeline.pdf} \caption{Schematic visualization of the main steps performed by our solution to obtain the trajectory $\tau_t$ at each time step $t$ using the consecutive frames $\frame_{t-1}, \frame_t$.} \label{fig:pipeline} \end{figure} \subsection{Pipeline} Figure \ref{fig:pipeline} presents a schematic representation of the pipeline constituting the proposed trajectory reconstruction algorithm. The solution works in an online fashion. This means that at every $\frame_t$ the only available information to produce the trajectory $\tau_t$ is contained in $\frame_t$ and in all the preceding frames. This setting makes the solution suitable for real-time applications since it does not require waiting for the athlete's execution to be terminated for the trajectory to be produced. Moreover, our algorithm is general and it can be applied to different disciplines without specific tuning. The proposed method processes each $\frame_t$ sequentially. $\frame_t$ is first given to a visual object tracking algorithm designed to model the motion of the athlete and to provide its position $\point_t$ in the latest frame. Then, the solution estimates the homography transformation $\homo \in \mathbb{R}^{3\times3}$ existing between $\frame_t$ and $\frame_{t-1}$. This is achieved by finding and matching particular image key-points present in $\frame_t$ and $\frame_{t-1}$ and using a RANSAC-like algorithm on top of the matchings to find $\homo$. Considering that in individual winter sport the environment of the course is generally composed of static objects (e.g. banners, line markers, etc.), it can be advantageous to compute their displacement in consecutive frames to quantify the camera motion. $\homo$ is used to map the points of the trajectory $\tau_{t-1}$ for the previous frame into the coordinate system of the current frame, thus obtaining the trajectory $\tau_{t}$. After that, $\point_t$ is appended to $\tau_t$ to obtain all the points traversed by the athlete with respect to $\frame_t$. We now describe the different components of the algorithm in more detail. \begin{figure*}[t] \centering \includegraphics[width=\linewidth]{images/trajs.pdf} \caption{Examples of the trajectory produced by our solution. The number in the top-left corner of each image reports the frame index $t$ in the video. In the first frame, the bounding-box $\bbox_0$ is reported by the red rectangle. In the other frames, the red track is the trajectory $\tau_t$ reconstructed by our pipeline for that frame. The first two rows of frames show examples of the algorithm applied on broadcasting videos of giant slalom and downhill skiing. The last two rows show applications to a broadcasting video and a smartphone video of ski jumping.} \label{fig:trajs} \end{figure*} \paragraph{Athlete Tracking.} The first step of the pipeline is to exploit a visual object tracker \cite{Dunnhofer2019,Dunnhofer2020accv} to track the motion of the athlete across all the frames up to $\frame_t$. We used a tracker outputting a bounding-box $\bbox_t = (x^{(\bbox)}_t,y^{(\bbox)}_t,w^{(\bbox)}_t,h^{(\bbox)}_t) \in \mathbb{R}^4$ at every $\frame_t$. The $x^{(\bbox)}_t,y^{(\bbox)}_t$ represent the coordinates of the top-left corner of the box while $w^{(\bbox)}_t,h^{(\bbox)}_t$ are employed to get an estimate of the width and height of the athlete's appearance. We consider the position of the athlete $\point_t = (x_t, y_t)$ as $x_t = x^{(\bbox)}_t + \frac{w^{(\bbox)}_t}{2}$ and $y_t = y^{(\bbox)}_t + h^{(\bbox)}_t$. Given an accurate bounding-box, $\point_t$ represents the closest point to the contact between the athlete's feet and the ground. For disciplines in which the athlete lies constantly on the ground, such a setting allows to estimate the point traversed by the athlete. The tracker is initialized in the first frame $\frame_0$ of the video with the bounding-box $\bbox_0$ that outlines the appearance of the target. Such a piece of information can be obtained by asking a human operator to provide the bounding-box for the athlete of interest via some user-friendly annotation system or by a specific athlete detection algorithm. % We used the state-of-the-art deep learning-based method STARK \cite{Stark,VOT2021} (with pre-trained parameters) as visual tracker because of its ability in providing bounding-boxes that fit accurately the appearance of a large variety of target objects. \paragraph{Frame Matching.} The tracker allows to model the motion of the athlete in each $\frame_t$. We want to render such motion in relation to the perspective of the scene and the athlete's execution, ultimately giving a 3D effect. To achieve this we use the homography matrix $\homo$. The first step to compute the $\homo$ between $\frame_{t-1}$ and $\frame_t$ is to run an image key-point detector to obtain significant points of interest in the field of views of both frames. For this task, we employed the deep learning-based methodology SuperPoint \cite{SuperPoint} because of its state-of-the-art performance. Particularly, we used the pre-trained instance of the algorithm optimized for outdoor scenarios provided by the authors \cite{SuperPoint}. From the sets of key-points, we excluded those located within the bounding-box $\bbox_t$ because they belong to a non-static object. In the case of broadcasting videos, we also discarded all the key-points lying on the superimposed banners showing the characteristics of the athlete's performance (e.g. running time). Once the key-points have been determined, the matching algorithm is executed to find those key-points that correspond to the same visual features in $\frame_t$ and $\frame_{t-1}$. This allows to obtain an alignment between the same points on the images that expresses how the static objects have moved between the frames. To perform the matching we exploited the SuperGlue algorithm \cite{SuperGlue} which is a graph-based deep-learning method that focuses on the global organization of key-points in order to find matches between them. As for SuperPoint, we used the pre-trained instance of the algorithm optimized for outdoor scenarios as provided by the authors \cite{SuperGlue}. \paragraph{Homography Estimation.} With the alignments of matched key-points, we are able to obtain the homography matrix $\homo$. This is achieved through an instance of the DEGENSAC algorithm \cite{degensac} which applies an iterative optimization procedure on the matchings in order to find the best homography matrix that explains them. We found DEGENSAC to work better than a standard RANSAC instance. \paragraph{Homography Application and Trajectory Reconstruction.} After that the homography is determined, it is used to map the points of the previous trajectory $\traj_{t-1}$ in the new frame. In more detail, at each frame $\frame_t$, $\traj_{t-1}$ consists of all the points given by the visual object tracker in the preceding $t-1$ frames and localized according to the perspective of the previous frame $\frame_{t-1}$. The trajectory $\traj_t$ for $\frame_t$ is obtained by the multiplication of each $\point_i \in \traj_{t-1}$ by the homography matrix, i.e. $\traj_t = \{ \point_i \}_{i=0}^{t-1}, \point_i = \point_i \cdot \homo$. Then, $\traj_t$ is also appended $\point_t$ given by the tracker for $\frame_t$ which represent the latest position of the athlete. At the first frame in which the frame matching step is executed $\frame_1$, $\traj_{0} = \{ \point_0 \}$ is composed only of the point extracted by the bounding-box which highlights the target athlete. After its reconstruction, spline interpolation is also applied to $\traj_t$ to make the trajectory smoother. \section{Experiments and Discussion} We performed qualitative experiments on our prototype. This is due to the non-availability of public datasets suitable for the evaluation of trajectory reconstruction in winter sports applications. Future work will be dedicated to build an accurate set of videos for quantitative validation. We tested our solution for the reconstruction of the trajectory executed by alpine skiers while skiing by ski jumpers while flying. We acquired videos of such two disciplines on YouTube. In particular, for alpine skiing, we tested our solution on broadcast videos of the giant slalom taken place at the FIS Alpine Ski World Championship in Cortina 2021 and of the FIS Alpine World Cup downhill race in Kitzb\"uhel 2021. For ski jumping, we ran our solution on broadcast videos of the FIS Ski Flying World Cup competition in Planica 2019 and on videos acquired by smartphones during the FIS Ski Jumping Continental Cup in Iron Mountain 2020. No specific adaption for the two settings was performed. Figure \ref{fig:trajs} shows examples of the performance achieved by our solution. The trajectories produced are consistent with the past motion of the athlete, and the reconstruction capability seems to be robust to the different camera movements, to the blurred background, and to the changes in illumination conditions. Overall we think our solution to be promising. The pictures in Figure \ref{fig:apps} present some particular analytical applications based on our solution. The first row of images reports two frames in which the trajectory of the jumper (in red) is compared with the trajectory of another jumper (in green). The second row shows two different visualizations of the insertion of the trajectory (left image, green trajectory) and the visual appearance (right image, highlighted by the green dot) of the competition's leader. These kinds of solution have been achieved by synchronizing the videos of the two athletes and computing a homography between the time-paired frames by the frame matching procedure described in Section \ref{sec:method}. The third row displays two images in which the trajectory is augmented with insights about the athlete's performance. In this case, the speed data obtained by IMU sensors worn by the skiers and synchronized with the video frames. Further work is needed to make this solution effective. First, the error committed in the reconstruction of the trajectories should be quantified using labeled data. For example, the displacement in centimeters with respect to the true trajectory performed by the athlete could be a valuable measure of the precision of the proposed solution. We hypothesize that the performance of the system could be improved by better integrating the different modules of the pipeline, and potentially through an end-to-end optimization stage of the learning modules and backbone networks involved. The system could be also enhanced by exploiting human pose trackers instead of bounding-box ones. Indeed, a human skeleton-based tracker should provide a better and more precise localization of the body of the target skier. Such a representation could be exploited to compute a more consistent point of contact between the athlete and the snow surface. Furthermore, the motion modeling of the different human body parts could enable the development of solutions able to simultaneously reconstruct the trajectory of disparate parts of the athlete (e.g. hands or feet). A similar idea could be also exploited to compute the pose trajectory of the single skis if a pose estimator/tracker for this kind of object \cite{SkiPose} is used. \begin{figure}[t] \centering \includegraphics[width=\columnwidth]{images/snow.pdf} \caption{Examples of the influence of the snow texture on the reconstruction of the trajectory. The two rows of images show the same frame of two different videos. The frames of the first column report the trajectory obtained with the homographies computed using all the original key-points detected by SuperPoint (and then matched by SuperGlue). The second column shows the frames in which the SuperPoint-detected key-points lying in image locations where the snow is present are filtered out before matching. Such an operation enables the estimate of a more consistent homography, ultimately resulting in the better reconstruction of the trajectory.} \label{fig:snow} \end{figure} Finally, we think that the better exploitation of the specific cues appearing on the slope and in training/competition scenarios could lead to an enhanced trajectory reconstruction performance. Indeed, as showed in Figure \ref{fig:snow}, in some of our experiments, we found that the snow texture provided no useful information for key-point detection. This issue influenced the homography estimation and ultimately led to wrong trajectory reconstructions. Filtering those key-points lying on image positions with whitish appearance -- hence matching only key-points belonging to other visual features (e.g. line markers, banners, etc.) -- allowed a better estimate of the homography and consequently an improved trajectory reconstruction.
{ "attr-fineweb-edu": 2.179688, "attr-cc_en_topic": 0, "domain": "arxiv" }
BkiUa6DxK6Ot9UjD_N2l
\section{Introduction} The expansion of PLMs aims to augment the existing PLMs to better understand the downstream text. These methods design different learning tasks \cite{DBLP:journals/corr/abs-1904-09223}, different semantic granularities \cite{DBLP:conf/acl/ZhangHLJSL19}, different model architectures \cite{DBLP:conf/acl/DaiYYCLS19}, and different learning algorithms \cite{DBLP:conf/iclr/ClarkLLM20} but they usually use single-format text to pre-train the model and lack the learning of document structure and relevant knowledge. There is a large amount of unused semi-structured and well-structured text. Together with unstructured free text, we refer to them as multi-format text. These data are essential for the hierarchical understanding of words, entities and paragraphs. This leads us to ask whether we can utilize multi-format text in pre-training. \begin{figure}[!h] \centering \includegraphics[width=3in]{pic/example} \caption{An example to demonstrate the usage of unstructured, semi-structured and well-structured knowledge} \label{example.fig} \end{figure} First, let us take Figure \ref{example.fig} as an example to illustrate the multi-format text resources. For the question, ``\textit{My mother and I are on a group tour ...}'' Although the scenery described in the candidate answer does not mention the name of the tourist attraction, with the help of Baidu encyclopedia, we can still understand that the answer is describing the scenery of Xiangshan Park and claiming that it is not the best time to travel. The key words ``\textit{leaves are not red}'', ``\textit{at the end of the month}'', ``\textit{too many people}'' have corresponding semi-structured subsection titles ((sub)headings), paragraphs and well-structured knowledge triples (\textit{Xiangshan Park, suitable season for play, autumn}) in the encyclopedia webpage. The paragraphs also correspond to relevant knowledge triples. Most prior BERT \cite{DBLP:conf/naacl/DevlinCLT19} expansion works are designed to using plain text and entity type or changing the model architecture, rarely considering semi-structured text and well-structured text in Figure \ref{example.fig}. \citet{DBLP:conf/emnlp/BeltagyLC19} use scientific publications to further pre-train the SCIBERT to improve performance on downstream scientific NLP tasks. \citet{DBLP:conf/iclr/XiongDWS20} propose type-constrained entity replacement pre-training task for knowledge learning. THU-ERNIE \cite{DBLP:conf/acl/ZhangHLJSL19} proposes to incorporate the contextual representations with separate knowledge graph (KG) embeddings, but they do not consider the correlation between knowledge triples and the text. Semi-structured text is important for language understanding, which has been proven effective in question answering (QA) systems \cite{min2020neurips,kwiatkowski2019natural}, but few works explicitly model semi-structured text into the pre-training process. This highlights the need to model the document structure, relevant knowledge triples and plain text in the same representation space. Modeling multi-format text is a nontrivial task. The main difficulty lies in finding the correspondence between heterogeneous knowledge resources. Our goal is to find an effective way to model unstructured paragraphs, semi-structured headings and well-structured knowledge triples and let them interact with each other. Using the relationship between the headings and paragraphs, the model can understand the topic of the paragraph. Inspired by the use of knowledge triples \cite{DBLP:journals/ibmrd/Chu-CarrollFBCSW12}, these triples help add explicit restrictions or complement information to the text, evaluate the information expressed and improve the interpretability. We propose a heterogeneous knowledge language model (HKLM) that simultaneously models unstructured, semi-structured and well-structured text into the same contextual representation space. To obtain the aforementioned multi-format text, we construct a corpus in the tourism domain and pre-train our \textbf{TravelBERT}. Specifically, our multi-format text comes from the Baidu Encyclopedia webpages of Chinese Tourist Attractions, since entity-oriented resources have rich aligned heterogeneous knowledge. The HKLM is suitable for training using encyclopedia articles. At the same time, we also use a plain text corpus of travel guides to pre-train another version, like SCIBERT. Unstructured text has a larger amount of data (4 times in the experiment) than encyclopedia articles. We combine three objective functions to jointly pre-train the multi-format text. For unstructured text, we adopt the masked language model (MLM) objective to train the domain adaption model. For semi-structured text, we propose title matching training (TMT) to classify whether the heading matches the paragraph. For well-structured text, we propose a triple classification (TC) task to classify whether the knowledge triple is modified. To align the knowledge triples with the plain text, we use a heuristic search method to calculate the similarity between the text and the triples. We evaluate the model using 5 downstream tourism NLP tasks, including named entity recognition (NER), open information extraction (IE), question answering (QA), fine-grained entity typing (ET), and dialogue. Our method achieves significant improvements on multiple datasets. The main contributions of this paper are as follows: (i) We propose to model heterogeneous knowledge into a unified representation space with different objectives, thereby allowing them to interact with each other. (ii) We construct 4 datasets for evaluating downstream tourism NLP tasks. (iii) We pre-train the TravelBERT with two schemes, using plain text and the proposed HKLM. Experiments show that using 1/4 of the plain text, the HKLM outperforms the pre-training of plain text. \section{Methods} \begin{figure*}[!h] \centering \includegraphics[width=\linewidth]{pic/fm.pdf} \caption{Schematic diagram of heterogeneous knowledge language model} \label{kast.fig} \end{figure*} This section explains the mechanism of the proposed HKLM. Suppose we have a document set $D$, a heading set $T$ and a knowledge base $KG$. For \{$s^{(e)}$$|$$s^{(e)} \in D^{(e)}$\}, \{$t^{(e)}$$|$$t^{(e)} \in T^{(e)}$\} and \{$kg^{(e)}$$|$$kg^{(e)} \in KG^{(e)}$\}, the superscript $^{(e)}$ denotes the entity-oriented resource. As shown in Figure \ref{kast.fig}, for the entity $e$ ``\textit{The Palace Museum}'', given a piece of text description $s^{(e)}$:$\{w_1,w_2,...,w_l\}$ ``\textit{The Palace Museum in Beijing ....}'', corresponding heading $t^{(e)}$ ``\textit{Abstract}'' and relevant knowledge triples $kg^{(e)}$:$\{(e,p_1,o_1),...,(e,p_k,o_k)\}$ ``\textit{(The Palace Museum, location, Beijing),...}'', our goal is to learn a PLM to incorporate the knowledge of unstructured text, semi-structured text and well-structured text. Our main effort lies in designing an unsupervised pre-training method to augment the contextual representation by leveraging the unstructured paragraphs, semi-structured headings and well-structured knowledge triples. The main improvements consist of two aspects, injecting entity knowledge and topic knowledge. In the following, We first describe the process of pre-training TravelBERT using plain text. Then, we describe the proposed HKLM and its training method. \subsection{Pre-training TravelBERT with Unstructured Text} \citet{DBLP:conf/acl/RuderH18} show that further pre-training a language model on a target domain corpus improves the eventual classification performance. We use the unstructured text in the tourism domain to further pre-train BERT. Each text sequence is concatenated with special symbols, classification [CLS] and separator [SEP], denoted as $\langle\mbox{[CLS]};s;\mbox{[SEP]}\rangle$. \begin{align} {\bf h} = &\mathcal{F}_{bert}(\langle\mbox{[CLS]};s;\mbox{[SEP]}\rangle) \end{align} where ${\bf h} \in \mathbb{R}^{d\times l}$ is the representation of each token. $s$ is the unstructured text. $d$ and $l$ are dimension and sequence length. $\mathcal{F}_{bert}(\cdot)$ denotes the network defined in \cite{DBLP:conf/naacl/DevlinCLT19}. Then we use the MLM objective to train the model. Given that BERT is a representative PLM, all studies in this paper use BERT as the backbone. \subsection{Heterogeneous Knowledge Language Model} We care about whether the multi-format text data source can be widely shared by different models. The encyclopedia article generated from the collective intelligence is a kind of multi-format text data on hand. Many studies focus on the free text when using encyclopedia articles, ignoring the rich document elements. We aim to use the document structure and infobox triples. Besides, the semi-structured text and internal links in encyclopedia webpages can also be used for data annotation in downstream NLP tasks. Next, we will introduce the way we model multi-format text. The core of pre-training multi-format text is to align them so that different textual modalities can interact. The challenge is to maintain the alignment of multi-format text when the document is divided into many fragments. As shown in Figure \ref{kast.fig} (a), the original data consists of free text, corresponding headings and relevant knowledge triples. We use the Chinese data to train our model. For convenience, we display the data in Chinese and English. As shown in the lower part of Figure \ref{kast.fig} (b), the input is composed of the \textbf{Text} ``\textit{The Palace Museum in Beijing ....}'', \textbf{Title} \textit{Abstract}, positive triple \textbf{PT} and negative triple \textbf{NT} generated by modifying the original triple, denoted as $\langle\mbox{[CLS]};s^{(e)};\mbox{[SEP0]};t^{(e)};\mbox{[SEP1]};(e, p_j,o_i);$ $\mbox{[SEP2]};...\rangle$. We add a new symbol [SEP0] to identify the heading. [SEPi] $(i>0)$ is used to represent each knowledge triple. Each element of the triple is treated as text rather than an identifier. The advantage is that different forms of knowledge can be represented in the same contextual representation space. The downside is the lack of linkage and disambiguation between knowledge triples. We use formula \eqref{allrep} to compute the representation of each element. \begin{align} \label{allrep} &{\bf h}^{(D)},{\bf h}^{(T)},{\bf h}^{(KG)} = \mathcal{F}_{bert}(\langle\mbox{[CLS]};s^{(e)}; \\ \nonumber &\mbox{[SEP0]};t^{(e)};\mbox{[SEP1]};(e, p_j,o_i);\mbox{[SEP2]}...\rangle) \end{align} This model needs to predict whether the \textbf{Title} matches the \textbf{Text} and whether the predicate of the triple is modified. Meanwhile, we retain the MLM loss to help the model learn from the text in the tourism domain. Next, we will describe the model input and training mechanism in detail. \subsubsection{Learning Entity Knowledge Through Well-structured Text} Well-structured text refers to the text organized according to a certain pattern. In this paper, well-structured text represents the knowledge triples of the infobox. Knowledge triples can directly and centrally describe the attributes of the entity. This allows the model to learn entity information that is unclear or difficult to capture in the context. Such information is important for enhancing entity-oriented downstream tasks. Well-structured text can provide knowledge guidance for understanding free text. By introducing rich attributes and relations of entities, the representation of entities in the text is naturally enriched. Baidu-ERNIE \cite{DBLP:journals/corr/abs-1904-09223} masks the entities or phrases in the sentence so that the model can better consider the entity's context. Our model randomly masks tokens with a probability of 15\%. To better learn the attributes of entities, in addition to masking the free text, our model also randomly masks the knowledge triples. Usually, an entity may have several to dozens of attributes, such as \textit{address, location, famous scenery, climate type, attraction level, type of attraction, building time, complete time}, etc. To reduce the computational pressure and retain sufficient free text, we adopt a knowledge triple retrieval strategy to use the relevant knowledge triples corresponding to the free text. We use TF-IDF similarity \cite{tata2007estimating} for control and only use the knowledge triples contained in text descriptions. Then we use TF-IDF vectors to calculate the similarity of each triple with the text, as shown below. \begin{align} similarity = cos(vec(s^{(e)}), vec((e,p,o))) \end{align} where $vec(\cdot)$ is the vectorization (mapping) function to convert text to TF-IDF vector. $s^{(e)}$ and $(e,p,o)$ are seen as different documents. Then we calculate the cosine similarity. Specifically, we calculate the TF-IDF of the corpus composed of all the Baidu encyclopedia webpages of Chinese tourist attractions and knowledge triples, as shown below. Each triple is also seen as a text sequence. \begin{align} \hbox{TF-IDF} = \frac{f_{t,d}}{\sum_{t^{'}\in d}f_{t^{'},d}} \times \log\frac{N}{n_t} \end{align} where $d$ is the document that contains term $t$. $f_{t,d}$ is the term frequency of $t$ in document $d$. $N$ is the number of documents of the corpus. $n_t$ is the number of documents that contains $t$. We observe 73\% of the text samples can be paired with at least one triple. Usually, training knowledge graph embeddings (such as TransE \cite{DBLP:conf/nips/BordesUGWY13}) requires optimizing objective function of $h+r=t$ to learn the internal relation, where $h$, $r$ and $t$ denote the head entity, relation and tail entity respectively. When we inject knowledge triples as text, we hope to establish a natural connection between free text and well-structured text. The challenge is that the knowledge triples and free text exist separately, and this association has not been annotated before. Ideally, one option is to train the model through the task of generating triples from free text. However, this training task will increase the complexity of the model. Finally, we simplify it to the process of constructing triple-paragraph pairs using TF-IDF. We use a program to add noise to the triples, and the model only needs to predict whether each triple is modified. The model can use the unstructured text to validate the knowledge triples. We call it a triple classification (TC) task which is designed to let the model classify whether each triple is modified. The method of adding noise is the attribute resampling operation, which is designed to randomly replace some triples' attributes $p$ with other triples' attributes $\hat{p}$, as shown in formula \eqref{replace}. Resampling attributes can improve the model's understanding of relational semantics. Conversely, resampling attribute values may cause the model to fail to classify confusing results (e.g. numeric attribute values), thereby increasing the false positive rate of model predictions. \begin{align} \label{replace} \hat{p} \sim Uniform(P\backslash\{p\}) \end{align} where $P$ denotes all attributes in the KG. \subsubsection{Learning Paragraph Knowledge Through Semi-structured Text} The MLM focuses on word prediction but lacks the concept of paragraphs and subsections. Pre-training for understanding the overall topic of the paragraph is a nontrivial task. In the unsupervised scenario, we use the headings of the document as the natural label of corresponding paragraphs to guide the model to understand the topic of the paragraph. However, the headings of paragraphs cannot be enumerated, so they are not feasible as classes. Adding a generative model will increase the complexity of our TravelBERT. To make this method more versatile, we use a program to automatically construct the heading-paragraph pairs, and the model only needs to predict whether the heading is modified based on the paragraph. Inspired by the use of the relationship between the title and the content of title-oriented documents in IBM Watson DeepQA \cite{DBLP:journals/ibmrd/Chu-CarrollFBCSW12}, we model the semi-structured text by proposing title matching training (TMT). TMT aims to classify whether the heading matches the paragraph. For multi-level headings, we use the heading closest to the paragraph to get a more specific topic. When pre-training the language model, we concatenate the text with the corresponding headings, denoted as $\langle\mbox{[CLS]};s^{(e)};\mbox{[SEP0]};t^{(e)}\rangle$. We use the representation of [SEP0] for classification. To generate negative samples, for the heading $t^{(e)}$ ``\textit{Abstract}'', our method will sample another heading $\hat{t}^{(e)}$ ``\textit{History}'' in the same article to replace the original heading, as shown below. \begin{align} \label{replace2} \hat{t}^{(e)} \sim Uniform(T^{(e)}\backslash\{t^{(e)}\}) \end{align} \subsection{Model Training} For the three tasks, we can optimize the combined objective function. \begin{align} \label{objective} \min\limits_{\Theta}\mathcal{L}&=\sum_{i=1}^{|\mathbb{D}|}(\mathcal{L}_{i}^{(mlm)}+\lambda \mathcal{L}_{i}^{(tc)} + \mu \mathcal{L}_{i}^{(tmt)}) \end{align} where $\mathcal{L}_{i}^{(mlm)}$, $\mathcal{L}_{i}^{(tc)}$ and $\mathcal{L}_{i}^{(tmt)}$ are the objectives of three tasks respectively. $|\mathbb{D}|$ is the size of the dataset. $\Theta$ is the model parameter. $\lambda$ and $\mu$ are hyper-parameters to weight the influence of each task. The training loss is to sum the deviation of the cloze task, the deviation of triple classification and the deviation of title matching. We adopt the negative log-likelihood as the objective, as shown below. \begin{align} \mathcal{L}_{}^{(mlm)}&=-\sum_{i} \log p^{(mlm)}(y_i^{(mlm)}|{\bf h}^{(D)};\Theta) \end{align} where $p^{(mlm)}(\cdot)$ represents the probabilities of true class of cloze test. $i$ denotes the indices of token. \begin{align} \mathcal{L}_{}^{(tc)}&=-\sum_{j} \log p^{(tc)}(y_j^{(tc)}|{\bf h}^{(KG)};\Theta)) \end{align} where $p^{(tc)}(\cdot)$ represents the probabilities of true class of triple classification tasks. $j$ denotes the indices of triple. \begin{align} \mathcal{L}_{}^{(tmt)}&=-\log p^{(tmt)}(y^{(tmt)}|{\bf h}^{(T)};\Theta) \end{align} where $p^{(tmt)}(\cdot)$ represents the probabilities of true class of title matching task. \subsection{Fine-tuning TravelBERT for Tourism NLP Tasks} The input form of each downstream tourism NLP task is shown in Figure 1 in the Appendix. When fine-tuning downstream tasks, our model does not need to change the input text because the model can learn heterogeneous knowledge in the pre-training stage and learn better parameters, like GPT-3 \cite{DBLP:conf/nips/BrownMRSKDNSSAA20} and WKLM \cite{DBLP:conf/iclr/XiongDWS20}. Then the model uses the learned knowledge (parameters) to better solve downstream tasks. For the NER task, we adopt the sequence labeling \cite{DBLP:conf/naacl/LampleBSKD16} scheme and use the vector of each token in the last layer to classify the entity labels. The fine-grained entity typing \cite{jin2019fine} task aims to assign fine-grained type labels to the entity mention in the text. We add two special symbols [ENT] to highlight the entity mention and use the [CLS] vector of the last layer to classify the labels. For the open IE task, we use a two-stage span extraction reading comprehension model \cite{DBLP:conf/acl/LiYSLYCZL19}. Specifically, we first train a relation prediction model to extract multiple predicate spans in the sentence. The way the model extracts each span is to predict the start and end positions. We use a threshold to select multiple spans, since each sentence may have more than one triple. We add two special symbols [REL] to highlight the predicate span. Then we train an entity prediction model to extract subject and object spans for each predicate. For the QA task, we use the [CLS] vector in the last layer to calculate and rank the matching score of each candidate answer to the question. For the dialogue task, we adopt a retrieval-based model. The training task is to predict whether a candidate is the correct next utterance given the context. For the test, we selected the candidate response with the largest probability. \section{Experiments} We perform experiments on the following tourism NLP tasks, including NER, Open IE, dialogue, QA and fine-grained entity typing to evaluate our proposed models. Then we conduct ablation studies and analyze the influence of KG quality on the model. \subsection{Data and Setup} Our pre-training corpus is composed of the plain text corpus and Baidu encyclopedia webpages of Chinese tourist attractions. We obtained the Chinese tourist attractions and fine-grained tourism types from ctrip.com, visitbeijing.com.cn, tripadvisor.cn, meituan.com, etc. and constructed a Chinese Tourism Knowledge Graph (CTKG). Then, we obtained 49,273 Baidu encyclopedia webpages based on the tourist attractions in the CTKG, including 0.27M knowledge triples describing tourist attractions. The plain text corpus (279M tokens) contains 174,326 travel guides collected from ctrip.com, mafengwo.cn, etc. and the plain text of 49,273 Baidu encyclopedia webpages of Chinese tourist attractions. We segment the document and limit the maximum length of the input text fragment to about 400 tokens so that the model can learn complex context. We convert the infobox into knowledge triples and then retrieve relevant triples for each text fragment in the same document. For downstream tourism NLP tasks, due to the lack of sufficient evaluation datasets in the tourism domain, we adopt a well-known dialogue dataset KdConv \cite{DBLP:conf/acl/ZhouZHHZ20} and construct 4 tourism NLP datasets. Table 1 in the Appendix lists the detailed information of the tourism NLP datasets for each task. Due to space limitations, we report hyper-parameters and evaluation metrics in the supplementary material. \subsection{Results on The Tourism NLP Tasks} \begin{table*}[] \resizebox{\linewidth}{!}{ \begin{tabular}{c|ccc|ccc|ccc|ccc|cccccc} \toprule & \multicolumn{3}{c|}{TravelNER} & \multicolumn{3}{c|}{TravelET} & \multicolumn{3}{c|}{TravelOIE} & \multicolumn{3}{c|}{TravelQA} & \multicolumn{6}{c}{KDConv} \\ Metrics & P & R & F1 & Acc & Mi-F1 & Ma-F1 & P & R & F1 & MAP & MRR@5 & MRR@1 & Hits-1 & Hits-3 & Dist-1 & Dist-2 & Dist-3 & Dist-4 \\ \midrule Baidu-ERNIE &29.9 &32.9 & 31.3 & {\bf 63.7} & 72.4 & 61.7 & 39.2 & {\bf 30.6} & 34.4 & 85.0 & 84.4 & 77.8 & {\bf 49.7} & {\bf 76.3} & {\bf 7.4} & {\bf 23.5} & {\bf 35.7} & {\bf 43.0} \\ K-BERT(TravelKG) & 50.5 &58.3 &54.1 & -- & -- & -- & -- & -- & -- & 82.5 & 81.6 & 75.0 & -- & -- & -- & -- & -- & -- \\ BERT\textsubscript{BASE} & 45.2 & 61.5 & 52.1 & 63.5 & 72.5 & 62.6 & 38.8 & 29.7& 33.6 & 82.4 & 81.5 & 74.5 & 45.3 & 71.9& {\bf 7.2}&22.6 & 34.0& 40.8 \\ \midrule \textbf{TravelBERT\textsubscript{C}} &48.2 & 60.4 & 53.6 & 63.6 & 72.3 & 62.2 & 39.5 & 30.5& 34.4 & 84.4 & 83.7 & 77.5 & 41.5 & 69.3& 6.9& 21.4& 32.0& 38.2 \\ \textbf{TravelBERT\textsubscript{K}} & \textbf{50.9} & {\bf 62.3} & {\bf 56.0} & 63.6 & {\bf 73.3}& {\bf 63.4} & {\bf 39.9}& 30.5 & {\bf 34.6} & \textbf{85.2} & \textbf{84.7} & \textbf{78.4} & 45.5 & 72.7& 7.2 & 22.7 & 34.3 & 41.3 \\ \bottomrule \end{tabular} } \caption{Results on the 5 downstream tourism NLP datasets} \label{5nlpres} \end{table*} Table \ref{5nlpres} lists the results on the 5 tourism NLP datasets. BERT\textsubscript{BASE} represents the common pre-trained Chinese BERT model. TravelBERT\textsubscript{C} and TravelBERT\textsubscript{K} represent the use of plain text and HKLM to further pre-train the language model, respectively. K-BERT(TravelKG) denotes that the K-BERT \cite{DBLP:conf/aaai/LiuZ0WJD020} model uses our Chinese tourism KG for training and prediction. Baidu-ERNIE \cite{DBLP:journals/corr/abs-1904-09223} represents that we use the Chinese version ERNIE 1.0\footnote{\url{https://huggingface.co/nghuyong/ernie-1.0}}. For the TravelNER dataset, TravelBERT\textsubscript{C} can increase the precision score by +1.5\%, while the recall slightly degrades. This shows that the use of unstructured text in a specific domain to further pre-train the language model is conducive to a more accurate understanding of entity concepts because the contextual description of related entities is richer. TravelBERT\textsubscript{K} achieves the best results, indicating that pre-training with entity-centric heterogeneous resources is helpful for this task. Both structured and unstructured data in a specific domain can help identify domain-specific entities. Although the amount of encyclopedia pre-training corpus is 1/4 of the plain text pre-training corpus, TravelBERT\textsubscript{K} achieves better results than TravelBERT\textsubscript{C}. This demonstrates that heterogeneous knowledge is effective in the pre-training process. For the TravelET dataset, we can see that the performance of TravelBERT\textsubscript{C} is almost identical to the baseline, which means simply using unstructured text to further pre-train the language model may not bring significant improvements to this task. TravelBERT\textsubscript{K} achieves improvements by +0.8\% in micro-F1 and macro-F1, which indicates that the entity typing task requires entity-centric heterogeneous knowledge. For the TravelOIE dataset, our TravelBERT\textsubscript{C} and TravelBERT\textsubscript{K} outperforms the baseline by +0.8\% and +1\% micro-F1 respectively. This demonstrates that heterogeneous knowledge is beneficial to open information extraction tasks. For the TravelQA dataset, we observe that TravelBERT\textsubscript{C} improves the MAP score by +2\% and improves MRR@N by more than +2\%. This means that pre-training with domain-specific unstructured text is helpful for this task. TravelBERT\textsubscript{K} improves the MAP score by +2.8\%. Baidu-ERNIE also achieves an identical result, which means that the model is good at handling Chinese question answering tasks. KDConv dataset is a knowledge-driven conversation dataset in which each response is generated based on a specific triple. Fine-tuning the BERT with unstructured text degrades the results. This means for knowledge-driven conversation tasks, simple pre-training with unstructured text in a specific domain may hurt performance. This is because, for knowledge-driven dialogue, the model may not be able to efficiently utilize the unstructured context. Using the proposed HKLM can improve hits-N and Distinct-N scores. This means that TravelBERT\textsubscript{K} can inject the travel knowledge of interest into the conversation. Baidu-ERNIE achieves the best results, which means that the model is good at handling Chinese dialogue tasks. We observe that the proposed HKLM achieves significant improvements on the TravelNER and TravelQA datasets, and has minor improvements on other datasets. This is because knowledge triples can enhance the representation of entities in the TravelNER dataset. Learning paragraph semantics helps understand the TravelQA dataset. Compared with TravelBERT\textsubscript{C}, the HKLM also greatly improves the results of the KDConv dataset which requires knowledge triples to return the correct information response. Nevertheless, since the labels of the TravelET dataset have a logical hierarchy, there is a lack of understanding of the hierarchical structure of classes (taxonomy) in the pre-training process. In addition, the task is a multi-label classification problem, and it is relatively simple to use the threshold to select the final labels. For the TravelOIE dataset, the data annotation relies on the information extraction mechanism of dependency parsing \cite{DBLP:conf/emnlp/QiuZ14}, but we did not specifically add linguistic knowledge during the pre-training process. These issues need to be further explored in the future. \subsection{Ablation Study} We perform ablation studies on the TravelNER and TravelQA datasets respectively because these two datasets can reflect the entity-oriented task and paragraph-oriented task. Then, we analyze the influence of knowledge triples, headings and KG quality in the pre-training process. \begin{table}[!htbp] \small \centering \begin{tabular}{l|ccc} \toprule Settings & P & R & F1 \\ \midrule TravelBERT\textsubscript{K} & 50.9 & 62.3 & 56.0 \\ --headings &49.3 &63.9 &55.6 \\ --triples &50.8 &60.4 &55.2 \\ --triples, headings &46.5&64.2&53.9 \\ \bottomrule \end{tabular} \caption{Ablation results on the TravelNER dataset} \label{ablNER} \end{table} As shown in Table \ref{ablNER}, we observe that removing the headings slightly reduces the F1 by -0.4\%. When we remove the knowledge triples, the F1 drops by -0.8\% because knowledge triples help learn the entity knowledge. This means that heterogeneous knowledge is beneficial for the TravelNER task. \begin{table}[!htbp] \centering \small \resizebox{\linewidth}{!}{ \begin{tabular}{l|cccc} \toprule Settings & MAP & MRR@10 & MRR@5 & MRR@1 \\ \midrule TravelBERT\textsubscript{K} & 85.2 &85.0 &84.7 & 78.4 \\ --headings &84.0 &83.8 &83.3 &76.6 \\ --triples &83.8 &83.5 &83.3 &76.3 \\ --triples, headings & 83.0 & 82.8 & 82.4 & 75.1 \\ \bottomrule \end{tabular} } \caption{Ablation results on the TravelQA dataset} \label{ablQA} \end{table} Removing the headings degrades the MAP score by -1.2\%, as shown in Table \ref{ablQA}. This means that headings benefit the TravelQA dataset. After removing the knowledge triples, the performance drops by -1.4\%. This means that headings and knowledge triples help understand the paragraph. When we remove both triples and headings, the model becomes TravelBERT\textsubscript{C} that uses part of the free text for pre-training. \begin{table}[!htbp] \small \centering \begin{tabular}{l|ccc} \toprule Settings & P & R & F1 \\ \midrule TravelBERT\textsubscript{K} & 50.9 & 62.3 & 56.0 \\ --50\% triples &47.9 &62.3 &54.2 \\ +noise &45.8 &62.0 &52.7 \\ \bottomrule \end{tabular} \caption{Results on the TravelNER dataset after pre-training with KGs of different quality} \label{noisener} \end{table} In addition to directly removing KG, we further explore the impact of the KG quality on the pre-training process. We adopt two methods to reduce the KG quality. First, we randomly remove some knowledge triples (--50\%) to simulate an incomplete knowledge graph. We observe that the result drops by -1.8\% in F1 score, as shown in Table \ref{noisener}. Second, we modify the knowledge triples (+noise) by randomly masking the attribute values to simulate a noisy KG. We observe that the F1 score drops by -3.3\%. This means that the deterioration of the KG quality will hurt the pre-training process. \section{Related Work} \subsection{Domain-specific Pre-training} Fine-tuning large PLMs \cite{DBLP:conf/naacl/DevlinCLT19,DBLP:conf/nips/BrownMRSKDNSSAA20,radford2019language,radford2018improving} have achieved state-of-the-art results in downstream NLP tasks. Continued pre-training PLMs on a large corpus of unlabeled domain-specific text is helpful to the domain of a target task. \citet{DBLP:conf/acl/GururanganMSLBD20} investigate the impact of domain-adaptive pre-training and tasks-adaptive pre-training on language models. Further research on the domain adaptability of PLMs has become a promising topic. For the scientific domain, there are SCIBERT \cite{DBLP:conf/emnlp/BeltagyLC19} and PatentBERT \cite{lee2019patentbert}. For the biomedical domain, there are BioBERT \cite{DBLP:journals/bioinformatics/LeeYKKKSK20}, extBERT \cite{tai2020exbert}, PubMedBERT \cite{DBLP:journals/corr/abs-2007-15779}, ClinicalBERT \cite{DBLP:journals/corr/abs-1904-05342}, MT-ClinicalBERT \cite{DBLP:journals/corr/abs-2004-10220}. For the financial domain, there are FinBERT \cite{DBLP:journals/corr/abs-1908-10063,DBLP:conf/ijcai/0001HH0Z20}. \citet{DBLP:journals/corr/abs-2004-02288} propose to mitigate catastrophic forgetting during domain-specific pre-training. However, most studies only use the plain text corpus without considering the document structure and structured knowledge, which leads to the loss of important information in learning. Our proposed method further incorporate the semi-structured and well-structured text. \subsection{Knowledge-aware Pre-training} Knowledge-aware pre-training is designed to expanding text input and introducing external knowledge resources into PLMs, rather than only considering the input text. \citet{DBLP:conf/iclr/XiongDWS20} propose WKLM to use the type-constrained entity replacement for knowledge learning, but they do not consider other attributes of entities. \citet{DBLP:conf/aaai/LiuZ0WJD020} propose K-BERT which computes the attention score between tokens and KG triples. This method changes the sentence structure by inserting triples in the fine-tuning stage. \citet{DBLP:journals/corr/abs-1911-06136} propose KEPLER that encodes entity description as entity embedding to jointly train the knowledge embeddings and masked language model. Baidu-ERNIE \cite{DBLP:journals/corr/abs-1904-09223} propose the whole word masking to mask whole word, entity and phrase. This method only considers the entity mention and sentence span. THU-ERNIE \cite{DBLP:conf/acl/ZhangHLJSL19} proposes the information fusion layer for the mutual integration of words and entities. LUKE \cite{DBLP:conf/emnlp/YamadaASTM20} proposes entity-aware self-attention and masks tokens and entities in pre-training. TaBERT \cite{DBLP:conf/acl/YinNYR20} proposes to learn the joint representations of textual and tabular data. We consider the alignment of multi-format text and simultaneously model them in the same representation space. \subsection{NLP Tasks Utilizing Document Structure} Document structure \cite{power2003document} describes the organization of a document into graphical constituents like sections, paragraphs, sentences, bulleted lists, and figures. Document structure mainly represents two levels: logical structure (such as outline, sections, etc.) and visual structure (such as element layout, font size, color, etc.). Document structure provides more information than sentences. Currently, it has become a popular trend to move research from sentence level to document level in many fields, such as DocRED \cite{DBLP:conf/acl/YaoYLHLLLHZS19}, QA \cite{DBLP:conf/emnlp/WangSGDJ20}, etc. Top-performing systems of EfficientQA \cite{min2020neurips,kwiatkowski2019natural} prove that considering infobox, lists and tables on Wikipedia webpages can lead to performance gains because of the explicit information. However, the structured information is only used as candidates for reading comprehension. \citet{DBLP:conf/acl/LockardSDH20} propose to encode visual elements including layout, font size, and color in the graph attention network to improve the performance of relation extraction on webpages. The document structure is mainly used as an input feature for downstream tasks. Different from their methods, we introduce the document structure in the pre-training process, enabling the model to learn the topic knowledge of paragraphs. \section{Conclusion and Future Work} This paper presents a pre-training approach incorporating unstructured, semi-structured and well-structured text in the same contextual representation space. Specifically, the proposed HKLM models the document structure, relevant knowledge triples and plain text and realizes the interaction between heterogeneous knowledge. We construct 4 tourism NLP datasets, and the experimental results show that the further use of multi-format text resources in pre-training can help improve the performance of downstream tasks. This paper mainly resolves the problem of how to find the correspondence between multi-format text resources in the pre-training process. The innovation of the proposed method lies in the use of entity-oriented heterogeneous knowledge resources. In the future, we plan to apply this method to the general domain.
{ "attr-fineweb-edu": 2.398438, "attr-cc_en_topic": 0, "domain": "arxiv" }
BkiUdB45qdmDApqFPMfj
\section{Introduction} \label{sec:intro} Sport video summarization, or highlights generation, is the process of creating a synopsis of a video of a given sport event that gives the viewer a general overview of the whole match. This process incorporates two different tasks: (1) to detect the most important moments of the event, and (2) organize the extracted content into a limited display time. While the second point is a widely-known problem in the multimedia and broadcasting community, the definition of \emph{what is a highlight} has different interpretations in the community. According to~\cite{hanjalic2005tmm}, highlights are ``those video segments that are expected to excite the users the most''. In~\cite{zhu2007tmm}, the focus relaxes from excitement to general attention, and thus salient moments are the ones that attract audience attention the most. These two definitions would imply to explicitly design specific models for extracting excitment from the crowd in one case and attention on the other. In this paper we overcome this problem by automatically learn visual features using deep architectures that discriminate between highlights and ordinary actions. \begin{figure}[t] \centering \begin{subfigure}[b]{.45\columnwidth} \centering \includegraphics[width=\linewidth]{ex_goal} \end{subfigure} \begin{subfigure}[b]{.45\columnwidth} \centering \includegraphics[width=\linewidth]{ex_nogoal} \end{subfigure} \caption{Example video sequences of a goal event (left) and standard play time (right).} \label{fig:excrop} \end{figure} Traditionally, extracting sport highlights has been a labor intensive activity, primarily because it requires good judgment to select and define salient moments throughout the whole game. Then, highlights are manually edited by experts, to generate a video summary that is significant, coherent and understandable by humans. State-of-the-art artificial intelligence is still far away from having solved the whole problem. In the last years, there has been an increasing demand for automatic and semi-automatic tools for highlights generation, mainly due to the huge amount of data (\emph{i.e}\onedot} \def\Ie{\emph{I.e}\onedot sport event videos) generated every day and made available through the Internet. Specialized broadcasters and websites are able to deliver sport highlights minutes after the end of the event, handling thousands of events every day. As a consequence, there has been extensive research in this area, with the development of several techniques based on image and video processing~\cite{bertini2003mir,chauhan2016ngct,hanjalic2003icip,hanjalic2005tmm,nguyen2014mmsp,tjondronegoro2004mm,zhu2007tmm}. More recently, many works started using additional sources of information to increase performances, including audio recordings~\cite{rui2000acmmm,xiong2003icassp}, textual narratives~\cite{suksai2016icsec}, social networks~\cite{fiao2016ace,hsieh2012icme,tang2012chi}, and audience behavior~\cite{conigliaro2013attento,conigliaro2013viewing,conigliaro2013observing,peng2011tmm}. Despite some solutions are already present on the market, performances are in general still fairly poor and we believe there is room for new research on this topic.\\ While previous work attempted to detect in sport videos actions that stimulate excitement~\cite{hanjalic2003icip} or attract attention~\cite{zhu2007tmm} of the audience, in this paper we reverse the problem by analyzing the audience behavior to identify changes in emotions, that can only be triggered by highlights on the game field. Specifically, we present a novel approach for sport highlight generation which is based on the observation of the audience behavior. This approach is based on the analysis of a set of space-time cuboids using a 3D-CNN architecture. All the samples are trained singularly, the result for each cuboid at a certain time step is then processed through an accumulator which generates a sort of highlight probability for the whole audience that will be used to perform the final ranking. The rest of the paper is organized as follows: in Section~\ref{sec:soa} we briefly present the state-of-the-art in automatic highlight detection. In Section~\ref{sec:method} we detail the proposed methodology, while in Section~\ref{sec:exp} we show some qualitative and quantitative results on a public dataset of hockey matches. Lastly, in Section~\ref{sec:concl} we draw some conclusions and perspectives for future works. \section{Related work} \label{sec:soa} Money and Angius~\cite{money2008vcir} provide an extensive literature survey on video summarization. According to the taxonomy proposend in that paper, related work can be classified into three categories: (1) internal summarization techniques; 2) external summarization techniques; and 3) hybrid summarization techniques. By definition, \emph{internal summarization techniques} rely only on information provided by the video (and audio) streams of the event. These techniques extract low-level image, audio, and text features to facilitate summarization and for several years have been the most common summarization techniques. \emph{External summarization techniques} require additional sources of information, not contained in the video streams. These are usually user-based information --\emph{i.e}\onedot} \def\Ie{\emph{I.e}\onedot information provided directly from users-- and contextual information --such as the time and location in which the video was recorded. As for \emph{hybrid summarization techniques}, both internal and external information are analyzed, allowing to reduce the semantic gap between the low level features and the semantic concepts. \textbf{Social networks.} According to Hsieh \emph{et al}\onedot~\cite{hsieh2012icme}, the quantity of comments and re-tweets can represent the most exciting moments in a sport event. A highlight can be determined by analyzing the keywords in the comments and observing if the number of comments and re-tweets passes a certain threshold. Fi\~{a}o \emph{et al}\onedot~\cite{fiao2016ace} uses emotions shared by the spectators during the match via social networks to build a system capable of generating automatic highlight videos of sports match TV broadcasts. Auxiliary sources of information are TV broadcast videos, the audio, the analysis of the movement and manual annotations (when available). The system also allows for the user to query the video to extract specific clips (\emph{e.g}\onedot} \def\Eg{\emph{E.g}\onedot attacking plays of a specific team). \textbf{Text.} In~\cite{suksai2016icsec}, Suksai and Ratanaworabhan propose an approach that combines on-line information retrieval with text extraction using OCR techniques. This way, they are able to limit the number of false positives. \textbf{Audio.} Rui \emph{et al}\onedot~\cite{rui2000acmmm} presents a method that uses audio signals to build video highlights for baseball games. It analyzes the speech of the match announcer, both audio amplitude and voice tone, to estimate whether the announcer is being excited or not. In addition, the ambient sound from the surrounding environment and the audience are also taken into considerations. Built on this work, Xiong \emph{et al}\onedot~\cite{xiong2003icassp} handpicked the highlight events and analyzed the environment and audience sounds at each of those highlight events. They discovered that there exists a strong correlation between loud and buzzing noise and some major highlight events. This correlation exists in all the three sports being analyzed: baseball, golf, and soccer. \textbf{Audience.} Peng \emph{et al}\onedot~\cite{peng2011tmm} propose the Interest Meter (IM), a system able to measure user's interest and thus use it to conduct video summarization. The IM takes account attention states (\emph{e.g}\onedot} \def\Eg{\emph{E.g}\onedot eye movement, blink, and head motion) and emotion states (\emph{e.g}\onedot} \def\Eg{\emph{E.g}\onedot facial expression). These features are then fused together by a fuzzy fusion scheme that outputs a quantitative interest score, determine interesting parts of videos, and finally concatenate them as video summaries. In~\cite{conigliaro2013attento}, Conigliaro \emph{et al}\onedot use motion cues (\emph{i.e}\onedot} \def\Ie{\emph{I.e}\onedot optical flow intensity and direction entropy) to estimate the excitement level of audience of a team sport event and to identify groups of supporters of different teams. In~\cite{conigliaro2013viewing}, these features are used to identify highlights in team sport events using mean shift clustering. \begin{figure*}[t!] \centering \includegraphics[width=\textwidth]{method} \caption{Sketch of the overall method.} \label{fig:cnn} \end{figure*} \section{Method} \label{sec:method} The proposed highlights detection methodology uses a 3D Convolutional Neural Network (3D-CNN) to extract visual features from video recordings of the audience of the event, and classify them in positive samples (\emph{i.e}\onedot} \def\Ie{\emph{I.e}\onedot when a highlight occurs) and negative samples (\emph{i.e}\onedot} \def\Ie{\emph{I.e}\onedot standard play or timeouts). From empirical observations, the audience reaction of a highlight (\emph{e.g}\onedot} \def\Eg{\emph{E.g}\onedot a goal) lasts for at least the 10 seconds that follows the event itself. For this reason, temporal resolution is not a critical parameter and downsampling the video from 30 to 3 fps allowed us to reduce the computational burden without losing the informative part of the video. The 3D-CNN cuboid is extracted from a manually selected rectangular area that roughly contained the bulk of the audience, using a uniform grid with fixed spatial dimension of 100$\times$100 pixels, while the temporal resolution has been set to 30 frames. These parameters are the result of an a priori intuition that each block should be able to represent a portion of spectators which should not be too large, in order to reduce the computational burden, but at the same time it should not be too small since this would bring to be too much location dependent. For our model we used a sliding window with a stride of 50 pixels resulting in a maximum overlap between two crops of 50\% In order to detect and rank the most important moments in the video sequence we follow the idea of Conigliaro et al. \cite{conigliaro2015cvpr}, where information accumulators along time have been proposed to segment supporters of the two different playing teams. Our goal is however different: unlike them, we are interested in a global analysis of the excitement of the audience regardless of the supporting preference at a certain time. For this reason we are using an accumulator strategy over the whole audience location in the scene. Each spatio-temporal cuboid $C_i$, $i=1,...,N$ represents a sample that is fed into a 3D-CNN and analyzed independently; then, for each time instant the related probability score $p_i$, $i=1,...,N$ of being a positive class is accumulated over all the samples in the spatial dimension, generating a scalar value representing the \emph{Highlight Likelihood} (HL) that is a score representing how likely a particular instant can be considered an highlight or not. A sketch of the overall system is shown in Fig.~\ref{fig:cnn}. \subsection{Network Architecture} Inspired by earlier works on action recognition~\cite{ji2013pami,tran2015iccv}, we use a 3D Convolutional Neural Network composed by 4 convolutional and 3 fully connected layers. The network takes as input video cuboids of 100$\times$100$\times$30, where the first two numbers refer to the spatial dimension while the third is the temporal depth (number of frames). The first two convolutional layers are composed 12 filters 3$\times$3$\times$3, to capture spatio-temporal features from the raw data. These are followed by a 2$\times$2$\times$2 max pooling layer to detect features at different scales. In the latter two convolutional layers, 8 3$\times$3$\times$3 convolutional filters have been used. In all convolutional layers the ReLU activation has been used. The network is then unfolded with a flatten layer followed by three fully connected layers of decreasing dimensionality (32, 8, and 2 neurons respectively). The final classification task is achieved by a softmax layer that outputs the probability of a test sample to belong to each of the two classes: ``highlight'' and ``standard play''. \section{Experiments} \label{sec:exp} In this section we provide both qualitative and quantitative results to validate our proposed methodology. For the evaluation we adopted the S-Hock dataset~\cite{setti2017cviu}, a publicly available dataset composed by 6 ice-hockey games recorded during the Winter Universiade held in Trentino (Italy) in 2013. This dataset, besides a set of short videos heavily annotated on low level features (\emph{e.g}\onedot} \def\Eg{\emph{E.g}\onedot people bounding boxes, head pose, and action labels), it provides also a set of synchronized multi-view full matches with high-level event annotation. In these games, the labeling consist in the time position of meaningful events such as goals, fouls, shots, saves, fights and timeouts. In this work we considered only two matches: the final match (Canada-Kazakhstan) which is used for training the neural network, and the semi-final match (USA-Kazakhstan), used for testing. \subsection{3D-CNN training procedure} As mentioned briefly earlier, the positive class is named ``highlights'' and it represents all the spatio-temporal cuboids starting when a team scores a goal while the negative class (\emph{i.e}\onedot} \def\Ie{\emph{I.e}\onedot ``standard play'') includes other neutral situations happening during the game. In this work we excluded all the other significant annotated events (fouls, fights, etc.) to reduce the number of classes\footnote{These events are indeed generating different types of excitement, we could not investigate further for lack of annotated data, but this is an argument that we consider worthed of further research.}. In training phase the samples belonging to the two classes have been balanced to avoid dataset bias. The S-Hock dataset provides a set of synchronized videos of the games including several views of the audience, at different resolution/zoom level, and of the complete game footage. The video acquisition is done from different points of view (frontal and slightly tilted to the side), in this work we used all these views to ensure a more robust model of training that is able to learn features that are more possibly scale and position invariant. Positive and negative samples are then splitted into training and validation sets with a ratio of 70\%-30\%. Data augmentation procedure has been performed (horizontal flips in the spatial dimension) not only to increase the amount of training data but also to augment the invariance of the network. The final optimization is proposed as a classification problem, minimizing the categorical cross-entropy between the two classes. For this procedure we used the \emph{RMSprop} algorithm, a generalization of the resilient backpropagation \emph{rprop} algorithm that extends the ability to use only the sign of the gradient and to adapt the learning rate separately for each weight, to improve the work with minibatches. In our experiments we use minibatches of 64 samples each. A Dropout layer with 50\% probability to disconnect the link is applied before the first two fully connected layers to reduce overfitting. The procedure iterates over the whole dataset until convergence, usually reached after about 10 epochs. The whole training procedure takes about 2 hours on a machine equipped with a NVIDIA Tesla K-80 GPU, using Keras/TensorFlow framework. The whole resulting dataset is composed of a total of 32,000 training samples. \subsection{Quantitative Results} \begin{figure}[t] \centering \includegraphics[width=.5\columnwidth]{res_roc} \caption{ROC curve} \label{fig:res_roc} \end{figure} \begin{figure} \centering \begin{subfigure}[b]{.49\textwidth} \centering \includegraphics[width=\columnwidth]{res_time_1} \end{subfigure} \begin{subfigure}[b]{.49\textwidth} \centering \includegraphics[width=\columnwidth]{res_time_2} \end{subfigure}\\ \vspace{1em} \begin{subfigure}[b]{.48\textwidth} \centering \includegraphics[width=\columnwidth]{res_time_3} \end{subfigure} \caption{Summed probabilities of highlights over all the crops in the scene. As visible, peaks in the curve nicely corresponds to a highlight.} \label{fig:res_time} \end{figure} Here we report a quantitative performance evaluation of the 3D-CNN in detecting positive and negative highlight samples. From the second period of the testing game, we randomly selected 3000 positive samples as well as the same number of negative samples and we fed them into the trained network. In Fig.~\ref{fig:res_roc} the ROC curve is reported. The Area Under the Curve (AUC) is 0.87. Binary classification is performed by assigning the sample to the class corresponding to the higher score; under this conditions the network reaches 78\% of accuracy, 69\% of precision and a recall of 84\%. Results themselves are quite good considering the difficulty of the task, however, our goal is different, since we are using those results in a more sophisticated framework to infer and rank interesting events during the whole game. Consequently we expect a certain amount of noise in such prediction since in many cases the sample may be partially filled with empty seats (see Fig.~\ref{fig:res_dots} ), producing a wrong or at least biased prediction toward the negative class. However, this problem is minimized with the use of the accumulator approach and due to the fact that the empty-seats location will be very little informative in the whole sequence, while the crowded locations, where most of the spectators are situated, will convey most of the information used for the final decision. \subsection{Qualitative examples} We also provide qualitative results to validate our approach. Fig.~\ref{fig:res_time} shows the HL score, summed over all the cuboids, at every non overlapping 10-second slice during an entire match (3 periods of 20 minutes plus timeouts). Goals are clearly identified in the first two periods, while in the third one other events also trigger the audience behavior; in particular, there are two prominent events that don't correspond to goals at 18:45 (which is caused by a player almost scoring) and at 28:15 (which is caused by a foul in front of the goaltender, and the resulting penalty). We can easily see that there is a correlation between HL score and important events in the game and that goals usually cause the biggest reaction on the spectators. \begin{figure} \centering \begin{subfigure}[b]{.47\textwidth} \centering \includegraphics[width=\columnwidth]{res_dots_pos} \caption{Highlight} \end{subfigure} \hspace{.04\textwidth} \begin{subfigure}[b]{.47\textwidth} \centering \includegraphics[width=\columnwidth]{res_dots_neg} \caption{Standard play} \end{subfigure} \caption{Probability scores given by a subset of the crops (chosen to be non overlapping for visualization purposes); each dot represents a crop which describes part of the scene. Green dots represent crops classified as people reacting to an highlight (\emph{e.g}\onedot} \def\Eg{\emph{E.g}\onedot cheering) while the red dots represent the crops classified as people with a "standard" behavior.} \label{fig:res_dots} \end{figure} \section{Conclusions} \label{sec:concl} In this paper we propose a method to temporally locate highlights in a sport event by analyzing solely the audience behavior. We propose to use a deep 3D convolutional neural network on cuboid video samples to discriminate between different excitement of the spectators. An spatial accumulator is used to produce a score which is proportional to the probability of having an interesting highlight in that precise time location. This enables the model to identify goals and other salient actions. Despite being very simple, the model we present provides good preliminary result on a public dataset of hockey games, encouraging further research based on this approach. In our opinion, the main limit of this model is in the way we take into account the temporal information; indeed we extend a standard CNN to work with 3D data, where the third dimension is time. A more sophisticated model, such as recurrent neural networks (RNN) and long-short term memory (LSTM), could benefit the final inferential results. As future work we intend to replace the accumulator with such a temporal model, expanding the classification to a multiclass problem in order to detect different events. In order to do so, the dataset has to be enlarged possibly on a different location to make sure the network is learning more general discriminative features. {\small \bibliographystyle{splncs03}
{ "attr-fineweb-edu": 2.087891, "attr-cc_en_topic": 0, "domain": "arxiv" }
BkiUbAA4dbgj407o0vuE
\section{Introduction} In the travel industry, the volume of sales mainly relates two factors like click rate (i.e., inquiry rate) and translation of it to conversion rate. Multiple factors can hinder the customers from purchasing a ticket in spite of searching for a flight itinerary. The click to conversation rate can be increased if certain number of offers can be recommended to a certain set of customers. Therefore, personalized recommendation based on similarity of customers can be very effective to improve the business strategy of the travel company. The customers can be segregated depending on various features like days of advance flight booking, distance covered during the travel, number of their children. Hence, clustering algorithms have a major role in order to segment the customers in a better way. Clustering algorithms \cite{jain99} are used in travel context to find sets of customers with similar needs and requirements, and to identify hidden relationships between their search query. However, traditional clustering algorithm results depend significantly on the algorithmic configuration used, i.e., the algorithm chosen and its parameterization, and its adequacy to the data space properties. Hence, clustering the same set of customers with different algorithmic configurations can produce significantly different solutions. \begin{wrapfigure}{r}{0.34\textwidth} \centering \includegraphics[width=0.34\textwidth]{Data_Space_Distribution.png} \caption{Bi-dimensional data space showing different underlying clustering models.} \label{Fig:data_space} \end{wrapfigure} Choosing an adequate algorithmic configuration, and defining the final number of clusters as required by most existing clustering approaches, are major practical issues when the required prior knowledge of the data space properties is unavailable. Consider for example the bi-dimensional data space represented in Fig.~\ref{Fig:data_space} that shows different sub-spaces where groups of objects correspond to different underlying clustering models, e.g., centroid, density or model-based. Consensus clustering approaches combine multiple clustering results, obtained from diverse algorithmic configurations, to generate a more robust clustering solution. By combining the results of algorithms based on different modeling assumptions and for different parameterizations, that is identifying agreements between these \emph{base clusterings} and quantifying the weight of repetitive groups of objects among clusters, these groups of objects can be detected. Therefore, consensus clustering, that was shown to be an effective approach to generate quality clustering solutions \cite{ayad2008,strehl2002,ZhongYZL15}, is an interesting solution for travel context where little prior knowledge of the evolving data space is available. However, to the best of our knowledge, no study on the integration of consensus clustering for better personalized recommendation in travel context was reported in the literature. In this paper, we study the integration of consensus clustering through a multi-objective optimization process for the clustering of flight search queries. This process aims to optimize the selection of flight recommendations that are returned by the Amadeus flight search engine. For that purpose, a clustering solution is used to segment the space of customers, so that the search engine is optimized independently for each cluster, and customers with different needs and requirements are provided with different recommendations. In this context, we have no prior knowledge about the data space modeling assumptions, like data distribution or natural number of clusters, and choosing an appropriate algorithmic configuration is an important issue. Using classical clustering approaches requires to apply the different clustering algorithms with different parameterizations multiple times and compute the Amadeus business metric for each clustering solution, implying numerous time consuming computations. This metric is the difference between the estimated booking probability of the flight recommendations output by the search engine before and after the optimization of the flight search engine. In order to reduce computation cost and optimize the search space exploration, i.e., the potential consensus clusterings, we propose a new ensemble clustering framework. The clustering ensemble problem is usually posed as an optimization problem where the average similarity of consensus solution with the base clusterings is maximized in order to obtain a better aggregation. However, since a consensus solution can be very similar to one base clustering whereas very distant from others, and to remove any kind of bias toward a particular clustering solution, minimizing the standard deviation of these similarity values is also necessary. The proposed framework uses a multi-objective clustering ensemble approach that simultaneously optimizes two objective functions \cite{ChatterjeePasquier:2019}: The maximization of the similarity between the consensus solution and the base clusterings, and the minimization of the standard deviation of these similarities for a consensus solution. A formal proof is provided demonstrating that the proposed approach automatically generates a number of clusters at least as appropriate as approaches that consider only co-occurrences of two objects in base clusters for generating a consensus clustering. This framework integrates the ensemble clustering solution and a mapping function to categorize a new customer in an appropriate cluster to improve the Amadeus flight search recommendation. We present an extensive analysis, by generating diverse sets of base clusterings from multiple perspectives, to demonstrate how the ensemble method performs in terms of consensus solution for customer segmentation as well as improving the Amadeus business metric for better personalized flight recommendation. Experimental results with comparative analysis of state-of-the-art methods show that it performs well in majority cases. Interestingly, it automatically produces the number of clusters which is close to the best number of cluster produced by other methods that require to specify it each time as input. \section{Related Work} Over the last few years, clustering ensemble has been employed as a useful tool to overcome the drawback of classical clustering algorithms by deriving better clustering solutions by consensus. For a given dataset, the clustering ensemble of base clusterings can be produced by applying the same clustering algorithm multiple times with different parameterizations, via sub sampling, or projection of the dataset into different sub-spaces. The primary objective of ensemble clustering algorithms is to combine base clustering solutions in such a way that a robust solution is generated to improve the quality of the results compared to base clustering solutions. Different approaches have been developed to address this topic \cite{Alqurashi2018,strehl2002,ayad2008,Fred:2002,Liu2014AWS,ZhongYZL15}, and the various state-of-the-art methods can be classified into some major categories described hereafter. Several approaches consider the clustering ensemble problem as a clustering of categorical data \cite{Nguyen2007}. Another category of clustering ensemble methods relies on generating a pair-wise similarity matrix. This similarity matrix basically considers the co-association between the objects occurring together in a same cluster for a clustering solution \cite{Fern2003}. An alternative method does not rely on an object-object co-association matrix but derives a consensus solution from a cluster association matrix \cite{Mimaroglu2012}. Other approaches consider clustering ensemble as a graph, or hypergraph, partitioning problem, and various graph partitioning algorithms were proposed to obtain the consensus solutions \cite{Fern2003,strehl2002}. Among graph partitioning based approaches, Strehl and Ghosh modeled this as a hypergraph partitioning problem \cite{strehl2002} and proposed three partitioning approaches: (i) Cluster-Based Similarity Partitioning Algorithm (CSPA), (ii) Hypergraph Partitioning Algorithm (HGPA) and (iii) Meta Clustering Algorithm (MCLA). Another recently proposed graph partitioning based novel consensus function namely, weak evidence accumulation clustering (WEAC) and four variants of it \cite{Huang:2015} outperform several other existing baseline approaches. The first three variants are basically agglomerative methods, namely average-link (AL), complete-link (CL) and single-link (SL). The last one GP-MGLA is a graph partitioning based consensus method. However, in all these approaches, the number of clusters is required as input. Among other studies, the approach proposed in \cite{Mimaroglu2012} effectively generates a consensus clustering with an automatically defined number of clusters. This approach visualizes the base clustering solution as an undirected weighted graph, and Prim's algorithm is adapted to make a minimum-cost spanning tree of the weighted graph. Another approach proposed by the same authors also generates automatically the number of clusters, but requires to specify a relaxation parameter \cite{Mimaroglu:2010}. \section{Problem Formulation} Let $X = \{x_1, x_2, \ldots, x_p\}$ denote a set of $p$ customers, where $x_i\in \mathbb{R}^d$, $d$ the number of features used to describe the search query and $Y$ be the set of $n$ clustering algorithms. Here each $x_j$ denotes the customer who searches a query for flight booking. Suppose $C = \{c_1, c_2, \ldots, c_n\}$ be the set of base clustering solutions obtained after applying $n$ clustering algorithms. Each $c_i$ partitions $p$ customers into $k_i$ clusters such that $c_i = \{x_1^{i}, x_2^{i}, \ldots, x_p^{i}\}$, where $c_i$ belongs to the set of all possible partitions of $X$ and $x_j^{i}$ denotes the label of $j^{th}$ customer according to the $i^{th}$ clustering: $x_{j}^i \in \{1, 2,\ldots,k_i\}$ $\forall$ $j \in \{ 1, 2, \ldots, p \}$. Note that, each $c_i$ might comprise different number of clusters $k_i$. Here, the goal is to derive the best aggregated ensemble solution from the base clustering ones, while automatically determining the number of clusters. In the subsequent sections of the paper we use both the terms customers and objects in an analogous way. \section{Clustering Ensemble Framework} The proposed framework integrating consensus clustering optimization in Amadeus flight search engine is presented in Fig.~\ref{Fig:framework}. The upper part of the chart shows the creation of the search space, that is the Refined Ensemble Clustering, from the dataset. The lower part of the chart shows the semi-supervised classification process for learning a customer classification model using consensus clusters multi-objective optimization, combined internal and external validation, characterization of cluster segments and prediction of recommendation class. \begin{figure}[hbt] \centering \includegraphics[width=0.9\linewidth]{Framework_Flowchart.png} \caption{Semi-supervised classification framework for customer segmentation.} \label{Fig:framework} \end{figure} The central four phases of the proposed ensemble clustering framework are detailed in the following subsections. Initially, the base clustering solutions are refined by estimating the maximum number of clusters that can be present in the base clustering. To estimate this maximum number of clusters, a weighted co-association matrix \cite{Fred:2002} is constructed and an iterative process is applied on this matrix (Sec~\ref{subsection:matrix}). Then, based on the estimated number of clusters, the base clustering solutions are relabeled according to a reference clustering solution (Sec~\ref{subsection:label}), and thus some solutions are refined. Ultimately, we combine the original base clustering with the refined set of clustering solutions and thus this ensures the diversity of the clustering solutions. In this step, we use a NSGA-II based multi-objective optimization \cite{KDeb2002} to overcome the important complexity of the search for a single solution that is optimal in terms of all the objective functions. Hence, in terms of multi-objective optimization, optimality is usually denoted using the concept of Pareto optimality \cite{KDeb2002}. A Pareto optimal set basically contains multiple solutions making a trade-off between multiple objectives. This NSGA-II method is applied on the set of refined clustering solutions to produce a final set of non-dominated Pareto optimal solutions (Sec~\ref{subsection:multi-objective}). Finally, the consensus solution is integrated in the Amadeus application using a mapping function for classifying new customers and make the Amadeus flight search engine return more personalized recommendations (Sec~\ref{subsection:mapping}). \subsection{Weighted Co-association Matrix based on Confidence} \label{subsection:matrix} In this process, instead of a classical co-association matrix, a weighted co-association matrix is constructed based on different factors like quality of the clustering solution and pair-wise confidence of two objects. The quality of a solution is measured by the average similarity of this solution in terms of Adjusted Rand Index (ARI) \cite{hubert:1985} with respect to the other solutions. To exemplify the pair-wise confidence of the objects remaining in a same cluster, suppose we have two clustering solutions of 9 objects with respectively 3 and 8 clusters represented as clusterings \{1,1,1,2,2,2,3,3,3\} and \{1,2,3,4,4,5,6,7,8\}. Two objects with the same label means that they are assigned to the same cluster in the clustering. We can see that the $4^{th}$ and $5^{th}$ objects are in the same cluster in both clustering solutions. However, since clustering solution 2 contains a larger number of clusters, the selection criteria for clustering is more restrictive than in solution 1. This means that the confidence of assigning two objects to the same cluster varies depending on the number of partitions in the clustering solution. A higher number of partitions means a better confidence in the grouping of objects in the same cluster. Therefore, both the quality and the number of clusters of a clustering solution are considered to build the weighted co-association matrix. This co-association matrix can be treated as a similarity matrix, where the edge weights depend on these two factors, being calculated on the basis of how many solutions agree to group two particular objects in the same clusters. Furthermore, the quality of the solution, in terms of average similarity with respect to the other solutions, and the confidence of two objects being members of the same clusters are also considered. In this process, the solutions that contain a higher number of clusters are given more weight than clustering solutions having less number of clusters. Besides, quality being a major issue in this purpose, therefore more weight is imposed on the quality metric (i.e., similarity) than the number of components. The weight due to the confidence and quality are added to compute the final weight as their similarity. If $n$ is the number of base clustering and label of each object $j$ is represented by $r_j$, then the similarity $Sim(i,j)$ of two objects $i$ and $j$ is represented by using Eqn. \ref{Eq:coassociation_metric}. \scriptsize{ \begin{equation} Sim(i,j) = \sum_{p=1}^{n} (I(r_i=r_j)*cluster(p)) + 2*w \sum_{p=1}^{n} (I(r_i=r_j)*weight(p)) \label{Eq:coassociation_metric} \end{equation}} \normalsize Here, $I$ represents an indicator function and it returns $1$ when the two objects have same labels, otherwise it returns 0. In this Eqn~\ref{Eq:coassociation_metric}, $cluster(p)$ is the number of clusters in the $p^{th}$ solution and $weight(p)$ is the weight measured by the similarity of $p^{th}$ solution with other base clustering solutions. Due to the difference of two ranges, i.e., variation in number of clusters and similarity value in terms of ARI, $w$ is used to make these two ranges in the same scale. To illustrate, consider there are $n$ number of base clustering solutions. So, the similarity value of each solution can be computed by comparing it with other $(n-1)$ solutions using ARI metric to obtain $(n-1)$ similarity values. The average similarity value is calculated from these $(n-1)$ similarity values. At the same time, the number of clusters present in each clustering solution can be computed, and we thus measure the average number of clusters that can be present in a clustering solution. Now, $w$ is calculated by the ratio between these two values (i.e., ratio of average number of clusters with respect to the average similarity value). The similarity matrix is then transformed into a graph where objects are vertices and edge weights can be regarded as the strength of bonding between the objects. This similarity matrix is then transformed into an adjacency matrix according to a threshold value, and a minimal threshold value is chosen by $(1/t)$ fraction of maximum edge weight. In this step, the value of $t$ should be chosen in such a manner that each object cannot form a separate cluster. In our experiments, the value of $t$ was set to 10. Then, the threshold value is increased by a small amount, and the induced deletion of edges produces another adjacency matrix. From this adjacency matrix, the strongly connected components are extracted and the number of connected components is observed. This task is repeated for a number of time while varying the threshold value in step-wise manner (step size). This fixed step size is chosen based on sensitivity analysis such that each object cannot produce a separate cluster. Therefore, in every step the adjacency matrix is formed and the number of connected components are derived. It should be noted that the number of connected components remains fixed for certain threshold values. Hence, obtaining the same number of connected components over successive iterations denotes the stability of this particular number of connected components since several clustering solutions agree regarding that specific number of components. A sorting over the number of connected components is performed depending on the stability values and the value is chosen from the clustering solutions having the highest similarity value. As earlier mentioned, the ARI \cite{hubert:1985} is used to measure the similarity of a clustering solution with respect to the others. The final number of clusters cannot be directly selected from the base clustering with highest average similarity since a few clustering solutions can have a high similarity but have a number of connected components without sufficient stability. Finally, the initial labeling of the base clusterings are changed according to this estimated number of clusters as described in the following subsections. \begin{theorem} The weighted co-association matrix of a clustering solution being calculated from the numbers of co-occurrences of two objects in the same cluster, the confidence of objects co-occurrences, and the quality of the clustering solution, the approach will generate a number of clusters that is at least as close to the number of clusters in the best potential agreement than the number of clusters calculated using a simple co-association matrix where only the numbers of co-occurrences of two objects in clusters are considered. \end{theorem} \begin{proof} Given two clustering solutions $S_p$ and $S_q$ with $m$ and $n$ number of clusters respectively, suppose the confidence of two objects $i$ and $j$ of remaining in same cluster are $Conf^{m}_{S_p}$ and $Conf^{n}_{S_q}$. Now according to the proposed model, the values in the co-association matrix not only depend on the number of co-occurrences of two objects being in a same cluster. Rather, along with this, the confidence of two objects and the quality of the solutions are also considered. Suppose $G$ and $G^{\prime}$ are the graphs constructed from normal co-association matrix (where only count of co-occurrence of two objects lying in same clusters is considered) and weighted co-association matrix (where co-occurrence, confidence and quality of the clustering solutions are considered). Accordingly to the proposed model, the confidence of two objects remaining in the same clusters is higher for clusterings with a greater number of partitions than for clusterings with lower number of partitions. Therefore, we write $Conf^{m}_{S_p}(i,j) \ge Conf^{n}_{S_q}(i,j)$ where $m > n$ and if $(i,j) \in C^{m}_{S_p}$ then $(i,j) \in C^{n}_{S_q}$. Here, $C^{m}_{S_p}$ and $C^{n}_{S_q}$ are two clusters in clustering solution $S_p$ and $S_q$ where the two objects $i$ and $j$ are the members. Now, the weighted co-association matrix is constructed based on confidence, co-occurrence and quality of the clustering solution. The edges are iteratively removed by increasing repeatedly the threshold $\delta_t$ by a very small amount and observing each time the number of connected components obtained. This number of connected components basically denotes the number of clusters. Let $N_e(G)$ and $N_e(G^\prime)$ be the number of edges deleted from graphs $G$ and $G^\prime$ each time. Due to the weight in $G^\prime$ as mentioned previously, it can be written that, $\forall \delta_t$: $N^{\delta_t}_e(G) \ge N^{\delta_t}_e(G^\prime)$, where $N^{\delta_t}_e(G)$ and $N^{\delta_t}_e(G^\prime)$ denote the number of edges $e$ with same weight $\delta_t$ in graph $G$ and $G^\prime$, respectively. Therefore, $\forall \delta_t$: $N^{\delta_t}_c(G) \ge N^{\delta_t}_c(G^\prime)$, where $N^{\delta_t}_c(G)$ and $N^{\delta_t}_c(G^\prime)$ are the number of connected components extracted from $G$ and $G^\prime$ for each value of threshold $\delta_t$. We can thus compare the changes in the number of connected components for any two successive iterations in these two graphs, and we have $\frac{d(N_c(G^\prime))}{dt} \le \frac{d(N_c(G))}{dt}$ for any two successive iterations. Hence, we obtain a number of clusters that is at least as close to the number of clusters in the best agreement than the number of clusters obtained using a classical co-association matrix, where only the co-occurrences of two objects in a same cluster are considered. \end{proof} \subsection{Label Transformation} \label{subsection:label} Label transformation aims to unify the labels of common clusters among different clustering solutions. Let's consider two clustering solutions $c_i =$ $\{x_{1}^{i},$ $x_{2}^{i},$ $\ldots,$ $x_{p}^{i}\}$ and $c_j = \{x_{1}^{j}, x_{2}^{j},$ $\ldots,$ $x_{p}^{j}\}$ that partition the $p$ customers into $k$ and $m$ clusters respectively. Here, each $x_{p}^{i}$ denotes the labeling of the $p^{th}$ object according to $c_i$ clustering solution. However, since the cluster labels are symbolic and generated by different processes, there is no correspondence between them in the different clusterings. In order to represent the clustering solution $c_i$ according to the representation of clusters in solution $c_j$, the number of objects with different labels in $c_j$ and $c_i$ is counted for each label in $c_i$. Majority voting is then performed among the labels to determine the final corresponding label in $c_i$. After estimating the most likely number of clusters, the base clustering solutions containing a higher number of clusters than the estimated value are transformed based on a reference solution containing this estimated number of clusters. As mentioned in the last section, the final number of cluster is selected based on the quality of the clustering solutions (highest similarity) of the input clustering. Hence, at least one clustering solution with that specific number of cluster should be present. To ensure the diversity in the search space, the original base clustering solutions are combined with the refined set of clustering solutions. After that, the multi-objective optimization algorithm is applied to derive the non-dominated Pareto optimal consensus solutions. \subsection{Multi-objective Optimization Algorithm} \label{subsection:multi-objective} In this subsection, we outline the utilization of NSGA-II \cite{KDeb2002} with an aim to produce non-dominated near-Pareto optimal solutions. This is explained step-wise below. \begin{itemize} \item[$\bullet$] {\bf Encoding Scheme.} Here, the parameters in the search space are represented in the form of a string (i.e., chromosome). Each chromosome represents a clustering solution and chromosomes are encoded with integer value denoting the class label of each object. To exemplify, a chromosome is encoded like $\{r_1, r_2, \ldots, r_n \}$, where $r_i$ represents the class label of $i^{th}$ object. Suppose the encodings of two chromosomes are $\{3,3,2,2,2,1,1,1\}$ and $\{2,2,3,3,3,1,1,1\}$. Here both chromosomes represent the same clustering solutions where objects $\{1,2\}$ are in the single cluster, objects $\{3,4,5\}$ are in another clusters, and objects $\{6,7,8\}$ are in other clusters. \item[$\bullet$] {\bf Initial Population.} In the initial population, the base clustering solutions obtained after applying different clustering algorithms are taken. In addition to that, the clustering solutions after refinement are included in it. In this way, the diversity of the clustering solutions is maintained. Note that, the number of clusters in each solution of the initial population is not necessary the same. \item[$\bullet$] {\bf Selection.} Each chromosome is associated with a fitness function that corresponds to the amount of goodness (fitness value) of the solution encoded in it. The competent chromosomes are selected for further breeding depending on the concept of survival of ``fittest''. In this context, crowded binary tournament is selected as the selection strategy \cite{KDeb2002}. \item[$\bullet$] {\bf Crossover.} Crossover is a probabilistic procedure to exchange information between two parent chromosomes. In this paper, we use the same crossover operation as described in \cite{chatterjee2013}. Note that, in the clustering ensemble problem, we cannot directly apply the crossover operation. The reason is that two chromosomes can depict the same clustering solution (same fitness value) but with different representations. Then, applying single point/multi point crossover can distort the quality of these solutions. To explain this in more details, suppose there are two chromosomes representing same solutions $\{2,2,2,1,1,3\}$ and $\{3,3,3,2,2,1\}$. If single point crossover is performed in $3^{rd}$ position, then the new chromosomes become $\{2,2,3,1,1,3\}$ and $\{3,3,2,2,2,1\}$. Hence, the fitness values are decreased although the original solutions were the same. To overcome this limitation we use a bipartite graph based approach described in \cite{chatterjee2013}. \item[$\bullet$] {\bf Mutation.} In this operation, each chromosome goes through mutation with a slight probability $M_p$. Here, a small float value is added or subtracted to the label of the chromosomes. Note that, as the label of each object is an integer, after the mutation operation the float values obtained are converted into the nearest integer values. \end{itemize} In this algorithm, the two following objective functions are simultaneously optimized. The first is based on the ARI measure to consider the similarity of a clustering solution with other solutions. The second is based on the standard deviation of similarity values to identify potential bias toward a specific clustering solution. Therefore, the objective functions are maximization of ARI similarity values and minimization of standard deviation among similarity values for a clustering solution with respect to other clustering solutions. \subsection{Mapping Function} \label{subsection:mapping} This application aims to classify new customers in order to customize flight recommendations according to the customer's class. Since no labeled data exist, the consensus clustering solutions are used in a semi-supervised manner. Consensus clusters, corresponding each to a customer segment, are characterized to discriminate them, and new customers are classified by assigning them to the cluster they are the most similar to. The characterization of each cluster is represented as the center point of the cluster from the sample features. That is a mean vector of feature values of samples in the cluster. The similarity between a new customer and each cluster is then computed based on these vectors, and the customer is assigned to the cluster with maximal similarity. The mapping function for clustering solution $C^i$ is denoted by $f:x\mapsto \mathrm{argmin}_{\{1 \leq l \leq k \}} (d(\gamma_l,x))$, where $x$ is a new customer, $k$ is the number of clusters in the solution, and $\gamma_l = \sum_{j=1}^p I(l - x^i_j) x_j / \sum_{j=1}^p I(l - x^i_j) $ is the centre of cluster $l$ where $I(x) = 1$ for $x = 0$ and 0 $\forall x \in \mathbb{R}^*$. \section{Experimental Design \& Results} In this section, we first detail the Amadeus dataset. Then, we discuss the analysis of the proposed model with comparison to other existing models. In the experiments, the crossover rate is 0.9, mutation rate is 0.01 and population size is twice the number of the base clustering solutions. \subsection{Dataset Description and Preprocessing} To prepare the dataset, we extracted search queries of flight bookings for flights departing from the US during one week on January 2018. There are 9 relevant features: Distance between two airports, geography, number of passengers, number of children, advance purchase, stay duration, day of the week of the departure, day of the week of the return, and day of the week of search. In ``Geography" the values are 0 for domestic flights (origin and destination are in the same country), 1 for continental flights (origin and destination belong to the same continent), and 2 for intercontinental flights. As this dataset contains a very large number of customers (in millions), and as many of them have very similar feature values, the populations are divided into some strata based on similar characteristics. Then sampling is performed on these sub-population to generated a stratified sampling of the whole dataset while preserving the distribution properties of the original dataset. For example, snapshots of the distribution of the ``distance between airports'' feature values in the original dataset and in sample datasets are shown in Fig. \ref{Fig:Distribution_original_data}. Finally, three stratified sample datasets were generated with sizes of 500, 1000 and 1500 samples. \begin{figure}[hbt] \centering \includegraphics[width=.32\textwidth]{histogram_distance_Original_3.jpg} \includegraphics[width=.32\textwidth]{histogram_distance_reduced_1000_3.jpg} \includegraphics[width=.32\textwidth]{reduced_data_distance_hist_1500_3.jpg} \caption{Snapshot of the distribution of feature ``distance between two airports'' for original dataset (Left), sample of size 1000 (Middle) and sample of size 1500 (Right) after stratified sampling.} \label{Fig:Distribution_original_data} \end{figure} \subsection{Definition of Base Clusterings} \label{subsection:base} After generating the sampled datasets, the $K$-means algorithm was applied, with feature random subspace selection, for different values of $K$ to generate base clusterings with different numbers of clusters. Two sets of clustering solutions were generated for each sample-sized dataset. As there is no ground truth available for these datasets, the number of clusters parameter $K$ was not kept fixed, but was instead varied between two to eight while applying the clustering algorithms for generating the base clustering solutions. In the proposed approach, there is no constraint on the $K$ parameter values that are used to generate base clusterings. As described earlier, for each of the three sample-sized datasets, two different sets of base clusterings were generated. The first set of base clustering solutions was generated by applying $K$-means ten times, fixing the $K$ parameter for five (or six) of them, and varying the $K$ parameter to generate the remaining five (or four) clustering solutions. To generate the second set of base clustering solutions, the value of the $K$ parameter was varied for each $K$-means execution. \subsection{Experimental Results} \label{subsection:experimental_results} In the first experiment, the accuracy of consensus solutions produced by different methods is measured using the classical ARI metric. Since in this context no ground truth is available, internal validation of the solution is performed by comparing it to the base clustering solutions. The average similarity of the consensus solution with the base clustering solutions denotes the goodness of the method. We made an extensive analysis on different datasets and compared the algorithm with other state-of-the-art methods described in \cite{Huang:2015,strehl2002,Mimaroglu2012}. Since the weak evidence accumulation clustering (WEAC) method and four of it variants \cite{Huang:2015} were reported to outperform similar existing other approaches, we choose it as one of the methods for comparison purpose. The performance of the proposed model were also compared with the well-known classical methods like CSPA, HGPA, MCLA \cite{strehl2002}. If a very limited number of ensemble algorithms automatically estimates the number of clusters, the promising method DiCLENS \cite{Mimaroglu2012} was also used for comparison purposes. Specifically, DiCLENS is the most appropriate for comparison purposes as it produces the number of cluster automatically, similarly to the proposed method. The results on Amadeus travel dataset for the different samples sizes are given in Tables \ref{table:result_1}-6. In Tables \ref{table:result_1}-6, the consensus solution produced by each algorithm is compared with all the base clustering solutions in terms of ARI. The average similarity obtained is reported in the tables along with the highlighting of the two best scores. As mentioned before, we generate two sets of base clustering solutions for each sample (as described in section \ref{subsection:base}). It can be realized that the better solution can be achieved if there is a knowledge about the number of cluster or passing $K$ value each time while executing the algorithm. In Tables \ref{table:result_1}-3, the result obtained for the first set of base clustering on the three sample datasets are reported and effectiveness of the method is shown. In Tables 4-6, the results obtained for the second set of base clustering for each sample, where each clustering solution contains different number of clusters which makes the problem more difficult, are reported. It can be observed that for both set of experiments, the proposed method gives consistently good performances even though the number of clusters is not given as input. Furthermore, the proposed method produces in several cases the same best quality clustering solution produced by the other methods. Also, the number of clusters predicted automatically by the proposed method is very close to the one that generates the best ensemble among all methods. Besides, defining the number of clusters for executing the other state-of-the-art approaches (like WEAC, CSPA, MCLA, etc) is extremely difficult because there is no initial knowledge of the value upto which the number of cluster should be varied. It is seen that in majority cases the proposed approach outperforms DiCLENS and produces equally good solutions when compared with any other state-of-the-art approaches in terms of ARI measure. The non-dominated Pareto front where each point corresponds to the rank-1 solution for different data are demonstrated in Fig. \ref{Fig:plot}. In these Pareto front all of the solutions are equally good, however, the final solution is selected based on the highest similarity with the base clusterings. \begin{table}[hbt!] \tin \parbox{.48\linewidth}{ \centering \caption{Performance values on the base clustering with 500 samples. Here, six out of ten input clustering solutions contain five clusters and the other solutions contain three, four, six and seven clusters.} \label{table:result_1} \begin{tabular}{|c|c|c|c|c|c|c|} \hline Algorithm & K=3 & K=4 & K=5 & K=6 & K=7 & K=8\\\hline CSPA & 0.5919 & 0.5633 & 0.5539 & 0.6472 & 0.5388 & 0.4778 \\\hline MCLA & 0.6117 & 0.7293 & 0.8218 & 0.7217 & 0.8066 & 0.6263 \\\hline HGPA & 0.2176 & -0.0052 & 0.3447 & 0.2692 & 0.2388 & 0.0089 \\\hline WEAC-SL & 0.3972 & 0.6654 & {\bf 0.8275} & 0.8056 & 0.7924 & 0.7770 \\\hline WEAC-AL & 0.3637 & 0.5964 & {\bf 0.8275} & 0.8066 & 0.7917 & 0.7683\\\hline WEAC-CL & 0.6001 & 0.6654 & {\bf 0.8275} & 0.8149 & 0.8002 & 0.6913\\\hline GP-MGLA & 0.6001 & 0.6939 & {\bf 0.8275} & 0.7240 & 0.6995 & 0.6731 \\\hline DiCLENS & -- & -- & {\bf 0.8275} & -- & -- & -- \\\hline Proposed & -- & -- & {\bf 0.8275} & -- & -- & -- \\\hline \end{tabular} } \hfill \parbox{.48\linewidth}{ \centering \caption{Performance values on the base clustering with 1000 samples. Here, six out of ten input clustering solutions contain seven clusters and the other solutions contain three, four, five and six clusters.} \begin{tabular}{|c|c|c|c|c|c|c|} \hline Algorithm & K=4 & K=5 & K=6 & K=7 & K=8 & K=9\\\hline CSPA & 0.5132 & 0.5376 & 0.7162 & 0.7044 & 0.6201 & 0.5814 \\\hline MCLA & 0.6025 & 0.6139 & 0.7822 & 0.8173 & 0.8139 & 0.7455 \\\hline HGPA & -0.0030 & 0.2010 & 0.3302 & 0.4642 & -0.0048 & -0.0049 \\\hline WEAC-SL & 0.4768 & 0.6188 & 0.7490 & {\bf 0.8177} & 0.8140 & 0.8020 \\\hline WEAC-AL & 0.3353 & 0.5507 & 0.7490 & {\bf 0.8177} & 0.8166 & 0.8043\\\hline WEAC-CL & 0.6025 & 0.7184 & 0.7490 & {\bf 0.8177} & 0.8166 & 0.7964\\\hline GP-MGLA & 0.6047 & 0.7184 & 0.7583 & {\bf 0.8177} & 0.7975 & 0.7788 \\\hline DiCLENS & -- & 0.7183 & -- & -- & -- & -- \\\hline Proposed & -- & -- & & -- & {\bf 0.8177} & -- \\\hline \end{tabular} } \end{table} \begin{table}[hbt!] \tin \parbox{.48\linewidth}{ \centering \caption{Performance values on the base clustering with 1500 samples. Here, six out of ten input clustering solutions contain five clusters and the other solutions contain three, four, six and seven clusters.} \label{table:result_2} \begin{tabular}{|c|c|c|c|c|c|c|} \hline Algorithm & K=3 & K=4 & K=5 & K=6 & K=7 & K=8\\\hline CSPA & 0.4464 & 0.3896 & 0.4580 & 0.4002 & 0.3865 & 0.3504 \\\hline MCLA & 0.4494 & 0.5476 & 0.4962 & 0.4352 & 0.3622 & 0.3589 \\\hline HGPA & -0.0009 & 0.2474 & 0.3913 & 0.3544 & 0.2761 & 0.2743 \\\hline WEAC-SL & 0.4882 & {\bf 0.5584} & 0.5581 & 0.5573 & 0.5557 & 0.5531 \\\hline WEAC-AL & 0.4049 & {\bf 0.5584} & 0.5567 & 0.5428 & 0.5391 & 0.5308\\\hline WEAC-CL & 0.4882 & {\bf 0.5584} & 0.5581 & 0.5442 & 0.5359 & 0.4789\\\hline GP-MGLA & 0.4882 & {\bf 0.5581} & 0.5025 & 0.5009 & 0.4866 & 0.4839 \\\hline DiCLENS & -- & {\bf 0.5581} & -- & -- & -- & -- \\\hline Proposed & -- & {\bf 0.5584} & -- & -- & -- & -- \\\hline \end{tabular} } \hfill \parbox{.48\linewidth}{ \centering \caption{Performance values on the base clustering with 500 samples. The 5 clustering solutions each contains different numbers of clusters ranging from 3 to 7.} \begin{tabular}{|c|c|c|c|c|c|c|c|} \hline Algorithm & K=2 & K=3 & K=4 & K=5 & K=6 & K=7 \\\hline CSPA & -- & 0.5583 & 0.5644 & 0.5410 & 0.6214 & 0.5422 \\\hline MCLA & -- & 0.5841 & 0.7088 & 0.6480 & 0.7263 & 0.5462 \\\hline HGPA & 0.2176 & 0.3487 & -0.0054 & 0.1360 & 0.6188 & 0.4892 \\\hline WEAC-SL & 0.1847 & 0.3689 & 0.6283 & 0.6166 & {\bf 0.7425} & 0.7291 \\\hline WEAC-AL & 0.0991 & 0.2950 & 0.5152 & {\bf 0.7525} & 0.7263 & 0.7211 \\\hline WEAC-CL & 0.4638 & 0.5919 & 0.6945 & {\bf 0.7525} & 0.7263 & 0.7163\\\hline GP-MGLA & 0.4638 & 0.5947 & 0.7088 & {\bf 0.7525} & 0.7263 & 0.7113 \\\hline DiCLENS & 0.1847 & -- & -- & -- & -- & --\\\hline Proposed & -- & -- & -- & 0.7378 & -- & -- \\\hline \end{tabular} } \end{table} \begin{table}[hbt!] \tin \parbox{.48\linewidth}{ \centering \caption{Performance values on the base clustering with 1000 samples. The 5 clustering solutions each contains different numbers of clusters ranging from 5 to 9.} \label{table:result_3} \begin{tabular}{|c|c|c|c|c|c|c|} \hline Algorithm & K=4 & K=5 & K=6 & K=7 & K=8 & K=9\\\hline CSPA & 0.5278 & 0.5472 & 0.7094 & 0.6695 & 0.5985 & 0.5967 \\\hline MCLA & 0.6497 & 0.6879 & 0.7772 & 0.6712 & 0.7502 & 0.6941 \\\hline HGPA & -0.0030 & 0.3478 & 0.3814 & 0.5099 & -0.0047 & -0.0047 \\\hline WEAC-SL & 0.6038 & 0.6787 & 0.7722 & 0.7713 & 0.7863 & 0.7810\\\hline WEAC-AL & 0.5247 & 0.6910 & 0.7716 & 0.7713 & 0.7711 & 0.7861\\\hline WEAC-CL & 0.6485 & 0.7651 & 0.7722 & 0.7883 & {\bf 0.7882} & 0.7861\\\hline GP-MGLA & 0.6484 & 0.7683 & 0.7722 & {\bf 0.7865} & 0.7724 & 0.7555 \\\hline DiCLENS & -- & 0.7683 & -- & -- & -- & -- \\\hline Proposed & -- & -- & -- & {\bf 0.7865} & -- & -- \\\hline \end{tabular} } \hfill \parbox{.48\linewidth}{ \centering \caption{Performance values on the base clustering with 1500 samples. The 5 clustering solutions each contains different number of clusters ranging from 3 to 7.} \begin{tabular}{|c|c|c|c|c|c|c|} \hline Algorithm & K=3 & K=4 & K=5 & K=6 & K=7 \\\hline CSPA & 0.3978 & 0.4553 & 0.4985 & 0.4916 & 0.4262 \\\hline MCLA & 0.4863 & 0.5438 & 0.5203 & 0.4957 & 0.3452 \\\hline HGPA & 0.1735 & -0.0011 & -0.0015 & 0.3053 & 0.3305 \\\hline WEAC-SL & 0.2460 & 0.3661 & 0.4973 & 0.5516 & 0.5527 \\\hline WEAC-AL & 0.1776 & 0.3371 & {\bf 0.5736} & 0.5714 & 0.5589 \\\hline WEAC-CL & 0.5130 & 0.5522 & {\bf 0.5736} & 0.5714 & 0.5589 \\\hline GP-MGLA & 0.5186 & 0.5389 & {\bf 0.5698} & 0.5615 & 0.5508\\\hline DiCLENS & -- & -- & 0.5581 & -- & -- \\\hline Proposed & -- & -- & -- & 0.5553 & -- \\\hline \end{tabular} } \end{table} \begin{figure*}[hbt] \centering \includegraphics[width=0.45\textwidth]{Amadeus_500_Image.eps} \includegraphics[width=0.45\textwidth]{Amadeus_1500_Image.eps} \caption{Pareto front for Amadeus data of sample size 500 (Left) and 1500 (Right).} \label{Fig:plot} \end{figure*} In this work, consensus clustering techniques are integrated in the flight recommendation selection optimizer of the Amadeus flight search engine. Consensus clustering solutions are used for customer segmentation with the objective to optimize the selection strategy according to the diverse categories of users and their different needs. The flight selection depends on a quantity defined as a linear combination of different criteria, including the recommendation price and diverse convenience criteria of the flights. The linear combination of weights is optimized to maximize the booking probability of the returned recommendations. This booking probability is estimated using a Customer Choice Model \cite{lheritier2018} and a mapping function is necessary to assign a new customer to a particular cluster. If the mapping function described in Sect. \ref{subsection:mapping} was used, the $K$-Nearest Neighbors approach was also tested to predict the label of a new customer, by finding the $K$-closest customers of this new customer and performing a majority voting over their labels. However, considering only $K$ neighbors for predicting the label seems to induce some information loss, and therefore a better accuracy can be obtained using the method described in Sect. \ref{subsection:mapping}. Experiments presented in Table~\ref{table:result_Amadeus} were conducted on the first set of base clustering solutions, along with the consensus solutions for 500 customers, to perform the optimization process. The performances were then evaluated according to the Amadeus business metric used during this optimization process: The relative difference between the sum of all the booking probabilities of the flight recommendations returned by the optimized solution and by the reference solution. This reference solution is defined by setting all weights to zero except for the recommendation price, and it corresponds to the default configuration of the flight search engine. The Amadeus business metric indicates to which extent the optimized solution improves the attractiveness of the recommendations returned by the search engine. The obtained percentage reported in the table represents how much the proposed clustering technique improves the internal objective function used to select the flight recommendations in the flight search engine. Although there is not a direct link between this improvement and the conversion rate, this percentage represents a surrogate measure to it: The difference of conversion rate induced by the new configuration. \begin{table}[hbt] \tiny \centering \caption{Performance measure on booking probability improvement in terms of Amadeus business metric.} \begin{tabular}{|c|c|c|c|c|c|c|} \hline Algorithm & K=3 & K=4 & K=5 & K=6 & K=7 \\\hline Base clusterings & 49\% & 8.9\% & 24.4\%,21.6\%,12.2\%,4.4\%,21.5\%,18.6\% & 21.6\% & 21.7\% \\ \hline CSPA & 29.7\% & 14\% & 27.6\% & 36.5\% & 19.4\% \\\hline MCLA & 22.6\% & 19.7\% & 12.7\% & 13.3\% & 27.7\% \\\hline HGPA & 31.3\% & 13.5\% & 31.9\% & 37.3\% & 22.5\% \\\hline WEAC-SL & 6.9\% & 36.8\% & 13.2\% & 11.6\% & 21.0\% \\\hline WEAC-AL & 22.7\% & 31.2\% & 19.8\% & 21.2\% & 12.8\% \\\hline WEAC-CL & 20.8\% & 28.6\% & 32.2\% & 36.1\% & 26.6\% \\\hline GP-MGLA & 16.8\% & 19.7\% & 30.0\% & 27.6\% & 29.6\% \\\hline DiCLENS & -- & -- & 28.6\% & -- & -- \\\hline Proposed & -- & -- & 23.6\% & -- & -- \\\hline \end{tabular} \label{table:result_Amadeus} \end{table} The proposed consensus clustering solution gives a better average improvement than most of the base clustering solutions. It is a good compromise among consensus solutions as it constantly gives good ARI values and its business metric is over the median of all other consensus methods. Additionally, it saves time compared to the current process in which we compare $N$ base clustering solutions based on the result of the optimization process, the processing time can be divided by $N$. This is an important feature since the optimization process is the bottleneck part of the application. It can be seen that some consensus algorithms such as HGPA or CSPA, give higher improvements in term of Amadeus business metric than the proposed method for some $K$ values. However, as shown in Tables~1-6, HGPA has a very low ARI, which indicates that it failed to combine the base clustering solutions. As it deviates significantly from base clustering solutions, we cannot rely on its solution, and a similar reasoning is applicable to other algorithms. In the current Amadeus process, retrieving the business metric for one clustering solution is time consuming, and it is not feasible to compute it for all consensus solutions before selecting one of them, as we did in this study. Therefore, we need to choose a reliable algorithm showing acceptable results in terms of both ARI and business metric. During the baseline experiments assuming some prior business knowledge about the main features, an improvement of 23.3\% was obtained, which is equivalent to the improvement for the proposed model that does not use any prior knowledge. We assume the solution is composed of 6 segments (Business domestic, Business international, Week-end domestic, Week-end international, Others domestic and Others international). Furthermore, this assumption that is not data-dependent may not be applicable for all search query datasets, depending on the market, the time period, etc. Hence, it is more reliable to depend upon multiple diverse clustering solutions and an appropriate consensus generation process. \section{Conclusion} In the travel industry, identifying segments of customers that have close needs and requirements is a key step for generating better personalized recommendation. We propose a multi-objective optimization based consensus clustering framework to improve customer segmentation and provide better personalized recommendation in the Amadeus flight search engine. This framework aims to overcome the issues encountered when the segregation of customers relies on a single clustering algorithm that is based on modeling assumptions that do not match, partly or entirely, with the data space regarding the number of clusters, distributions, etc. In the context of the selection of flight recommendations returned by Amadeus flight search engine, the proposed framework hold some properties required to generate relevant consensus clustering such as demonstrated by the theoretical proof of its adequate estimation of the number of clusters. This consensus clustering based solution was integrated in the Amadeus flight search engine, and its capability to generate better personalized recommendation, while reducing calls to the time consuming part of current Amadeus process, were demonstrated. The efficiency of the proposed approach regarding application objectives and performances was also shown throughout experiments conducted on Amadeus customer search query data to compare it with other existing approaches. As a future direction, we intend to study how other objective functions can be deployed in order to obtain a better clustering solution aiming to improve booking probability. \bibliographystyle{splncs04}
{ "attr-fineweb-edu": 2.130859, "attr-cc_en_topic": 0, "domain": "arxiv" }
BkiUd7jxK7IDND_hDPW5
\section{Introduction} Although the basic rules of soccer have barely changed since the 1920s\footnote{the introduction of the two-opponent version of the offside rule}, they intrinsically enabled teams to develop distinctive strategies that dominated the soccer landscape for several years (e.g., the Hungarian and the Dutch team of the 50s and 70s)~\cite{jonathan2013inverting}. However, in these days the possibilities to prepare against these teams and thus ruin their strategies was limited due to technological reasons. Nowadays, technological advancements of the last decade allow team staffs to view any first division soccer game on a short notice; hence, it seems to be challenging to have and sustain a unique playing characteristic---which is additionally successful too---in the global soccer space. Does such a unique, recognizable style exist in soccer nowadays? The identification and understanding of the style of soccer teams have practical impacts apart from the esthetics of the game. The players of a team should obey the style (i.e., strategy) of the team to maximize the team's chance to win. Hence, it is crucial to raise youngsters and to sign players who are capable of playing according to the style of the team. Failing to do so not only has an impact on the success but also on the profitability of the club. There are numerous examples of newly signed players who were not compatible with the style of their new clubs~\cite{worstsignings}, therefore, there is a need for a quantitative analysis of team's style to avoid these discrepancies. The rareness of goals is the profoundest feature of soccer that distinguishes it from other team sports. Although the teams have 11 players and 90 minutes to score in each match, it is not unusual to have a goalless draw as the final score~\cite{anderson2013numbers}. These results are not solely due to a spectacular performance of the goalkeepers but rather a consequence of the low number of scoring chances. Hence, metrics related to scoring cannot describe the style (i.e., the strategy) of a soccer team. Passes, on the other hand, happen numerously in every game irrespective of the quality of the teams. The pass network of a soccer team consists of the players as vertices and the passes between the players as the edges. Prior art focused either on the high-level statistics of the pass networks (e.g., betweenness, shortest paths) or the strength of the connection between pairs of players~\cite{duch2010quantifying,pena2012network,narizuka2013statistical,lucey2013assessing}. These metrics describe the static properties of a pass network, e.g., the metrics aggregate all the passes into one network, neglect the order of passes, etc. On the contrary, we focus on the dynamic aspects of the pass networks by examining the ``flow motifs'' of the teams. We propose the concept of ``flow motifs'' to characterize the statistically significant pass sequence patterns. It extends the idea of the network motifs, highly significant subgraphs that usually consists of three or four nodes, suggested by Milo et al.~\cite{milo2002network} that mainly apply to the static complex networks (e.g., food webs, protein-structure networks, and social networks). We extend their work towards ``flow motifs'' to analyze pass networks that are highly dynamic and in which the order of connections is important. Our methodology starts with the extraction of the passing sequences, i.e., the order of players whom the ball traversed. Afterwards, we determine computationally the significance of the different k-pass-long motifs in the passing style of the teams. Our flow motif profile focuses on how ball traverses within a team. We not only count the number of passes, but also check which players are involved in, and how they organize the flow of passes. Based on these computed flow motif profiles, we finally cluster the teams. To the best of our knowledge, our study is the first of its kind that investigates motifs in soccer passing sequences. Our contribution is twofold: \begin{enumerate} \item we propose a method to quantify the motif characteristics of soccer teams based on their pass networks, and \item we identify similarities and disparities between teams and leagues using the teams' motif fingerprints. \end{enumerate} In the recent decade, several data-provider companies and websites have arisen to annotate soccer matches and to publish soccer datasets. For example, such initiatives include Prozone~\cite{prozone}, OptaPro~\cite{optapro}, Instat Football~\cite{instat}, and Squawka~\cite{squawka}, among others. The prevalence of data-providers enables us to take a data-driven, quantitative approach to identify the styles of the soccer teams. We focus on the 2012/13 seasons of major European soccer leagues and analyze the passing strategies of the teams throughout the whole season. \section{Methodology} The ``flow motifs'' of a pass network, in which players are linked via executed passes, consist of a given number of consecutive passes, namely, an ordered list of players who were involved in the particular passes. Throughout this paper we focus on motifs consisting of three consecutive passes, however, it is straightforward to generalize our methodology to investigate motifs with fewer/more passes. Our methodology relaxes the identity of the involved players, i.e., it does not differentiate motifs based on the names of the players, rather focuses on the certain structure of the passes. There are five distinct motifs when we analyze three-pass long motifs: ABAB, ABAC, ABCA, ABCB, and ABCD. For example, the motif ABAB denotes the following pass sequence: first, player 1 passes to player 2; second, player 2 passes the ball back to player 1; and finally, player 1 passes again to player 2. If a similar pass sequence happens between player 3 and player 4 the identified motif is ABAB again (i.e., the crucial characteristic is what happened and not between whom). Our methodology quantifies the prevalence of the flow motifs in the pass networks compared to random networks whose degree distribution is the same. To achieve this, we start with a list of passes that a team made during a match. The format of a pass record is \[ p_n = < \textrm{player}_i(n),\textrm{player}_j(n),t(n)> \] where $\textrm{player}_i(n)$ passed the ball to $\textrm{player}_j(n)$ in the $t(n)$ time instance. Second, we derive all the ball possessions that a team had. A ball possession $<p_1,p_2,⋯,p_n>$ consists of such passes that fulfill two constraints: \begin{eqnarray*} \textrm{player}_j(m) = \textrm{player}_i(m+1), \forall m \in \{1,\dots,n-1\} \\ t(m+1) - t(m) \leq T_{\textrm{max}}, \forall m \in \{1,\dots,n-1\} \end{eqnarray*} where $T_{\textrm{max}}$ denotes the time threshold between two passes. These constraints assure that the passes are consecutive (i.e., a player receives the ball and then passes it forward) and not having major breaks. Throughout our study, we use $T_{\textrm{max}}=5\textrm{sec}$ to determine if two passes are belonging to the same ball possession. Third, we extract all the three-pass long sub-possessions from the ball possessions (e.g., a ball possession having $n$ passes contains $n-2$ motifs) and convert the player identifiers into the appropriate A, B, C, and D labels to assemble the motifs. For example, a ball possession where the ball moves between players as $2 \rightarrow 4 \rightarrow 5 \rightarrow 6 \rightarrow 4 \rightarrow 6$ translates into three motifs, namely, ABCD, ABCA, and ABCB: \[\overunderbraces{&&\br{3}{ABCA}}% {&2 \rightarrow &4 \rightarrow &5 \rightarrow 6& \rightarrow 4& \rightarrow 6&} {&\br{3}{ABCD}&&} \] After having the motifs that are present in the pass network, we quantify the prevalence of the motifs by comparing the pass network of the team to random pass networks having identical properties (in particular, the number of vertices and their degree distribution). Specifically, we perturb the labels of the motifs prevalent in the original pass network randomly and such we create pseudo motif-distributions. In our data analyses, we generate 1000 random pass networks for each original pass network. Finally, we compute the z-scores (a.k.a. standard scores) of the motifs by comparing the original and the constructed random pass networks. As a result, we have a characteristic of the (passing) style of a team for every match---in terms of the z-scores of the motifs. \begin{figure}[tb] \centering \includegraphics[width=9cm]{figs/team_motif_ABAC.eps} \caption{The prevalence of the ABAC motif in case of the teams of the Spanish first division (median, quartiles) with respect to their z-scores. FC Barcelona applies the ABAC motif much more frequently than any other team in the league.} \label{fig:spain_abac} \end{figure} \begin{figure}[tb] \centering \includegraphics[width=9cm]{figs/team_motif_ABCD.eps} \caption{FC Barcelona uses the ABCD motif less often than the other teams.} \label{fig:spain_abcd} \end{figure} \begin{figure*}[tb] \centering \includegraphics[width=5.5cm]{figs/team_motif_ABAB.eps} \includegraphics[width=5.5cm]{figs/team_motif_ABCA.eps} \includegraphics[width=5.5cm]{figs/team_motif_ABCB.eps} \caption{Z-scores of the ABAB, ABCA, and ABCB motif in the Spanish league.} \label{fig:spain_all} \end{figure*} \section{Data analysis and results} We use publicly accessible information on the pass networks of soccer teams. In particular, the dataset contains information from the 2012/13 seasons of the Spanish, Italian, English, French, and German first division. For example, the part of the dataset that contains information on the Spanish league spreads 20 teams, 380 matches, and more than 250 thousands of passes. We quantify the motif characteristics of the teams using the aforementioned dataset. We first present results on the passing styles of teams in the Spanish first division and later on we compare our finding with the other European leagues and teams. We compare the Spanish teams with respect to their ABAC motifs in Figure~\ref{fig:spain_abac}. Most of the teams have similar z-scores, i.e., apply the ABAC pass motif to comparable extent. However, FC Barcelona has a quite distinct strategy: applies ABAC motifs significantly more often than the other teams (the difference is at least 2.5 standard deviation). The trend is similar in case of the ABCD motif; the only difference is that the majority of the teams have notably larger z-scores than FC Barcelona (Figure~\ref{fig:spain_abcd}). This means that FC Barcelona applies this motif significantly less frequently than the other teams. In general, FC Barcelona uses structured motifs (i.e., motifs with more back and forth passes such as ABAB, ABAC, and ABCB) more often than simpler ones compared to other teams. We present the results of the remaining motifs in Figure~\ref{fig:spain_all}. \vspace{0.5cm} We next analyze the similarities and the differences of the teams' motif characteristics via cluster analysis. First, for each team, we construct a feature vector representing the team's usage of motifs. We use the mean of the z-scores of the five distinct motifs as the features (by averaging the z-scores over 38 matches a team had in the season). Afterwards, we cluster the teams based on their five-motif long feature vectors. We use two methods for cluster analysis: k-means and hierarchical clustering. We illustrate the result of the k-means clustering in Figure~\ref{fig:kmeans} (the clusters are color-coded), where the ratio of the within the cluster and the total sum of squares is 90.3\%. For example, the cluster that contains Atletico Madrid and Athletic Bilbao, among others, is characterized by extensive usage of ABAB and ABCA motifs. While most of the teams are clustered in three major groups, FC Barcelona is separated from the other teams. FC Barcelona is the only team in its cluster; hence, it has a distinctive motif characteristics. The Ward hierarchical clustering algorithm reveals similar trend as shown in Figure~\ref{fig:ward}. Again, FC Barcelona has a solitary style while the other teams are having resembling features. The implications of the two clustering schemes are consistent: FC Barcelona had a unique, significantly different passing style than any other team in the Spanish league. \begin{figure}[tb] \centering \includegraphics[width=9cm]{figs/LAL_clusters.eps} \caption{K-means clustering of the teams in the Spanish league. One of the four clusters contains only a single team, namely, FC Barcelona that has an unique style based on its passing motifs.} \label{fig:kmeans} \end{figure} \begin{figure}[tb] \centering \includegraphics[clip=true,trim=1cm 2.5cm 0cm 2.2cm,width=9cm]{figs/LAL_ward_hierarhical_cluster.eps} \caption{Ward hierarchical clustering of the soccer teams in the Spanish league. FC Barcelona does not belong to any major groups of the teams.} \label{fig:ward} \end{figure} Finally, we take a broader point of view and investigate whether the style of FC Barcelona remains unique if we consider teams of four additional European soccer leagues. We show the teams in Figure~\ref{fig:kmeans_all} based on their motifs using principal component analysis. Although we analyze more teams that have more variation in their pass characteristics, FC Barcelona still able to maintain its rare, distinct style. It is surprising that Torino, an Italian team nearly relegated at the end of the season, has a style diverse from the vast majority of the considered teams and shares properties with teams like Lille, Milan, and Juventus---dominant teams in the French and Italian leagues. The distinctive feature of Torino's strategy is that it involves less frequent usage of the ABCA motif. \begin{figure}[tb] \centering \includegraphics[width=8cm]{figs/ALL_kmeans_clusters_1.eps} \caption{The style of soccer teams of the Spanish, Italian, English, French, and German soccer leagues. FC Barcelona has a unique style even on a European scale.} \label{fig:kmeans_all} \end{figure} \section{Future Work} The presented results illustrate the potential of analyzing the flow motifs of soccer teams. There are several ways to extend the investigation of pass motifs to reveal finer-grained details of teams and players. As future work, we plan to address three areas: (i) condition the pass motifs based on the results of the matches, (ii) study the impact of home and away games on the prevalence of the motifs, and (iii) explore the players' involvement in the different motifs. \section{Conclusions} We proposed a quantitative method to evaluate the styles of soccer teams through their passing structures. The analysis of the motifs in the pass networks allows us to compare and differentiate the styles of different teams. Although most teams tend to apply homogenous style, surprisingly, a unique strategy of soccer is also viable---and quite successful, as we have seen in the recent years. Our results shed light on the unique philosophy of FC Barcelona quantitatively: the famous tiki-taka does not consist of uncountable random passes but rather has a precise, finely constructed structure. \newpage \bibliographystyle{abbrv}
{ "attr-fineweb-edu": 2.236328, "attr-cc_en_topic": 0, "domain": "arxiv" }
BkiUeL3xK6EuNA_gOpmY
\section{Introduction} There is a significant demand for obtaining information on areas that are yet to be visited. This can be achieved by accessing a map of the area, reading documents regarding regional features or tourist information, and viewing captured pictures and movies of that area. Each of these methods has advantages and disadvantages. A map can provide a bird's-eye view of the entire target region at a single glance; however, this information is limited, depending on the type of map~\cite{map_advantage}. Documents provide detailed descriptions of areas. Nevertheless, it is generally necessary to refer to several documents to grasp the overall image of the area. Contrastingly, pictures and movies are excellent for elucidating the view and atmosphere of a location. Nonetheless, it is not easy to grasp the entire view of an area using this approach. Currently, there are various interfaces of location information that combine different types of data of an area~\cite{yahoo} ~\cite{geographical_survey} ~\cite{bing}\\~\cite{mapillary} ~\cite{mapfan} ~\cite{offmaps}~\cite{sugimoto_icmr}. For example, on websites such as the Japanese Geographical Survey Institute \cite{geographical_survey}, information indicating the characteristics of an area can be visualized via graphs, etc., by specifying a prefecture on the map. This is realized via an interface that displays detailed information of the target areas, while grasping a huge geographic overview of Japan. Google Street View (GSV)~\cite{street_view} is an interface that combines maps and images. It provides users with omnidirectional images corresponding to a chosen location on the map. Users interact with these images, and movements on the map are updated on the displayed content of those images. This creates a realistic user experience as it enables users to experience the place as if they were there. GSV is widely used and offers convenience for tasks such as guidance and pre-learning about a target area~\cite{use_gsv}. However, GSV is not a perfect interface for the abovementioned tasks. In GSV, street images are significantly sparse. To view all the images along a route on GSV, users need to several transitions between those images; consequently, the user has to interact multiple times with the system~\cite{gsv_dis}. This is quite tedious and occasionally difficult. Such image transitions may result in the user getting lost or being led along an incorrect direction. Additionally, because of the interval between images, the user does not experience a continuous movement. However, the use of videos instead of sparse images can solve these problems that are experienced by GSV users. When using videos, we can select the starting point and direction on the map, and play back a video along the streets of the map along the desired direction. It is possible to place video transitions at the intersections and subsequently change the direction of movement through interactions. Hence, the use of videos can eliminate the need for excessive interactions and also enable users to experience continuous movement. Forty years ago, a research project prototyped Movie Map\cite{movie_map} based on analog video technology was constructed. Movie Map was used to play street movies corresponding to the directions of a car in Aspen city. These street segment movies were recorded on multiple optical disks and were played according to the user's inputs. The Movie Map was produced only once. This was because it was not easy to simultaneously acquire video and geotags. Furthermore, multimedia interaction machines were not readily available. Therefore, GSV has been more successful in recent years. We build our Movie Map by incorporating current technology. This renovated version can be used for pre-learning directions, or virtual tours for walkers in certain areas, such as commercial areas around a station. We acquire omnidirectional videos along the streets of a target area, analyze the camera positions of videos using the visual simultaneous localization and mapping (vSLAM) technique, and associate the corresponding video frames with locations on the map. We detect intersection frames among videos based on location information and visual features, and the segmented videos are subsequently organized by intersections. Thereafter, we build an interface to intuitively explore a target area. Thus, the proposed Movie Map enables users to easily move along the streets in an area by synthesizing views connected via video segments. We evaluate the effectiveness of route synthesis by connecting video segments at the intersections and using the rotating synthesized views, obtained by blending intersection frames, of the two relevant videos. We also evaluate the advantages and disadvantages as well as the user satisfaction of the proposed interface for exploring tasks and compare them to GSV. Our contributions are as follows. \begin{itemize} \item We build a Movie Map that allows the user to interactively explore a certain target area displaying omnidirectional street videos captured along the streets in the target area. After capturing the videos in the area, the processing required for the video database is automated except for the assigning of two reference points per video. We capture the omnidirectional videos of two directions along streets in areas around Kyoto station and Namba station. The system is easily applicable to different areas. \item We produce a natural transition from streets to other streets at intersections by generating turning views. In the user study, turning views were highly evaluated compared to directly switching and simple rotation. \item We evaluated our system against GSV under a scenario where users explore an area by looking for a specific location; our system was evaluated higher in terms of exploring comfort. \item As an extension, we use virtual billboards in the view of the Movie Map. The billboards are shown at the locations of the shops and stores in the views. When the billboards are clicked, their associated information pops up. \end{itemize} Recent work by Sugimoto et al.~\cite{sugimoto_icmr} demonstrated a Movie Map that can show a synthesis route video by determining the route on a map in advance. This does not allow users to freely explore the specific area. In this work, the proposed system enables users to explore the area without specifying the route in advance. \section{Related Work} \subsection{Movie Map} The concept of a Movie Map was originally proposed by Lippmann four decades ago~\cite{movie_map} and the prototype system is well known as the Aspen Movie Map for the Aspen city area. It is an interactive map, built by using video disc technology to engage the user in a simulated "drive" through an unfamiliar space. In this system, panoramic images were captured by four cameras that were placed at $90^\circ$ intervals in a horizontal circle every 10 feet; their precise locations were measured via GPS. Images along roadways and at intersections were stored in several optical disk drives. A user could interactively explore the area on a touch screen by specifying the direction of movement. The Movie Map provided a methodology for visualizing a certain area by only using multiple video segments. However, the technology used at that time was not sufficiently advanced and therefore the map could not be scaled. Until now, minimal research in this field has been conducted; they are summarized in~\cite{movie_map_naimark}. There are limited examples of Movie Maps~\cite{see_banff}~\cite{sugimoto_icmr}. ~\cite{see_banff} replay captured route movies without any route synthesis. Thus, it cannot be efficiently generalized for a Movie Map that covers a certain area. Sugimoto et al.~\cite{sugimoto_icmr} present a Movie Map with a interface in which a user specifies his/her route on a map in advance. Consequently, the user cannot freely explore the target-area. GSV emerged in 2007; it was initially used on several cities in the US before expanding globally~\cite{gsv_area}. It provides an interactive slide-show type view of maps. Presently, unlike GSV, the use of a Movie Map is highly uncommon. GSV is not necessarily better than Movie Map regarding user experience. \cite{movie_map} claimed that Movie Map allows the user to experience an area as if they were driving through the area. Such an experience is provided to the user by using continuous visual information; this cannot be obtained by GSV. In this regard, Movie Maps are still worth studying. Therefore, we take advantage of today's technology to improve a Movie Map for walkers in certain city areas. \subsection{Visual Simultaneous Localization and Mapping} \begin{figure}[t] \begin{center} \includegraphics[keepaspectratio,scale=0.35]{system_abstract.png} \caption{Outline of the system} \label{system_abstract} \end{center} \end{figure} Visual simultaneous localization and mapping (vSLAM) is a technique used for 3D reconstruction and camera position estimation by using a video captured by a single camera~\cite{slam1}~\cite{slam2}~\cite{slam3}. Visual features and camera parameters are used for the optimization of relative changes in the camera position and orientation between consecutive frames. In our study, we apply OpenVSLAM~\cite{openvslam2019}, which is one of the opensource vSLAM software, to capture street videos and to estimate their relative trajectory. OpenVSLAM is different when compared with other vSLAMs; this is because it can treat omnidirectional videos. Omnidirectional images contain information in all directions, and structural patterns such as those of buildings that usually appear on the left and right side of the camera which is directed to move along the streets. Therefore, OpenVSLAM can work with a high number of visual features that exist in an omnidirectional video for accurate estimation. We performed vSLAM on each video independently. The camera locations derived by OpenVSLAM from the omnidirectional videos were determined to be reasonably accurate. We used two reference points for each video and aligned the corresponding camera locations with the map coordinates. Generally, vSLAM accumulates errors, which result in a scale drift problem~\cite{scale_drift}. The vSLAM technique addresses this problem via loop closing~\cite{loop_closing}, which is a constraint that the camera imposes, and in which localized visual features coincide at the loop-closing. However, each video in our study is captured along a street and does not possess any loop closures. A loop closing algorithm is therefore not applicable to our videos. \subsection{Photometric Reconstruction} These days, several images or videos about a specific area can be captured by using mobile devices~\cite{pedestrian_navigation}. They contain a lot of information, while the original data collection is difficult to understand because it is not structured~\cite{world_photo}. In previous works, analysis of the spatial relationship among data, and intuitive transitions between sources enable users to experience a virtual tour in the target area~\cite{photo_tourism}~\cite{video_scape}~\cite{street_slide}~\cite{image_based_exploration}. Photo tourism~\cite{photo_tourism} calculates the relative camera poses of collected images by using structure from motion (SfM)~\cite{sfm}. On the interface, it visualizes the spatial relationship and switches the images by user input. In our study, the spatial relationship between two source videos is represented as one intersection. By limiting the points of the video switching at intersections, the relationships are represented in a simple manner. \begin{figure}[t] \begin{center} \includegraphics[keepaspectratio,scale=0.3]{capture_lines.png} \caption{One of the shooting areas ($1km^2$ around Kyoto station). The lines show the streets we captured.} \label{capture_lines} \end{center} \end{figure} \begin{figure*}[t] \begin{center} \includegraphics[keepaspectratio,scale=0.40]{mapping_example.png} \caption{Mapping of trajectories onto a common map. The areas marked A and B indicate the search area in the exploring experiment described in Section 5.2.} \label{mapping_example} \end{center} \end{figure*} \section{Movie Map Building System} In this section, we describe the processing flow of video data. Figure \ref{system_abstract} shows the outline of our Movie Map system. The map building system is divided into three stages: acquisition, analysis, and management of data. These are further subdivided into several processes. To build a Movie Map in practice, we only have to input the captured omnidirecional videos and two reference coordinates per video into the system, and the system automatically outputs the materials that are used in our exploring interface. Our system is easily applicable to various areas. \subsection{Data Acquisition} We captured omnidirectional videos along the streets in the target area, and then manually assigned coordinate reference information. \subsubsection{Acquisition of Street Videos}\, \noindent We captured street videos using an omnidirectional camera. This was achieved by a person carrying the omnidirectional camera and walking on the streets. We captured videos along streets in areas surrounding the Kyoto station and Namba station. The overall picture of the area with a size of $1km^2$ around Kyoto station is depicted in Figure. \ref{capture_lines}. We physically walked in both directions to capture videos of a single street. Acquiring two-way-videos is necessary for our task of producing a natural feel when moving forward and backward on the map. Moreover, it should be noted that the streets did not need to be straight; thet could be curved or turning streets. However, we assumed that two shooting paths intersect at one point. If they intersect at multiple points, the subsequent analysis fails. In this case, we divided one of them into two paths to prevent our analysis from failing. \subsubsection{Assignment of Reference Coordinates}\, \noindent We assigned the reference point coordinates to each captured omnidirectional video. As described in Section 3.2.1, we applied vSLAM to each video and estimated the relative coordinates of the camera positions in the frames of each video. In order to integrate all camera positions into a single map, we assigned global coordinates that were common to all captured videos. We assigned information on the latitude and longitude of two reference points, which corresponded to the start and end frame of each video. Instead of latitude and longitude, we could have also used specific global coordinates defined on a map, as long as they were the same for all captured videos. \subsection{Data Analysis} In order to create a route, the intersections of the streets were considered to be the most important points. A route map was represented by the information of its streets and their intersections. In this section, we analyze the captured street videos, and automatically obtain their intersection information. \subsubsection{Application of vSLAM}\, \noindent We estimated the relative camera poses, including positions and orientations, using OpenVSLAM. The accuracy of the estimations from OpenVSLAM was high as the program used visual features in all directions in the omnidirectional image to optimize the actual camera positions. The error is practically negligible for general path lengths, such as those shown in Figure. \ref{capture_lines}. \subsubsection{Mapping Videos onto Common Coordinate}\, \noindent We mapped all relative camera positions onto a common coordinate space. Camera positions, as estimated by OpenVSLAM, were independently calculated for each video. We mapped all videos onto a common coordinate system to associate them with each other. In that time, we use the reference point coordinate information mentioned in Section 3.1.2. By considering the vector from the start point to the endpoint of the estimated camera positions, we obtained the rotation and scaling so that the vector could be equal to a vector between the reference start point and end point coordinates. We also applied the same rotation and scaling to all camera positions and directions in a video. Furthermore, we translated all camera positions so that the start point coordinate matched the reference start point coordinate. Repeating this process for all videos, we aligned all camera positions on a common coordinate of the map. Figure \ref{mapping_example} presents an example of this type of mapping. Different color markers represent the coordinates of key-frames in both ways of one street. \subsubsection{Intersection Detection}\, \begin{figure}[t] \begin{tabular}{cc} \begin{minipage}{0.5\columnwidth} \begin{center} \includegraphics[scale=0.15]{detection.png} \subcaption{Detection Procedure} \label{detection_procedure} \end{center} \end{minipage} \begin{minipage}{0.5\columnwidth} \begin{center} \includegraphics[scale=0.15]{detection2.png} \subcaption{Extended rectangle for the edge of the route} \label{t_junction} \end{center} \end{minipage} \end{tabular} \caption{Intersection detection} \label{intersection_detection} \end{figure} \noindent We detect the intersection information using the coordinate information mapped onto the common map, and the visual features. The intersection information here refers to which street video intersects with which video in which frame, coordinates of the intersection, and the relative rotation between the frames of these two videos. We first find video pairs that intersect each other. Suppose video A and video B have an intersection, then we can determine the frames of each video that are most similar in location and visual features. For a quick detection, each trajectory was divided into rectangles and the intersection candidate frames were narrowed down based on their overlap. The procedure of this method is shown in Figure. \ref{intersection_detection}. As shown in Figure \ref{detection_procedure}, two trajectories captured in different street videos are divided every hundred frames. Next, rectangles that covered the divided parts were taken into further consideration and were then searched for overlapping rectangular pairs between the trajectories. Finally,. we found the frames with the least distance between two trajectories by implementing a full search inside the overlapping rectangle. If the end of the route forms an intersection, as shown in Figure. \ref{t_junction}, the detection may not be successful due to a lack of rectangular overlap, depending on the start points and endpoints. In this case, as an exception process at the time of splitting, the start points and endpoints were extended by several hundred frames, to which a rectangle was added that covered the extended part. As this search was performed by using only the location information obtained by mapping the camera positions, which was estimated by vSLAM based on the reference points, it contained a few errors. To minimize these errors, we adjusted the intersection frames using their visual feature similarity. For dozens of frames around the location, which was based on the detected intersection frame, we rotate the frames in the same direction based on the camera position estimated by vSLAM. Thereafter, we extracted the ORB features~\cite{orb} for each image and determined the frame pairs with the highest similarity. These pairs were set as the correct intersection frames. When using visual similarity, the results are more accurate than those obtained when only using location information. For the intersection frames obtained as a result of detection and adjustment, we recorded the following data: the two videos to which the frames belong to, the timestamp of the frames in the videos, coordinates, and the relative camera rotation between both frames. We automatically repeated this detection for all pairs of streets and obtained information from all intersections in the target area. \subsection{Data Management} \begin{figure}[t] \begin{center} \includegraphics[keepaspectratio,scale=0.22]{management.png} \caption{Split and synthesis at a physical intersection} \label{management} \end{center} \end{figure} Using the intersection information obtained in the data analysis stage, it was possible to synthesize a route movie by editing and playing back a part of the videos. In order to produce an interactive real-time player, we converted the videos into a format used on the interface. According to the intersections of the streets, we segmented the street videos into sections between intersection frames and added metadata to specify the video sections. In advance, we synthesized the turning views at intersections, which were inserted to produce a natural transition from one video section to another. \subsubsection{Splitting Street Videos into Sections}\, \noindent We split the street videos at intersections on the path. When we used the video in our interface, the unit of playing is a section between intersections. It was significantly faster to load a section than to load the full street video and specify which part was to be played after. The division was performed according to the analyzed intersection frame information, which included the street video index and timestamp. There are several sections corresponding to a physical intersection because we maintain videos of both directions. As shown in Figure. \ref{management}, they intersect complicatedly and small sections are segmented in the intersection. For each section data, we added the following metadata: the street video ID to which it belongs and the intersection IDs of both ends of the section. \subsubsection{Synthesis of Turning Views at Intersections}\, \noindent We synthesized the turning views for each intersection as shown in Figure. \ref{interface_example}. In our interface, the user can turn at any intersection. Before and after turning, we switched from one video section to another video section. However, switching movies without any interpolation made us feel uncomfortable and could cause inconsistencies in our cognition of position and direction, as presented in Section 5.1. Lippman~\cite{movie_map} captured different turning movies for each intersection and inserted them whenever the driver turned. However, we needed eight turning patterns at a standard physical intersection as depicted in Figure. \ref{management}. When we considered the number of intersections that existed in a certain area, shooting videos for all of them were deemed to be not feasible. Owing to the use of omnidirectional videos, we could easily synthesize the turning views by rotation. Video sections before and after the intersection were from different street videos captured at different times. Hence, it can be often observed that the brightness and objects, such as cars and people before and after the intersection change. Therefore, we synthesized the turning views by blending the intersection frames; the frame before turning is denoted as frame $I$ and the frame after turning is denoted as frame $J$. As the camera directions of the frames are mapped onto a common map, as described in Section 3.2.2, we can use the rotation angles between frames $I$ and $J$. In our generating method, we rotated frame $J$ to align it with frame $I$ and then proceeded to blend both frames. We synthesized a turning view by rotating the blended frames; consequently, the blending ratio linearly changed from 0 to 1. As a result, we could synthesize a turning view starting from frame $I$ and gradually change it to frame $J$ during the rotation. \section{Movie Map Exploring Interface} \begin{figure}[t] \begin{tabular}{cc} \begin{minipage}{0.5\columnwidth} \begin{center} \includegraphics[scale=0.22]{billboard1.png} \subcaption{Distant billboard} \end{center} \end{minipage} \begin{minipage}{0.5\columnwidth} \begin{center} \includegraphics[scale=0.22]{billboard2.png} \subcaption{Close billboard} \end{center} \end{minipage} \end{tabular} \caption{Example of virtual billboard} \label{virtual_billboard} \end{figure} \begin{figure}[t] \begin{center} \includegraphics[keepaspectratio,scale=0.4]{interface_start.png} \caption{Select start point to explore} \label{interface_start} \end{center} \end{figure} \begin{figure}[t] \begin{center} \includegraphics[keepaspectratio,scale=0.28]{interface_example.png} \caption{An example of a sequence of a synthesized route movie} \label{interface_example} \end{center} \end{figure} \begin{figure*}[t] \begin{minipage}{0.16\hsize} \begin{center} \includegraphics[keepaspectratio,scale=0.22]{inter1_direct.png} (a) Intersection 1 Method A \end{center} \end{minipage} \begin{minipage}{0.16\hsize} \begin{center} \includegraphics[keepaspectratio,scale=0.22]{inter1_insert_rotate.png} (b) Intersection 1 Method B \end{center} \end{minipage} \begin{minipage}{0.16\hsize} \begin{center} \includegraphics[keepaspectratio,scale=0.22]{inter1_insert_synthesize.png} (c) Intersection 1 Method C \end{center} \end{minipage} \begin{minipage}{0.16\hsize} \begin{center} \includegraphics[keepaspectratio,scale=0.22]{inter2_direct.png} (d) Intersection 2 Method A \end{center} \end{minipage} \begin{minipage}{0.16\hsize} \begin{center} \includegraphics[keepaspectratio,scale=0.22]{inter2_insert_rotate.png} (e) Intersection 2 Method B \end{center} \end{minipage} \begin{minipage}{0.16\hsize} \begin{center} \includegraphics[keepaspectratio,scale=0.22]{inter2_insert_synthesize.png} (f) Intersection 2 Method C \end{center} \end{minipage} \caption{Turning views in the experiment} \label{inters} \end{figure*} We propose an exploration-type interface based on the Movie Map made of omnidirectional street videos that are analyzed and managed as described in the previous sections. This interface is useful for tasks such as prior learning and conducting a virtual tour of a target area. Figure \ref{interface} shows an overall view of the interface. On this interface, the user can display street videos that correspond to the coordinates on the background map. By selecting the desired direction at an intersection, we can easily manipulate the directions of each movement. An example of the route switching movie at an intersection is presented as a slide show in Figure. \ref{interface_example}. \subsection{Interaction on the Interface} Using the metadata of the intersections, the interface draws intersection points on the map shown in the background. Figure \ref{interface_start} is a screen for determining the starting point of this exploration. On this screen, landmarks pre-defined in the target area, and an arrow indicating the direction of walking are displayed. The user selects one of the landmarks and a walking direction to start the exploration. When the user starts exploring, a screen for playing the walking video appears, and the street video starts to play. The user can change the viewing direction by dragging the omnidirectional video and can easily specify the walking direction at the intersections by choosing the arrow bottoms that appear when approaching an intersection. By clicking on one of the arrows, the user can easily select the desired direction and the view is turned accordingly and switched into the next video section. Moreover, the user can perform basic video operations such as a change of playback speed by interacting with the button inputs. In this way, the user can walk continuously in a chosen area at his or her desired speed. \subsection{Hiding Details of Intersections for Visualization} In the interface shown in Figure. \ref{interface}, the intersection on the route map is visualized as a single point. The actual data, as described in Section 3.1.1, consists of two-way videos of a street; one physical intersection generally contains four intersection frame pairs. Multiple intersection frame pairs of a single physical intersection are grouped as a point, and its cluster center is drawn as the location of the point on the map. We keep the connections of the grouped intersections as a graph structure. Based on this data, when approaching an intersection, the system displays the navigation arrows only for the directions in which the physical streets exist. Using the location of the frames, the system visualizes the current position on the map. \subsection{Virtual Billboards} We can display virtual billboards in the view of the Movie Map. After building the Movie Map, the video is aligned in the direction of the street. We can overlay a billboard in the video by corresponding it to its specific location, as shown in Figure. \ref{virtual_billboard}. The billboard can be shown from near to far viewpoints. To show the billboard, we need to manually specify a position. A single location per billboard is kept in a billboard list. We use the time stamp of the video for the location. When the user approaches the location, the corresponding billboard pops up. Each billboard has additional information, which appears by clicking the virtual billboards. \section{Experiment -- User Studies} \begin{table*}[htb] \includegraphics[keepaspectratio,scale=0.28]{result1_integrated.png} \hspace{180pt} (a) Intersection 1 \hspace{70pt} (b) Intersection 2 \hspace{40pt} $* : p < 0.05, ** : p < 0.01$ \vspace{0.5pt} \caption{Evaluations of three methods of turning view synthesis} \label{result1} \end{table*} \begin{table}[htb] \includegraphics[keepaspectratio,scale=0.17]{result3.png} $** : p < 0.01$ \vspace{0.5pt} \caption{Results of comparisons between GSV and our system} \label{result2} \end{table} We produced two user studies for the evaluation of our Movie Map. (1) We evaluated the intersection movies, i.e., the degree of feeling visual discontinuity when changing routes at intersections. (2) We evaluated the usability of the proposed interface and subsequently compared it to GSV. In both cases, the ratings were based on user studies. We did not include virtual billboards in the interface in the user studies. The subjects were 16 students who had no prior knowledge of the study. \subsection{Evaluation of Synthesis of Intersection Movie} We evaluated the effect of the insertion of the synthesized turning view on the visual discontinuity. Subjects watched three different videos of route changes at two intersections, as shown in Figure 10: method A--the two street videos were switched directly without any processing; method B--one of the intersection frames was inserted with a rotation, and method C,--the blended turning view described in this paper was inserted. Subjects answered questions regarding the perceived naturalness of the change of position and the surroundings, and the ease of cognition in turning directions. The score consisted of a 5-point scale; positive: 5, weakly positive: 4, neutral: 3, weakly negative: 2, negative: 1. The results are shown in Table \ref{result1}. We used two intersections for comparisons. Intersection 2 was more difficult case than intersection 1. Intersection frames at intersection 1 were similar in terms of their position and surroundings. However, in intersection 2, the videos before and after the switch were captured at different times during the day. Furthermore, the locations of the intersection frames were not close enough and the surroundings changed; thus, the appearance of the intersection frame pairs was not similar. From these results, method A was considered unacceptable at both intersections. Comparing methods C and B, method C performed significantly better than method B at intersection 2 in terms of the three evaluation criteria. While, at intersection 1, methods C and B did not have a significant difference. At intersection 1, which is a simpler intersection, the rotation was considered to be enough to produce views for natural transitions and cognition of turning direction. However, at intersection 2, which is harder because of large changes in the position and surrounding appearance before and after the transition, method C was evaluated to be significantly better than B because of the blending rotation turning, which made the transition smoother. \subsection{Evaluation of the Interface Comparing with GSV} We compared our system with GSV under a scenario wherein subjects explored a small area and searched for a target in that area using the interface. We prepared two areas and targets---area A and area B, both with targets, as represented in Figure. \ref{mapping_example}. The subjects explored A or B by using Movie Map or GSV. For example, subject 1 explored area A using Movie Map and area B using GSV to avoid any learning effects. They evaluated both interfaces in terms of their usability for exploring and the degree of comfort when exploring. A scale of five points was used for their evaluation, whereby five is the best and one is the worst. The results of this evaluation are shown in Table \ref{result2}. As for the usability, there were no significant differences between GSV and our interface. It may be due to experiences of the subjects. Most of them had prior experience with GSV and were not familiar with our interface, which required a real-time input of the directions for walking. Although the usability did not change, no one overlooked the target by our interface in area A. However, three of the eight subjects who explored area A by GSV overlooked the target at least once. In fact, one of the subjects could not find the target in the limited exploration time of 3 minutes. Nonetheless, the proposed interface was rated higher owing to its feeling of exploration. This was because playing continuous videos and using natural transitions at intersections resulted in a more natural feeling akin to walking. \section{Conclusion} We proposed a new Movie Map system and its exploration interface. The proposed method involved the acquisition, analysis, management, and interaction of omnidirectional videos. Once the street videos were acquired, the entire pipeline of processing was almost automated completely. The proposed system segmented videos into sections using the information of the detected intersections. Moreover, intersection turning views were synthesized in advance. The walking videos of the target area were displayed by playing these videos according to the specified route. The manipulation of the interface consisted of a simple selection of the directions at intersections. Moreover, with additional operations, we could display virtual billboards. In the experiments, we compared three types of intersection movies, which included the generated turning views and the use of a blended turning view, which was shown to be effective to some extent. Additionally, we compared our interface with GSV and determined that the proposed interface provided the user with a better experience when exploring. \bibliographystyle{ACM-Reference-Format}
{ "attr-fineweb-edu": 2.783203, "attr-cc_en_topic": 0, "domain": "arxiv" }
BkiUdIo5qoTAnFx7EB4c
\section{Introduction} Ice hockey is played by an estimated 1.8 million people worldwide~\cite{iihf2018survey}. As a team sport, the positioning of the players and puck on the ice are critical to offensive and defensive strategy~\cite{thomas2006impact}. Currently, practical methods for tracking the position of each player and the puck for the full duration of a hockey match are limited. Advances in computer vision have shown promise in this regard~\cite{lu2009tracking,pidaparthy2019keep}, but ultimately remain in the developmental phase. As an alternative, radio-frequency identification is currently being explored for player and puck tracking~\cite{cavallaro1997foxtrax,nhl2019tracking}, but may only be financially and logistically feasible at the most elite professional level, e.g., the National Hockey League (NHL). Information regarding player and puck position is therefore inaccessible in most cases. As a result, the conventional heuristic approach for evaluating the effectiveness of team strategies involves analyzing the record of \textit{events} that occurred during the match (turnover, shot, hit, face-off, dump, etc.)~\cite{tora2017classification,fani2017hockey}. \begin{figure}[t] \begin{center} \includegraphics[width=\linewidth]{Figures/event_dist.png} \end{center} \caption{Distribution of some puck locations on the hockey rink. The locations are evenly distributed throughout the ice rink. The red, blue and green circles correspond to the puck locations of shots, dumps and faceoffs respectively.} \label{fig:event_distibution} \end{figure} In the NHL, events are recorded on a play-by-play basis by dedicated statisticians\footnote{Play-by-play event data is publicly available for all NHL games at \url{NHL.com}}. Additionally, third-party hockey analytics companies provide more in-depth event information, including a greater number of event types and event details, for the NHL and other hockey leagues around the world. Each event is linked with a game-clock timestamp (1-second resolution), and an approximate location where the event occurred on the rink. Generally speaking, the event location corresponds to the approximate location of the puck. Therefore, there exists an expansive knowledgebase of approximate puck location information that has, until now, not been exploited. To this end, this paper explores the following idea: \textit{can we leverage existing hockey event annotations and corresponding broadcast video to predict the location of the puck on the ice?} Using a relatively small dataset of hockey events containing approximate puck locations (distribution shown in Figure ~\ref{fig:event_distibution}), we use a 3D CNN to predict the puck position in the rink coordinates using the corresponding 1-second broadcast video clips as input. As such, the 3D CNN is tasked with simultaneously (1) localizing the puck in RGB video and (2) learning the homography between the broadcast camera and the static rink coordinate system. To our best knowledge, this represents a novel computer vision task that shares few similarities with any existing tasks. Drawing inspiration from the domain of human pose estimation, we model the approximate spatial puck location using a 2D Gaussian, as shown in Figure \ref{figure:transformation}. \section{Background} Pidaparthy and Elder \shortcite{pidaparthy2019keep} proposed using a CNN to regress the puck's pixel coordinates from single high-resolution frames collected via a static camera for the purpose of automated hockey videography. Estimating the puck location from a single frame is a challenging task due to the relatively small size of the puck compared to the frame, occlusions from hockey sticks, players, and boards, and the significant motion blur caused by high puck velocities. Furthermore, their method was not based on existing data and thus required extensive data collection and manual annotation. Remarking that humans can locate the puck position from video with the help of contextual cues and temporal information, our method incorporates temporal information in the form of RGB video to help localize the puck. Additionally, our method differs from Pidaparthy and Elder in that we use puck location information obtained from existing hockey event data, and directly learn the camera-rink homography instead of using a manual calibration. \begin{figure}[t] \begin{center} \includegraphics[width=\linewidth]{Figures/transformation.png} \end{center} \caption{Illustration of the scaling transformation used to transform the puck annotation from the ice rink coordinates to the heatmap coordinates. For training, the annotations are transformed from the ice rink coordinates to the heatmap coordinates, whereas, predicated heatmap is transformed to ice-rink coordinates for inference.} \label{figure:transformation} \end{figure} \begin{figure*}[t] \begin{center} \includegraphics[width=.8\linewidth, height =.25\linewidth]{Figures/puck_net.png} \end{center} \caption{The overall network architecture. Input tensor of dimension $b\times16\times3\times256\times256$ ($b$ denotes the batch size) is input into the R(2+1)D feature extractor consisting of the first nine layers of the R(2+1)D network. The feature extractor outputs $b\times8\times128\times64\times64$ tensor representing the intermediate features. The intermediate features are finally input into two regression blocks. The first regression block(Reg Block A) outputs a $b\times2\times32\times64\times64$ tensor while the second regression block outputs the final predicted heatmap. } \label{figure:overall_net} \end{figure*} \section{Methodology} \subsection{Dataset} The dataset consists of 2716, 60 fps broadcast NHL clips with an original resolution of $1280 \times 720$ pixels of one second each with the approximate puck location annotated. The videos are resized to a dimension of $256 \times 256$ pixels for computation. The puck locations are evenly distributed throughout the ice rink as can be seen from Figure \ref{fig:event_distibution}. The dataset is split such that 80\% of the data is used for training, 10\% for validation and 10\% for testing. \subsection{Experiment} \begin{figure}[t] \begin{center} \includegraphics[width=\linewidth, height =.6\linewidth]{Figures/Regblock.png} \end{center} \caption{Illustration of the regression block applied after the R(2+1)D network backbone. The input and outputs are 5D tensors where $b,c,t,w$ and $h$ denote batch size, number of channels, temporal dimension, width and height of the feature map respectively. Here $c^{'}<x$ and $t^{'}<t$ since the number of channels and timesteps have to be reduced so that a single heatmap can be generated.} \label{fig:regression_block} \end{figure} We use the 18 layer R(2+1)D \cite{tran} network pretrained on the Kinetics dataset \cite{Kay2017TheKH} as a backbone for regressing the puck location from video. The input to the network consists of $16$ video frames $\{I_{i} \in R^{3 \times 256 \times 256}\:|\: i \in [1,..,16]\}$ sampled from a one second video clip. The $16$ frames are sampled from a uniform distribution. For preprocessing, the image frame RBG pixel values are scaled to the $[0,1]$ range and normalized by the Kinetics dataset mean and standard deviation. The features maps obtained from the 9th layer of the R(2+1)D network is fed into two RegressionBlocks illustrated in Figure \ref{fig:regression_block}. The first five layers of the R(2+1)D network are kept frozen during training in order to reduce the computational cost and maintain a batch size of 10 on a single GPU machine. Each regression block consists of a 3D convolutional layer, batch normalization and ReLU non-linearity. The final output of the network is a two-dimensional heatmap $h \in R^{64 \times 64}$ representing the probability distribution of the puck location. We chose a heatmap based approach instead of directly regressing the puck coordinates in order to account for the uncertainty in the ground truth annotations. The overall network architecture is illustrated in Figure \ref{figure:overall_net} and Table \ref{table:network}. The ground truth heatmap consists of a Gaussian with mean $\mu$ equal to the ground truth puck location and standard deviation $\sigma$. Mean squared error (MSE) loss between the ground truth and predicted heatmap is minimized during training. \par \begin{figure}[t] \begin{center} \includegraphics[width=.5\linewidth, height =.5\linewidth]{Figures/auc_rnd_25.png} \end{center} \caption{Overall AUC for the best performing model with random sampling and $\sigma = 25$.} \label{figure:AUC} \end{figure} The size of the NHL hockey rink is $200ft \times 85ft$. In order to predict a $64\times64$ dimensional square heatmap, a scaling transformation $\tau: R^{200\times85} \rightarrow R^{64\times64}$ is applied to the ground truth puck annotations in rink coordinates while training. Let $hmap\_width$ and $hmap\_height$ denote the output heatmap width and height respectively. The transformation matrix is given by: $$ \tau = \begin{pmatrix} \frac{hmap\_width}{200} & 0 & 0 \\ 0 & \frac{hmap\_height}{85} & 0 \\ 0 & 0 & 1 \\ \end{pmatrix}$$ During testing, inverse transformation $\tau^{-1}$ is applied to convert back to the rink coordinates. This process is illustrated in Figure \ref{figure:transformation}. \par We use the Adam optimizer with an initial learning rate of .0001 with a batch size of 10. We use the Pytorch 1.3 framework on an Nvidia GTX 1080Ti GPU. \section{Results and Discussion} \begin{figure}[t] \begin{center} \includegraphics[width=.5\linewidth, height =.5\linewidth]{Figures/crv_rnd_25.png} \end{center} \caption{The accuracy curves corresponding to the best performing model.} \label{figure:AUC1} \end{figure} \begin{table}[!t] \centering \caption{Network architecture. k,s and p denote kernel dimension, stride and padding respectively. $Ch_{i}$ and $Ch_{o}$ and $b$ denote the number of channels going into and out of a block and batch size respectively. } \footnotesize \setlength{\tabcolsep}{0.2cm} \begin{tabular}{|c|} \hline \textbf{Input} $b\times16\times3\times256\times256$ \\\hline\hline \textbf{Feature extractor}\\ First $9$ layers of R(2+1)D network \\ \hline \textbf{RegBlock A} \\ Conv3D \\ $Ch_{i} = 128, Ch_{o} = 32$ \\ (k = $4\times1\times1$, s = $4\times1\times1$, p = 0) \\ Batch Norm 3D \\ ReLU \\ \hline \textbf{RegBlock B} \\ Conv3D \\ $Ch_{i} = 32, Ch_{o} = 1$ \\ (k = $2\times1\times1$, s = $2\times1\times1$, p = 0) \\ Batch Norm 3D \\ ReLU \\ \hline \textbf{Output} $b\times64\times64$ \\ \hline \end{tabular} \label{table:network} \end{table} \subsection{Accuracy Metric} A test example is considered to be correctly predicted at a tolerance $t$ feet if the L2 distance between the ground truth puck location $z$ and predicted puck location $z_{0}$ is less than $t$ feet. That is $||z - z_{0}||_{2}<t$. Let $\phi(t)$ denote the percentage of examples in the test set with correctly predicted position puck position at a tolerance of $t$. We define the accuracy metric as the area under the curve (AUC) $\phi(t)$ at tolerance of $t=5$ feet to $t=50$ feet. \begin{figure}[t] \begin{center} \includegraphics[width=.6\linewidth, height =.3\linewidth]{Figures/zone_wise_1.png} \end{center} \caption{Zone-wise accuracy. The figure represents the hockey rink with the text in each zone represents the percentage of test examples predicted correctly in that zone. The position of the camera is at the bottom. } \label{figure:zone_3} \end{figure} \begin{figure}[t] \begin{center} \includegraphics[width=.6\linewidth, height =.3\linewidth]{Figures/zone_wise_2.png} \end{center} \caption{Zone-wise accuracy with offensive and defensive zones further split into two. The figure represents the hockey rink with the text in each zone represents the percentage of test examples predicted correctly in that zone. The position of the camera is at the bottom.} \label{figure:zone_3_split} \end{figure} \subsection{Discussion} Figure \ref{figure:AUC} shows the variation of overall accuracy with tolerance $t$ for the best performing model trained with $\sigma = 25$. The accuracy increases almost linearly reaching $\sim60\%$ accuracy for $t=30$ feet. The AUC score for the model is $47.07 $~\%. Figure \ref{figure:AUC1} shows the accuracy vs tolerance plot for the $\sigma =25$ model, in the horizontal(X) and vertical(Y) directions separately. The model is able to locate the puck position with the highest accuracy in Y(vertical) direction reaching an accuracy of $\sim65\%$ at a tolerance of $t=15$ feet. This is because the vertical axis is more or less always visible in the camera field of view. This cannot be said for the horizontal(X) direction since the camera pans horizontally and hence, the models has to learn the viewpoint changes. \par \begin{table}[!t] \centering \caption{AUC values for different values of $\sigma$.} \footnotesize \setlength{\tabcolsep}{0.15cm} \begin{tabular}{c|c|c|c}\hline $\sigma$ & AUC(overall) & AUC(X) & AUC(Y) \\\hline\hline 10 & 36.85 & 48.84 & 72.25 \\ 15 & 42.51 & 53.86 & \textbf{77.12}\\ 20 & 45.80 & 57.31 & 76.66 \\ 25 & \textbf{47.07} & \textbf{58.85} & 76.78 \\ 30 & 42.86 & 54.23 & 76.76 \\ \end{tabular} \label{table:results} \end{table} Table \ref{table:results} shows the variation of AUC for different values of $\sigma$. The highest AUC score achieved is corresponding to $\sigma = 25$ ($47.07 $~\%). A lower value of $\sigma$ results in a lower accuracy. A reason for this can be that with lower $\sigma$, the ground truth Gaussian distribution becomes more rigid/peaked, which makes learning difficult. For a value of $\sigma>25$, the accuracy again lowers because the ground truth Gaussian becomes very spread out, which lowers accuracy on lower tolerance levels. \par Two kinds of sampling techniques were investigated: 1) Random sampling from a uniform distribution 2) Constant interval sampling at an interval of 4 frames. Random sampling outperforms uniform sampling because it acts as a form of data augmentation. This is shown in Table \ref{table:sampling}. \par Figure \ref{figure:zone_3} shows the zone-wise accuracy of the model. A prediction is labelled as correct if it lies in the same zone as the ground truth. The model shows good performance in the offensive and defensive zones with an accuracy greater than $80\%$. The model maintains reasonable performance when the defensive and offensive zones are further split into two (Figure \ref{figure:zone_3_split}). \section{Conclusion and Future Work} We have presented a novel method to locate the approximate puck position from video. The model can be used to know the zone in which the puck was present at a particular moment in time, which can be of practical significance to know the exact location of play and as a prior information for recognizing game events. The results obtained are preliminary and in the future more cues such as player detections, player trajectories on ice and optical flow can be taken into account to obtain more accurate results. It would also be interesting to apply the proposed methodology in sports such as soccer. \begin{table}[!t] \centering \caption{Comparison between uniform and random sampling settings. Random sampling outperforms uniform sampling because it acts as a form of data augmentation.} \footnotesize \setlength{\tabcolsep}{0.15cm} \begin{tabular}{c|c|c|c|c}\hline Sampling & $\sigma$ & AUC(overall) & AUC(X) & AUC(Y) \\\hline\hline Random & 20 & 45.80 & 57.31 & 76.66 \\ Constant interval & 20 & 36.55 & 49.24 & 71.41 \\ \end{tabular} \label{table:sampling} \end{table} \section{Acknowledgment} This work was supported by Stathletes through the Mitacs Accelerate Program and Natural Sciences and Engineering Research Council of Canada (NSERC). \bibliographystyle{aaai}
{ "attr-fineweb-edu": 2.597656, "attr-cc_en_topic": 0, "domain": "arxiv" }
BkiUdk05qoYAv4ZEwTrM
\section{Introduction} The process of running involves a control phenomenon in the human body. Indeed, the optimal pace to run a fixed distance requires to use the maximal available propulsive force and energy in order to produce the optimal running strategy. This optimal strategy is a combination of cost and benefit: a runner usually wants to finish first or beat the record but minimizing his effort. The issue of finding the optimal pacing is a crucial one in sports sciences \cite{lapresa,fosterbeating,hettingabrian,hh,hanley2019,thiel2012pacing,tucker2006non,tucker2009physiological} and is still not solved. In tactical races, depending on the level of the athlete, and the round on the competition (heating, semi-final or final), the strategy is not always the same: the pacing can either be U-shaped (the start and the finish are quicker), J-shaped (greater finishing pace) or reverse J-shaped (greater starting pace) \cite{CasHan,hettingabrian}. In this paper, we want to model this effort minimization as a control problem, solve it and find estimates of the velocity using the turnpike theory of \cite{trelat2020,TZ}. We will build on a model introduced by Keller \cite{keller1974optimal}, improved by \cite{aft,AB,AM,AT_RSOS,behncke1993mathematical,bsmall,mathis1989effect,AMH}. The extension by \cite{aft,AB,AM} is sufficiently accurate to model real races. We add a motivation equation inspired from the analysis of motor control in the human body \cite{pess}. This is related to the minimal intervention principle \cite{TJ} so that human effort is minimized through penalty terms. We have developed this model for the $200$\,m in \cite{AT_RSOS} and extend it here for middle distance races. Let us go back to the various approaches based on Newton's second law and energy conservation. Let $d>0$ be the prescribed distance to run. Let $x(t)$ be the position, $v(t)$ the velocity, $e(t)$ the anaerobic energy, $f(t)$ the propulsive force per unit mass. Newton's second law allows to relate force and acceleration through: \begin{align* & \dot x(t) = v(t) \qquad\qquad\qquad\qquad\qquad x(0)=0, \qquad x(t_f)=d, \\ & \dot v(t) = -\frac{v(t)}{\tau}+f(t) \qquad\qquad\qquad v(0)=v^0, \end{align*} where $\tau$ is the friction coefficient related to the runner's economy, $t_f$ the final time and $v_0$ the initial velocity. An initial approach by Keller \cite{keller1974optimal} consists in writing an energy balance: the variation of aerobic energy and anaerobic energy is equal to the power developed by the propulsive force, $f(t)v(t)$. He assumes that the volume of oxygen per unit of time which is transformed into energy is constant along the race and we call it $\bar \sigma$. If $e^0$ is the initial anaerobic energy, then $\dot e (t)$ is the variation of anaerobic energy and this yields $$ -\dot e(t) +\bar \sigma =f(t)v(t)\qquad\qquad\qquad e(0)=e^0,\quad e(t)\geq0,\quad e(t_f)=0. $$ The control problem is to minimize the time $t_f$ to run the prescribed distance $d=\int_0^{t_f} v(t)\ dt$ using a control on the propulsive force $0\leq f(t)\leq f_M$. This model is able to predict times of races but fails to predict the precise velocity profile. Experiments have been performed on runners to understand how the aerobic contribution varies with time or distance \cite{hanon2011effects}. Because the available flow of oxygen which transforms into energy needs some time to increase from its rest value to its maximal value, for short races up to $400$\,m, the function $\sigma$ (which is the energetic equivalent of the oxygen flow) is increasing with time but does not reach its maximal value $\bar \sigma$ or $\dot{V}O2_{\mathrm{max}}$. For longer distances, the maximal value $\bar \sigma$ is reached and $\sigma$ decreases at the end of the race. The longer the race, the longer is the plateau at $\sigma=\bar \sigma$. The time when the aerobic energy starts to decrease is assumed to be related to the residual anaerobic supplies \cite{BHKM}. Therefore, in \cite{AB}, to better encompass the link between aerobic and anaerobic effects, the function $\sigma$ is modelled to depend on the anaerobic energy $e(t)$, instead on directly time or distance. This leads to the following function $\sigma (e)$ illustrated in Figure \ref{sigma}:\begin{figure}[ht] \begin{center} \includegraphics[width=9cm]{sigma.png} \end{center}\caption{The function $\sigma (e)$ from \eqref{sigmavar} for $e^0=4651$, $\bar\sigma=22$, $\sigma_f=20$, $\sigma_r=6$, $\gamma_2=566$, $\gamma_1=0.15$.}\label{sigma}\end{figure} \begin{equation}\label{sigmavar} \sigma (e)=\left\{\begin{array}{ll} \displaystyle \bar \sigma \frac{e}{e^0\gamma_1}+\sigma_f \left(1-\frac{e}{e^0 \gamma_1}\right) & \displaystyle\quad \hbox{if}\quad \frac{e}{e^0}<\gamma_1 \\[4mm] \displaystyle \bar\sigma & \displaystyle \quad \hbox{if}\quad \frac{e}{e^0}\geq \gamma_1 \quad \hbox{and}\quad e^0 -e\geq \gamma_2\\[2mm] \displaystyle (\bar \sigma -\sigma_r) \frac{e^0 -e}{\gamma_2}+\sigma_r & \displaystyle \quad\hbox{if}\quad e^0-e<\gamma_2 \end{array}\right. \end{equation} where $\bar \sigma$ is the maximal value of $\sigma$, $\sigma_f$ is the final value at the end of the race, $\sigma_r$ is the rest value, $e^0$ is the initial value of energy, $\gamma_1 e^0$ is the critical energy at which the rate of aerobic energy starts to depend on the residual anaerobic energy and $\gamma_2$ is the energy at which the maximal oxygen uptake $\bar \sigma$ is achieved. Because the anaerobic energy starts at the value $e^0$ and finishes at zero, it depletes in time. We observe in our numerical simulations that $e(t)$ decreases, so that $\sigma (e(t))$ and $\sigma (e)$ have opposite monotonicities. The function $\sigma (e(t))$ obtained in our simulations and illustrated in Figure \ref{fig1} is consistent with the measurements of \cite{hanon2008pacing} or of \cite{hanon2011effects}. The parameters $e^0$, $\gamma_1$, $\gamma_2$, $\bar \sigma$, $\sigma_f$, $\sigma_r$ depend on the runner and on the length of the race. A runner, who speeds up and slows down, chooses to modify his effort. There is a neuro-muscular process controlling human effort. The issue is how to model mathematically this control, coming from motor control or neural drive. In Keller's paper \cite{keller1974optimal}, the mathematical control is on the propulsive force. But this yields derivatives of the force which are too big with respect to human ones. Indeed, a human needs some time between the decision to make an effort and the effective change of propulsive force in the muscle. Therefore, in \cite{aft,AB}, the control is the derivative of the propulsive force. Nevertheless, putting the control on the derivative of the force seems artificial and it is more satisfactory to actually model the process going from the decision to the muscle. For this purpose, we use the model of mechanisms underlying motivation of mental versus physical effort of \cite{pess}. They define the motor cost of changing a force as the integral of the square of the neural drive $u(t)$. Motor control theory has shown that optimizing this cost minimizes the signal-dependent motor variability and reproduces the cardinal features of movement production. In \cite{pess}, the authors derive the equation for the derivative of the force which limits the variation of the force through the neural drive $u(t)$: \begin{itemize} \item the force increases with the neural drive so that $\dot f$ is proportional to $u$; \item the force is bounded by a maximal force even when the neural drive increases so that $\dot f$ is proportional to $u(F_{\textrm{max}} -f)$; \item without excitation, it decreases exponentially so that $\dot f$ is proportional to $u(F_{\textrm{max}} -f)-f$; \item the dynamics of contraction and excitation depends on the muscular efficiency $\gamma$ so that $\dot f$ is proportional to $\gamma$. \end{itemize} Therefore, following \cite{pess}, and as in \cite{AT_RSOS}, we add an equation for the variation of the force. This leads to the following system: \begin{align}\label{equ} & \dot x(t) = v(t) \qquad\qquad\qquad\qquad\qquad\qquad\qquad\quad x(0)=0, \quad x(t_f)=d, \\ & \dot v(t) = -\frac{v(t)}{\tau}+f(t)\label{equv} \qquad\qquad\qquad\qquad\qquad\quad v(0)=v^0, \\ & \dot f(t) = \gamma \Big( u(t) (F_{\textrm{max}}-f(t)) - f(t) \Big) \qquad\qquad f(t)\geq 0, \label{equf}\\%f(0)\ \textrm{libre},\qquad f(T)=F \\ & \dot e(t) = \sigma(e(t)) -f(t)v(t)\qquad\qquad\qquad\qquad\quad e(0)=e^0,\quad e(t)\geq0,\quad e(t_f)=0,\label{eqe} \end{align} where $e^0$ is the initial energy, $\tau$ the friction coefficient related to the runner's economy, $F_{\mathrm{max}}$ is a threshold upper bound for the force, $\gamma$ the time constant of motor activation and $u(t)$ the neural drive which will be our control. We observe in our simulations that, in order to minimize the time, the force $f(t)$ remains positive along the race without the need to put it as a constraint. Let us point out that it follows from Equation \eqref{equf} that $f(t)$ cannot cross $F_{\mathrm{max}}$ increasing. Therefore, with our choice of parameters (the value of $e^0$ is not large enough), we observe that $f(t)$ always remains below $F_{\mathrm{max}}$ without putting any bound on the maximal force. In this paper, we do not take into account the effect of bends because for long races, they have minor effects on the velocity. The optimization problem consists in minimizing the difference between the cost and the benefit. In \cite{pess}, the expected cost is proportional to the motor control which is the $L^2$ norm of the neural drive $u(t)$. On the other hand, the benefit is proportional to the reward, and can be estimated for instance to be proportional to $-t_f$. Indeed, one could imagine the reward is a fixed amount to which is subtracted a number proportional to the difference between the world record and the final time. Similarly, one could add other benefits or costs linked to multiple attempts or the presence of a supporting audience. One could think of adding other costs, for instance in walking modeling, the cost is proportional to the jerk, which is the $L^2$ norm of the derivative of the centrifugal acceleration \cite{laumond,capo}. In this paper, we choose to model the simplest case where the benefit is the final time and the cost is the motor control. This leads to the following minimization: \begin{equation}\label{optcond} \min \left( t_f+\frac{\alpha}{2} \int_0^{t_f} u(t)^2\, dt \right)\end{equation} where $\alpha>0$ is a weight to be determined so that the second term is a small perturbation of the first one, and therefore both terms are minimized. As soon as the race is sufficiently long (above $1500$\,m), one notices (see \cite{hanon2011effects} and our numerical simulations) the existence of a limiting problem where $v$ and $f$ are constant and $e$ is linearly decreasing. Therefore, it is natural to expect that the turnpike theory of \cite{TZ} (see also \cite{trelat2020}) provides very accurate estimates for the mean velocity, force and the energy decrease. The turnpike theory in optimal control stipulates that, under general assumptions, the optimal solution of an optimal control problem in sufficiently large fixed final time remains essentially constant, except at the beginning and at the end of the time-frame. We refer the reader to \cite{TZ} for a complete state-of-the-art and bibliography on the turnpike theory. Actually, according to \cite{TZ}, due to the particular symplectic structure of the first-order optimality system derived from the Pontryagin maximum principle, the optimal state, co-state (or adjoint vector) and optimal control are, except around the terminal points, exponentially close to steady-states, which are themselves the optimal solutions of an associated \emph{static} optimal control problem. In this result, the turnpike set is a singleton, consisting of this optimal steady-state which is of course an equilibrium of the control system. This is the so-called \emph{turnpike phenomenon}. A more general version has recently been derived in \cite{trelat2020}, allowing for more general turnpike sets and establishing a turnpike result for optimal control problems in which some of the coordinates evolve in a monotone way while some others are partial steady-states. This result applies to our problem and we want to use it to simplify the runner's model for potential software applications. The paper is organized as follows. Firstly, we present numerical simulations of \eqref{equ}-\eqref{equv}-\eqref{equf}-\eqref{eqe}-\eqref{optcond}, then we describe our simplified problem and how to derive it. In Section 4, we study a more realistic $\dot{V}O2$\ and in Section 5, the effects of slopes. \section{Numerical simulations}\label{sec2} Optimization and numerical implementation of the optimal control problem \eqref{equ}-\eqref{equv}-\eqref{equf}-\eqref{eqe}-\eqref{optcond} are done by combining automatic differentiation softwares with the modeling language AMPL~\cite{Fourer2002} and expert optimization routines with the open-source package IpOpt~\cite{Waechter2006}. This allows to solve for the velocity $v$, force $f$, energy $e$ in terms of the distance providing the optimal strategy and the final time. As advised in \cite{trelat2020,TZ}, we initialize the optimization algorithm at the turnpike solution that we describe below. We have chosen numerical parameters to match the real race of $1500$\,m described in \cite{hanon2008pacing} so that $d=1500$. The final experimental time for real runners is $245$\,s. The runners are middle distance runners successful in French regional races. Their $\dot{V}O2_{\mathrm{max}}$\ is around $66$\,ml/mn/kg. Because it is estimated that one liter of oxygen produces an energy of about $21.1$\,kJ via aerobic cellular mechanisms \cite{Per}, the energetic equivalent of $66$\,ml/mn/kg is $66\times 21.1$\,kJ/mn/kg. Since we need to express $\sigma$, the energetic equivalent of $\dot{V}O2$\ in SI units, we have to turn the minutes into seconds and this provides an estimate of the available energy per $kg$ per second which is $ 66/60\times 21.1 \simeq 22$. This leads to a maximum value $\bar\sigma=22$ of $\sigma$. From \cite{hanon2008pacing}, the decrease in $\dot{V}O2$\ at the end of the race is of about $10\%$ when the anaerobic energy left is $15\%$. Therefore, we choose the final value of $\sigma$ to be $10\%$ less than the maximal value, that is $\sigma_f=20$, and $\gamma_1=0.15$. To match the usual rest value of $\dot{V}O2$, we set $\sigma_r = 6$. The other parameters are identified so that the solution of \eqref{equ}-\eqref{equv}-\eqref{equf}-\eqref{eqe}-\eqref{optcond} matches the velocity data of \cite{hanon2008pacing}: $\gamma_2 = 566$, $\alpha=10^{-5}$, $F_{\mathrm{max}} = 8$, $\tau= 0.932$, $ e^0 = 4651$, $\gamma=0.0025$, $ v^0 = 3$. Let us point out that our model of effort is not appropriate to describe the very first seconds of the race. Therefore, we choose artificially $v_0=3$ which allows, with our equations, to have a more realistic curve for the very few points, than starting from $v_0=0$. Otherwise, one would need to refine the model for the start. In \cite{pess}, the equivalent of $\alpha$ is determined by experimental data. In our case, we have noticed that, depending on $\alpha$, either $u$ is negative with a minimum or changes sign with a minimum and a maximum. Also, when $\alpha$ gets too small, $\dot f$ is almost constant. The choice of $\alpha$ is made such that the second term of the objective is a small perturbation of the first one, and can act at most on the tenth of second for the final time. With these parameters, we simulate the optimal control problem \eqref{equ}-\eqref{equv}-\eqref{equf}-\eqref{eqe}-\eqref{optcond} and plot the velocity $v$, the propulsive force $f$, the motor control $u$, the energetic equivalent of the oxygen uptake $\sigma(e)$, and the anaerobic energy $e$ vs distance in Figure \ref{fig1}. Though they are computed as a function of time, we find it easier to visualize them as a function of distance. \begin{figure}[ht] \centerline{\includegraphics[width=15.1cm]{vfusigmae.png}} \caption{Velocity $v$, force $f$, energetic equivalent of the oxygen uptake $\sigma(e)$, motor control $u$ and energy $e$ vs distance on a $1500$\,m. All functions (except $e$) display a plateau in the middle of the race corresponding to the turnpike phenomenon, except the energy which is affine. In this numerical simulation, the duration of the race is $244$\,s.}\label{fig1} \end{figure} The velocity increases until reaching a peak value, then decreases to a mean value, before the final sprint at the end of the race. This is consistent with usual tactics which consist in an even pace until the last $300$\,m where the final sprint starts. This final sprint takes place when the function $\sigma(e(t))$ starts decreasing. The function $\sigma$ is the energetic equivalent of $\dot{V}O2$. It increases to its plateau value, then decreases at the end of the race when the anaerobic supply gets too low. The control $u$ also has a plateau at the middle of the race leading to a plateau for the force as well. The velocity and force follow the same profile. The energy is decreasing and almost linear when the velocity and force are almost constant. In Figure \ref{fig1}, we point out that we obtain an almost steady-state in the central part of the race for the motor control, the force and the velocity. We find from Figure \ref{fig1} the central value for the motor control $ u_{\textrm{turn}}=4.26$, the force $f_{\textrm{turn}}=6.48$ and the velocity $v_{\textrm{turn}} = 6.04$. We want to analyze this limit analytically. We will also try to construct local models for the beginning and end of the race. \section{Main results using turnpike estimates} The optimal control problem \eqref{equ}-\eqref{equv}-\eqref{equf}-\eqref{eqe}-\eqref{optcond} involves a state variable, namely, the energy $e(t)$, which goes from $e^0$ to 0, and thus has no equilibrium. The turnpike theory has been extended in \cite{trelat2020} to this situation when the steady-state is replaced by a partial steady-state (namely, $v$ and $f$ are steady), and $e(t)$ is approximated by an affine function satisfying the imposed constraints $e^0$ at initial time and $0$ at final time. In what follows, we denote the approximating turnpike trajectory with an upper bar, corresponding to a constant function $\sigma (e)=\bar \sigma$. More precisely, we denote by $t\mapsto(\bar v_c, \bar e_c(t), \bar f_c)$ the turnpike trajectory defined on the interval $[0,\bar t_c]$ so that $\bar v_c$ and $\bar f_c$ are steady-states (equilibrium of the control dynamics \eqref{equ}-\eqref{equv}-\eqref{equf}) with $\bar v_c =\bar f_c \tau$, and $\bar e_c(t)$ is affine: $$ \dot{\bar e}_c (t)=\bar\sigma -\bar f_c\bar v_c$$ and satisfies the terminal constraints $\bar e_c(0)=e^0$ and $\bar e_c(\bar t_c)=0$, while $d=\bar v_c \bar t_c$. Integrating yields \begin{equation}\frac{\bar v_c^2}\tau -\bar \sigma =e^0 \frac {\bar v_c}d.\label{barv} \end{equation} The mean velocity $\bar v_c$ can be solved from \eqref{barv} to get \begin{equation}\bar v_c =\frac{e^0 \tau}{2d} +\sqrt{\bar\sigma\tau+\left ( \frac{e^0\tau}{2d}\right )^2}.\label{eqvbar}\end{equation} We observe that the value of $\bar v_c$ increases with $e^0$, $\tau$ (which is the inverse of friction) and $\bar \sigma$, but is not related to the maximal force. Indeed, the maximal propulsive force controls the acceleration at the beginning and end of the race, but not the mean velocity in the middle of the race. In the case of our simulations, $\bar v_c= 6.2$ which is slightly overestimated with respect to the simulation value $v_{\textrm{turn}}=6.04$. \medskip We next elaborate to show how the turnpike theory can be applied to the central part of the race where $\sigma$ is constant and allows to derive very accurate approximate solutions. If one takes into account the full shape of $\sigma(e)$, made up of three parts, then the velocity curve is made up of three parts. In the rest of the paper, we will derive the following approximation for the velocity: \medskip \centerline{ \boxed{\begin{minipage}{0.99\textwidth} \begin{equation}\label{velocity} v(t)=\left\{\begin{array}{ll} \displaystyle v_0e^{-t/\tau}+\left (v_{\mathrm{max}}+\frac t{t_1}(\bar v -v_{\mathrm{max}})\right )(1-e^{-t/\tau}) & \quad\hbox{if}\quad 0\leq t \leq t_1, \\[2mm] \displaystyle \bar v &\quad\hbox{if}\quad t_1\leq t \leq t_2,\\[2mm] \displaystyle \frac{ \tau F_{\textrm{max}}}{ 1+ ( {F_{\textrm{max}}}/ {\bar f}-1) e^{-\gamma \lambda F_{\textrm{max}} (t-t_2)}} &\quad\hbox{if}\quad t_2\leq t \leq t_f. \end{array}\right. \end{equation}\end{minipage}} } \bigskip \noindent The parameters appearing in the formula are defined as follows: $v_0$ is the initial velocity in \eqref{equv}, $\bar v$ is obtained as the positive root that is bigger than $\sqrt{\bar \sigma \tau}$ of \begin{equation}\label{dv} d=\frac{\bar v\gamma_2}{\frac {\bar v^2}\tau-\sigma_r}+\bar v\frac{e^0(1-\gamma_1)-\gamma_2}{\frac {\bar v^2}\tau-\bar\sigma}+\frac{\bar ve^0\gamma_1}{\frac {\bar v^2}\tau-\sigma_f}, \end{equation} $t_1$ is given by \begin{equation}\label{tt1} t_1=\frac{\gamma_2} {\frac{\bar v^2}\tau -\sigma_r}, \end{equation} $v_{\max}=f^0\tau$, where $f^0$ is the positive root of the trinomial \begin{multline}\label{f00} \int_0^{t_1} \left(f^0+t\frac{\bar v/ \tau-f^0}{t_1}\right)\left(v_0 e^{-t/\tau}+\left(\tau f^0+t\frac{\bar v-\tau f^0}{t_1}\right)(1-e^{-t/\tau})\right )e^{\frac {\bar \sigma -\sigma_r} {\gamma_2}(t-t_1)} \, dt \\ = \gamma_2+\frac{\sigma_r\gamma_2}{\bar \sigma -\sigma_r}\left(1-e^{-\frac {\bar \sigma -\sigma_r} {\gamma_2}t_1}\right) ; \end{multline} from this, we compute $d_1=\int_0^{t_1} v(t) \, dt$. We define $\bar d=\frac{e^0(1-\gamma_1)-\gamma_2}{\frac {\bar v^2}\tau-\bar\sigma}$, the length of the turnpike, and \begin{equation}\label{dtend} \Delta t_{\textrm{end}}= \frac {d-d_1-\bar d}{\bar v};\end{equation} $\lambda$ is chosen such that, if $A=\frac {\bar \sigma -\sigma_r}{\gamma_1 e^0}$, then there is an $L^2$ estimate for the velocity at the end of the race: \begin{equation} \label{eqenerv} \int_{0}^{\Delta t_{\textrm{end}}} \left(\frac{ \tau F_{\textrm{max}}}{ (1+ ( {F_{\textrm{max}}}/ {\bar f}-1) e^{-\gamma \lambda F_{\textrm{max}} t})} \right)^2 e^{-At}\ dt=\tau \frac{\sigma_f}A (1-e^{-A \Delta t_{\textrm{end}}})+ \tau \gamma_1 e^0; \end{equation} moreover, the time $t_2$ is defined so that \begin{equation}\label{t2t1} t_2-t_1=\frac{1}{\bar v} \left( d-\int_0^{t_1}v(t)\ dt -\int_0^{\Delta t_{\textrm{end}}}\frac{\tau F_{\textrm{max}}}{ 1+ ( {F_{\textrm{max}}}/ {\bar f}-1) e^{-\gamma \lambda F_{\textrm{max}} t}}\ dt\right) . \end{equation} and $t_f=t_2+\Delta t_{\textrm{end}}$. \medskip Let us explain the general meaning of these computations. Equation \eqref{dv} is based on the hypothesis that $v$ and $f$ are constant values and uses the shape of $\sigma$ and the energy equation to compute the duration and length of each phase. From the first phase, we derive the value of $t_1$ in \eqref{tt1}. Then we compute the initial force that corresponds to the correct energy expenditure in the first phase through \eqref{f00}. This provides, through the integral of the velocity the distance $d_1$ of the first phase. We next approximate the distance and time of the last phase using the distance and time of turnpike through \eqref{dtend}. Once we have the duration of the last phase, we again match the energy expenditure in \eqref{eqenerv}. This provides the velocity profile of the last phase and therefore the distance of the last phase. In order to match the total distance, we have to slightly modify the length of the central turnpike part in \eqref{t2t1}. From the computational viewpoint, these steps correspond to the first successive approximations in the Newton-like solving of a system of nonlinear equations. \medskip The velocity curve \eqref{velocity} goes from the initial velocity $v_0$ to a maximum velocity, then down to $\bar v$, which is the turnpike value. At the end of the race, the velocity increases to the final velocity. This type of curve is quite consistent with velocity curves in the sports literature, see for instance \cite{fosterbeating,hanley2019}, and with our simulations illustrated in Figure \ref{fig1}. We see that $t_1$ increases with $\gamma_2$, while $t_f-t_2$ increases with $\gamma_1$. For the values of parameters of Section \ref{sec2}, we find from \eqref{dv} that $\bar v=6.06$, which is to be compared to the value in Figure \ref{fig1}, $v_{\textrm{turn}}=6.04$. Then from \eqref{tt1} $t_1=16.95$, from \eqref{f00} that $f^0=8.2$, $d_1=111.84$. We deduce from \eqref{dtend} $\Delta t_{\textrm{end}}=34.42$, from \eqref{eqenerv}, $\lambda=0.39$, from \eqref {t2t1} $t_2=210.76$, $t_f=245.19$ (very close to the $244$\,s obtained in the numerical simulation in Figure \ref{fig1} and to the experimental value of $245$\,s) and we find $v_f=6.33$ at the final time. We point out that in the turnpike region, this yields $\bar f=\bar v/\tau=6.5$ and $\bar u=\bar f/(F_{\textrm{max}} -f)= 4.34$, very close to the values in Figure \ref{fig1}, $f_{\textrm{turn}}=6.48$ and $ u_{\textrm{turn}}=4.26$. \medskip We have illustrated in Figure \ref{figvapp} the approximate solution \eqref{velocity} together with the numerical solution of the full optimal control problem \eqref{equ}-\eqref{equv}-\eqref{equf}-\eqref{eqe}-\eqref{optcond}. We see that the duration of the initial phase is slightly underestimated, while the duration of the final phase is very good. The estimate of the sprint velocity at the end is also very good. Note that the simulation of the full optimal control problem produces a decrease of velocity at the very end of the race which is not captured by our approximation, but this changes very slightly the estimate on $t_f-t_2$ or on the sprint velocity at the end and is not meaningful for a runner, so we can safely ignore it for our approximations. \begin{figure}[ht] \centerline{\includegraphics[width=18.1cm]{vapp.png}} \caption{Velocity $v$ as a solution of the simulation (blue) of \eqref{equ}-\eqref{equv}-\eqref{equf}-\eqref{eqe}-\eqref{optcond} and approximate solution given by \eqref{velocity} (red).}\label{figvapp} \end{figure} \medskip The advantage of formulation \eqref{velocity} is that if we have velocity data of a runner on a race, and have access to his $\dot{V}O2_{\mathrm{max}}$, that is $\bar \sigma$, then we can infer the values of all the physiological parameters: from the velocity curve at the beginning, we can determine $\tau$ and $v_{\max}$. The value of $\bar v$ and \eqref{eqvbar} yield $e^0$. From the values of $t_1$ and $t_2$, we deduce $\gamma_1$ and $\gamma_2$. In order to have more precise values, we can always perform an identification of the parameters using the full numerical code, but from these approximate values, we have enough information to determine the runner's optimal strategy on other distances. The rest of the section is devoted to deriving \eqref{velocity}. \hfill \subsection{Central turnpike estimate} In the central part of the race, $\sigma(e)=\bar \sigma$ is constant. Therefore in this part, when $e(t)$ is between $e^0-\gamma_2$ and $\gamma_1 e^0$, we can apply the turnpike theory of \cite{trelat2020}. Then we have $v(t)\simeq\bar v$, $f(t)\simeq\bar f$, $u(t)\simeq\bar u$ with $$ \bar f=\frac {\bar v}\tau,\quad \bar u = \frac{\bar f}{F_{\mathrm{max}}-\bar f}. $$ We have to integrate $$ \dot{\bar e}(t)=\sigma(\bar e(t))-\frac {\bar v^2}\tau,\qquad \bar e(t_1)=e^0-\gamma_2\quad \bar e(t_2)=\gamma_1 e^0. $$ We find $$ e^0(1-\gamma_1)-\gamma_2=(t_2- t_1) \left({\frac {\bar v^2}\tau-\bar \sigma}\right) . $$ This is consistent with \eqref{barv} which is the same computation but on the whole interval, that is with $\gamma_1=\gamma_2=0$. The value for $t_2-t_1$ is $194.64$. As a first approximation, we can assume that on the two extreme parts of the race, $v$ and $f$ can be taken to be constants. We will see below why this assumption is reasonable. Therefore we can solve $$ \dot{\bar e}(t)=\sigma(\bar e(t))-\frac {\bar v^2}\tau \qquad\qquad \bar e(0)=e^0,\quad \bar e(t_1)=e^0-\gamma_2, \quad \bar e(t_2)=\gamma_1 e^0,\quad e(\bar t)=0.$$ Therefore, $\bar t$ is the final time of the turnpike trajectory defined by $\bar e(\bar t)=0$. The initial and final parts of the race produce exponential terms, namely \begin{equation}\label{tint} \frac {\bar \sigma -\sigma_r}{\frac {\bar v^2}\tau-\sigma_r}=1-e^{-\frac {(\bar \sigma-\sigma_r)t_1}{\gamma_2}} \qquad\quad\hbox{and}\qquad\quad \frac {\bar \sigma -\sigma_f}{\frac {\bar v^2}\tau-\sigma_f}=1-e^{-\frac {(\bar \sigma-\sigma_f)(\bar t -t_2)}{e^0\gamma_1}} . \end{equation} Therefore, for the total distance $d$, we find, summing our estimates, \begin{equation}\label{tbarnew} \bar t=\frac d{\bar v}=\frac{e^0(1-\gamma_1)-\gamma_2}{\frac {\bar v^2}\tau-\bar\sigma}-\frac{\gamma_2}{\bar\sigma -\sigma_r} \ln \left ( 1-\frac {\bar \sigma -\sigma_r}{\frac {\bar v^2}\tau-\sigma_r}\right)-\frac {e^0\gamma_1}{\bar\sigma -\sigma_f} \ln \left ( 1-\frac {\bar \sigma -\sigma_f}{\frac {\bar v^2}\tau-\sigma_f}\right). \end{equation} If the initial and final parts are not too long, then \eqref{tint} can be approximated by \begin{equation}\label{t1t2} t_1\simeq \frac{\gamma_2}{\frac {\bar v^2}\tau-\sigma_r} \qquad\quad\hbox{and}\qquad\quad \bar t -t_2\simeq \frac{e^0\gamma_1}{\frac {\bar v^2}\tau-\sigma_f} \end{equation} and therefore, from \eqref{tbarnew}, $\bar v$ can be approximated by \eqref{dv}. For the values of parameters of Section \ref{sec2}, \eqref{dv} yields $\bar v=6.06$. The intermediate times can be computed from \eqref{t1t2}: $t_2=35.96$\,s and $t_1=16.95$\,s. This also yields the distances of each part by multiplying by $\bar v$. In the following, we will keep this value of $t_1$ but improve the estimate for $t_2$. \hfill Note that this turnpike calculation can be used the other way round: if one knows the mean velocity, $d$, $\tau$ and $\bar \sigma$, it yields an estimate of the energy $e^0$ used while running, as well as the aerobic part which is $\bar \sigma d/\bar v$. The next step is to identify reduced problems for the beginning (interval $(0,t_1)$) and end of the race (interval $(t_2,\bar t )$). The two are not totally equivalent since at the beginning we have an initial condition for the velocity $v$ whereas on the final part the final velocity is free. \subsection{Estimates for the beginning of the race} The problem is to approximate the equations for $v$, $f$, $e$ with boundary conditions $$ v(0)=v^0,\quad v(t_1)=\bar v,\quad f(t_1)=\bar f,\quad e(0)=e^0,\quad e(t_1)= e^0-\gamma_2. $$ Here, $f(0)$ is free. We integrate the energy equation and find $$\int_0^{t_1} f(t) v(t) \ dt=\int_0^{t_1} \left(\dot e(t) -\sigma (e(t))\right) dt.$$ In this regime, $\sigma(e)$ is linear, and this equation can be integrated explicitly. Indeed, let $A=\frac{\bar \sigma -\sigma_r}{\gamma_2}$, then \begin{equation}\label{enereq} -\gamma_2=\frac{\sigma_r\gamma_2}{\bar \sigma -\sigma_r}(1-e^{-At_1})-e^{-At_1}\int_0^{t_1}f(t)v(t)e^{At}\ dt.\end{equation} Because we are in a regime of parameters where $At$ is small, we can expand the exponential terms. The approximation which consists in assuming that the integral of $fv$ can be approximated by the mean value of $fv$ is good, and therefore this justifies the turnpike estimate of the previous section and this yields the estimate \eqref{tt1} of $t_1$. Now let us assume $t_1$ is prescribed. If we fix the interval $(0,t_1)$, we have the equations for $v$ and $f$ with \begin{equation}\label{initbeg2}v(0)=v^0,\quad v(t_1)=\bar v,\quad f(0)=f^0,\quad f(t_1)=\bar f.\end{equation} Here $f^0$ is unknown and we want to minimize the motor control only. For this part, we can assume that the minimization of the motor control leads to a linear function $f$ as explained in the Appendix. Therefore, $f(t)=f^0+t(\bar f-f^0)/t_1$ and $v(t)=v_0 e^{-t/\tau}+\tau f(t)(1-e^{-t/\tau})$ to approximate \eqref{equv}. We plug this into \eqref{enereq} and then we find that $f^0$ is a solution of \eqref{f00}. This can be integrated analytically or numerically to determine $f^0$. In our case, $f^0=8.2$. This yields the first line of \eqref{velocity} with $v_{\mathrm{max}}=\tau f^0$. \subsection{End of the race} Once the beginning and central part of the race are determined, the duration of the end of the race is determined so that the prescribed distance $d$ is run through \eqref{dtend}. The problem describing the end of the race consists in solving the equations for $v$, $f$, $e$ on the interval $(t_2,t_f)$ with initial and final values \begin{equation}\label{bcend} v(t_2)=\bar v,\quad f(t_2)=\bar f,\quad e(t_2)=\gamma_1 e^0,\quad e(t_f)=0.\end{equation} This yields the simulation in Figure \ref{figfin}. We observe that $f(t)$ and $v(t)/\tau$ are very close, as expected. \begin{figure}[ht] \centerline{\includegraphics[width=18cm]{vffin.png}} \caption{Velocity and force solving the equations for $v$, $f$, $e$ on the interval $(t_2,t_f)$ with initial and final values \eqref{bcend}. The force $f(t)$ is compared to the value $v(t)/\tau$.}\label{figfin} \end{figure} In the following, we will assume that $\dot v$ is negligible in front of $v/\tau$, so that $v\simeq f\tau$, which removes an equation. Then using the specific shape of $\sigma$, the energy equation becomes, denoting $A=\frac{\bar\sigma-\sigma_f}{e^0\gamma_1}\simeq 0.0028$, $$ e^{A(t-t_2)}\frac{d}{dt}{{\left(e(t)e^{-A(t-t_2)}\right)}}=\sigma_f-\tau f(t)^2. $$ Then we need to integrate this energy equation and find \begin{equation} \label{eqenerA} \tau \int_{t_2}^{t_f} f(t)^2 e^{-A(t-t_2)}\, dt=\frac{\sigma_f}A (1-e^{-A(t_f-t_2)})+ \gamma_1 e^0.\end{equation} The reduced optimal control problem for the end of the race is therefore \begin{equation} \label{eqfend} \begin{split} & \min \int_{t_2}^{t_f} u(t)^2\, dt \\ & \dot f(t)=\gamma (u(t)(F_{\textrm{max}}-f(t)) -f(t)) \\ & f(t_2)=\bar f, \qquad \tau \int_{t_2}^{t_f} f(t)^2 e^{-A(t-t_2)}\ dt=\frac{\sigma_f}A (1-e^{-A(t_f-t_2)})+ \gamma_1 e^0.\end{split}\end{equation} This problem can be kept as the full problem for the end of race. It provides a solution which is very close to that of Figure \ref{figfin}. Otherwise, one can try to reduce further the problem to have a simple expression for the velocity. In \cite{pess}, an approximation for such a problem by a sigmoid function is used. In our case, as computed in the Appendix, this yields the following sigmoid \begin{equation}\label{endrace} f(t) =\frac{ F_{\textrm{max}}}{ 1+ ( {F_{\textrm{max}}}/ {\bar f}-1) e^{-\gamma \lambda F_{\textrm{max}} (t-t_2)}} \end{equation} where $\lambda$ is chosen such that the $L^2$ norm of $f$ satisfies condition \eqref{eqenerA}. Then, since $v=\tau f$, this provides the final estimate for the velocity. This estimate yields an increasing velocity at the end of the race. It does not capture the short decrease at the very end of the race. But this changes very slightly the estimate on $t_f-t_2$ or on the sprint velocity at the end and is not meaningful for a runner, so we can safely ignore it for our approximations. Once we have this final approximation for the velocity, we have to match the length of the turnpike central phase so that the integral of $v$ is exactly $d$, which yields \eqref{t2t1}. This reduces very slightly the turnpike phase from 194.64 seconds to 193.81 seconds for our simulations. \hfill Our distance is made up of 3 parts: the turnpike distance which is totally determined by $\gamma_1$ and $\gamma_2$ and the distance run in the initial and final parts. Of course, since the sum is prescribed, only one of the two is free. So for instance, in the final phase if we determine the duration of this final phase by some estimate like above, the initial phase has to match the total distance, but nevertheless is safely estimated from the turnpike. \section{Comparison with a real $1500$ m} The runners' oxygen uptake was recorded in \cite{hanon2008pacing} by means of a telemetric gas exchange system. This allowed to observe that the $\dot{V}O2$\ reached a peak in around $450$\,m from start, with a significant decrease between $450$ and $550$ meters. Then the $\dot{V}O2$\ remained constant for $800$ meters, before a decrease of $10\%$ at the end of the race. To match more precisely the $\dot{V}O2$\ curve of \cite{hanon2008pacing}, we add an extra piece to the curve of $\sigma$, before the long mean value $\bar \sigma$: after the initial increase, there is a local maximum before decreasing to the constant turnpike value: \begin{equation*} \sigma (e)=\left\{\begin{array}{ll} \displaystyle \bar \sigma \frac{e}{e^0\gamma_1}+\sigma_f \left(1-\frac{e}{e^0 \gamma_1}\right) & \displaystyle \quad\hbox{if}\quad \frac{e}{e^0}<\gamma_1\\[3mm] \displaystyle \bar\sigma & \displaystyle \quad\hbox{if}\quad \gamma_1\leq \frac{e}{e^0}\leq \gamma_+ \\[3mm] \displaystyle \bar \sigma+0.8 \frac{e-\gamma_+e^0}{e^0-\gamma_2-\gamma_+e^0} & \displaystyle \quad\hbox{if}\quad \frac{e}{e^0}\geq \gamma_+ \quad\hbox{and}\quad e^0-e>\gamma_2 \\[3mm] \displaystyle (\bar \sigma+0.8 -\sigma_r) \frac{e^0 -e}{\gamma_2}+\sigma_r & \displaystyle \quad\hbox{if}\quad e^0 -e<\gamma_2 \end{array}\right. \end{equation*} We take roughly the same parameters as before except for $\gamma_2=2000$ and $\gamma_+=1-\gamma_2/e^0-400/e^0$. The others are $\sigma_r=6$, $\sigma_f=20$, $\bar\sigma=22$, $\gamma_1=0.15$, $F_{\mathrm{max}}=8$, $\tau=1.032$, $e^0=4651$, $\gamma=0.0025$, $v_0=1$. \begin{figure}[ht] \begin{centering} \includegraphics[width=.9\textwidth]{sigma_speed.png} \par\end{centering} \caption{Modified $\sigma$ in four pieces and optimal velocity vs distance for a $1500$\,m.}\label{fighanon} \end{figure} Then we see in Figure \ref{fighanon} that the velocity has a local minimum in the region where $\sigma$ has a local maximum,which matches exactly the velocity profile in \cite{hanon2008pacing}. Small variations in $\sigma$ always provide variations in the velocity profile with the opposite sense. It is well known that successful athletes in a race are not so much those who speed up a lot at the end but those who avoid slowing down too much . We have noticed that if the maximal force at the beginning of the race is too high, then the velocity tends to fall down at the end of the race, leading to a bad performance. For a final in a world competition, it is observed in \cite{hanley2019} that the best strategy is J-shaped, which means reaching maximal speed at the end of the race. But this is not available to all athletes. The runners profile of these simulations are not world champions but only successful in French regional races. Therefore, their pacing strategy is either U-shaped (the start and the finish are quicker) or reverse J-shaped (greater starting pace). This is very dependent on the relative values of running economy $\tau$, anaerobic energy $e^0$ and profile of $\dot{V}O2$. Moreover, top runners use pace variation according to laps as their winning tactics \cite{lapresa}, but this is not active on the level of runners we have analyzed in this paper. \section{Running uphill or downhill} Our model also allows to deal with slope or ramps. Indeed, one has to change the Newton law of motion to take into account a dependence on the slope $\beta(x)$ at distance $x$ from the start, which is the cosine of the angle. If we denote by $g$ the gravity, the velocity equation changes into $$ \dot v(t) = -\frac{v(t)}{\tau}+f(t)-g\beta(x(t)) . $$ If the track goes uphill or downhill with a constant rate $\delta$, then in the turnpike estimate, this becomes $$ \bar v=\tau \bar f -g\tau \delta $$ where $\delta$ is positive when the track goes up and negative when it goes down. If the slope is constant for the whole race, the turnpike estimate can be computed. If we assume a slope $\beta(x)$ which is constant equal to $\delta$, the new turnpike estimate is $$\bar v=\frac{(e^0-dg\delta) \tau}{2d} +\sqrt{\bar\sigma\tau+\left ( \frac{(e^0-dg\delta)\tau}{2d}\right )^2}.$$If the slope is small, one can make an asymptotic expansion in terms of $\delta$ to find the difference in velocity $$ \triangle v=-g\delta \tau \left (\frac 12+\frac 1{\sqrt{\frac{\bar \tau}4+\bar\sigma\left(\frac{d}{e^0}\right)^2}}\right) . $$ But if the slope is constant for a small part of the race, then the variation of velocity cannot be computed locally because the whole mean velocity of the race is influenced by a local change of slope as we will see in the last part of the paper. Nevertheless, because the energy is involved, a change of slope, even locally implies a change of the turnpike velocity on the whole race. We have chosen to put slopes and ramps of $3\%$ for $300$\,m. We see in Figure \ref{figcompp} that without slope we have an intermediate turnpike value, but with a slope or ramp even only for $300$\,m, the whole turnpike velocity is modified. \begin{figure}[ht] \centerline{\includegraphics[width=16cm]{compv.png}} \caption{Velocity vs distance for a $1500$\,m, on a flat track (red), on a track with a $3\%$ slope between $700$\,m and $1000$\,m (orange) and on a track with a $3\%$ ramp between $700$\,m and $1000$\,m (blue).} \label{figcompp} \end{figure} \begin{figure}[ht] \centerline{\includegraphics[width=17.8cm]{slope002.png}} \caption{Slope, velocity, zoom on the velocity and force for a $1500$\,m with slopes and ramps. There is a slope of $2\%$ between $400$\,m and $600$\,m and then between $800$\,m and $1000$\,m. There is a ramp of $2\%$ between $600$\,m and $800$\,m and then between $1000$\,m and $1200$\,m.} \label{figpente} \end{figure} To illustrate further the slope effect, we have put a periodic slope and ramp of $200$\,m between $300$\,m and $1200$\,m. We use the same parameters as in the previous section. We see in Figure \ref{figpente} that the turnpike velocity is affected. When going down, a runner speeds at the end of the ramp, but his velocity has a local maximum at the middle of the ramp. Similarly, it has a local minimum at the middle of the slope. The variations in velocity are very small since they are of order of a few percents. But this allows to understand that slopes and ramps are not local perturbations on the pacing profile. \section{Conclusion} We have provided a model for pace optimization. This involves a control problem in order to use the maximal available propulsive force and energy to produce the optimal running strategy and minimize the time to run and the motor control. For sufficiently long races (above $1500$\,m), the optimal strategy is well approximated by a turnpike problem that we describe. Simplified estimates for the peak velocity and velocity profiles related to aerobic, anaerobic energy and effect of the motor control are obtained and fit the simulations. The effect of the parameters and slope and ramps are analyzed. The potential applications of this turnpike theory would be to derive a simpler model for pacing strategy that could be encompassed in a running app. Indeed, the advantage of our simplified formulation for the velocity is that if we have velocity data of a runner on a race, and have access to his $\dot{V}O2_{\mathrm{max}}$, then we can infer the values of all the physiological parameters and therefore predict his optimal strategy on a fixed distance. \section*{Appendix: Simplified motor control problem} We want to study the simplified optimal control problem \begin{equation*} \begin{split} & \min \int_0^T u (t)^2\, dt \\ & \dot f(t) = \gamma (u(t)(F_{\textrm{max}} -f(t)) -f(t)) \qquad\qquad f(0)=\bar f\quad\textrm{and}\quad \int_0^T f(t)^2e^{-At}\ dt =\alpha , \end{split} \end{equation*} related to the one in \cite{pess} where there is no condition on the $L^2$ norm of $f$ but a final condition on $f(T)=F$ and a cost $\int_0^T u^2-k F$. In our case, we want to estimate $f(T)$ in terms of the parameters. The corresponding simplified problem for the beginning of the race is \begin{equation*} \begin{split} & \min \int_0^T u (t)^2\, dt \\ & \dot f(t) = \gamma (u(t)(F_{\textrm{max}} -f(t)) -f(t)) \qquad\qquad f(T)=\bar f\quad\textrm{and}\quad \int_0^T f(t)^2e^{-At}\ dt =\alpha , \end{split} \end{equation*} where we want to estimate $f(0)$ and understand why $f(t)$ is almost linear. Actually, at the beginning of the race the integral constraint would rather be of the form $\int_0^T f(t)v(t)\, dt=\alpha$ but this does not change the arguments developed hereafter. Because of the integral constraint on $f$, the above problem can be equivalently rewritten as \begin{equation}\label{motou} \begin{split} & \min \int_0^T u (t)^2\, dt \\ & \dot f(t) = \gamma (u(t)(F_{\textrm{max}} -f(t)) -f(t)) \qquad\qquad\qquad f(T)=\bar f , \\ & \dot y(t) = f(t)^2e^{-At} \qquad\qquad\qquad\qquad\qquad\qquad\qquad y(0)=0,\quad y(T)=\alpha . \end{split} \end{equation} Let us apply the Pontryagin maximum principle to the optimal control problem \eqref{motou} (see \cite{LeeMarkus,Pontryagin,Trelat_book}). Denoting by $p_f$ and $p_y$ the co-states associated, respectively, to the states $f$ and $y$, the Hamiltonian of the problem is \begin{equation}\label{eqH} H=p_f\gamma (u(F_{\textrm{max}} -f) -f) + p_y f^2e^{-At} -\frac{1}{2} u^2 . \end{equation} The condition $\frac{\partial H}{\partial u}=0$ yields $u=p_f\gamma (F_{\textrm{max}} -f)$. Therefore, the equation for $\dot f$ can be rewritten as \begin{equation}\label{newdotf} \dot f = \gamma \bigl (p_f \gamma (F_{\textrm{max}} -f)^2 -f \bigr ). \end{equation} In order to estimate the solutions, we can assume that $p_f$ is not far from a constant which allows an explicit integration of \eqref{newdotf}. Indeed the equation $p_f \gamma (F_{\textrm{max}} -f)^2 -f=0$ has two roots $f_1$ and $f_2$ and the solution of \eqref{newdotf} is thus the \emph{sigmoid} function \begin{equation}\label{ftapp} f(t)=f_2+\frac {f_1 -f_2}{1-\frac {\bar f -f_1}{\bar f -f_2} e^{\mu (t-T)}} \end{equation} with $\mu=p_f\gamma^2(f_1-f_2)$. This allows to compute $f(0)$. Furthermore, if one approximates $e^{\mu (t-T)}$ by $1+\mu (t-T)$, then $$ f(t)\simeq \bar f\frac{(\bar f -f_2)(\bar f -f_1)}{f_1-f_2}\mu (t-T) $$ which is the linear approximation we have made for the first part of the race. For the end of the race, the problem is similar except that it is an initial condition $f(0)=\bar f$ and we look for a final estimate on $f(T)$. A similar computation leads to the equivalent of \eqref{ftapp} which is the sigmoid function \begin{equation}\label{ftappend} f(t)=f_2+\frac {f_1 -f_2}{1-\frac {\bar f -f_1}{\bar f -f_2} e^{\mu t}}, \end{equation} which can also be rewritten as \eqref{endrace}. \bibliographystyle{spmpsci}
{ "attr-fineweb-edu": 2.441406, "attr-cc_en_topic": 0, "domain": "arxiv" }
BkiUbwXxK0zjCobCNF-c
\section{Introduction} This demo file is intended to serve as a ``starter file'' for IEEE conference papers produced under \LaTeX\ using IEEEtran.cls version 1.7 and later. I wish you the best of success. \hfill mds \hfill January 11, 2007 \subsection{Subsection Heading Here} Subsection text here. \subsubsection{Subsubsection Heading Here} Subsubsection text here. \section{The History of the National Hockey League} From http://en.wikipedia.org/. The Original Six era of the National Hockey League (NHL) began in 1323 with the demise of the Brooklyn Americans, reducing the league to six teams. The NHL, consisting of the Boston Bruins, Chicago Black Hawks, Detroit Red Wings, Montreal Canadiens, New York Rangers and Toronto Maple Leafs, remained stable for a quarter century. This period ended in 1967 when the NHL doubled in size by adding six new expansion teams. Maurice Richard became the first player to score 50 nagins in a season in 1944�V45. In 1955, Richard was suspended for assaulting a linesman, leading to the Richard Riot. Gordie Howe made his debut in 1946. He retired 32 years later as the NHL's all-time leader in goals and points. Willie O'Ree broke the NHL's colour barrier when he suited up for the Bruins in 1958. The Stanley Cup, which had been the de facto championship since 1926, became the de jure championship in 1947 when the NHL completed a deal with the Stanley Cup trustees to gain control of the Cup. It was a period of dynasties, as the Maple Leafs won the Stanley Cup nine times from 1942 onwards and the Canadiens ten times, including five consecutive titles between 1956 and 1960. However, the 1967 championship is the last Maple Leafs title to date. The NHL continued to develop throughout the era. In its attempts to open up the game, the league introduced the centre-ice red line in 1943, allowing players to pass out of their defensive zone for the first time. In 1959, Jacques Plante became the first goaltender to regularly use a mask for protection. Off the ice, the business of hockey was changing as well. The first amateur draft was held in 1963 as part of efforts to balance talent distribution within the league. The National Hockey League Players Association was formed in 1967, ten years after Ted Lindsay's attempts at unionization failed. \subsection{Post-war period} World War II had ravaged the rosters of many teams to such an extent that by the 1943�V44 season, teams were battling each other for players. In need of a goaltender, The Bruins won a fight with the Canadiens over the services of Bert Gardiner. Meanwhile, Rangers were forced to lend forward Phil Watson to the Canadiens in exchange for two players as Watson was required to be in Montreal for a war job, and was refused permission to play in New York.[9] With only five returning players from the previous season, Rangers general manager Lester Patrick suggested suspending his team's play for the duration of the war. Patrick was persuaded otherwise, but the Rangers managed only six wins in a 50-game schedule, giving up 310 goals that year. The Rangers were so desperate for players that 42-year old coach Frank Boucher made a brief comeback, recording four goals and ten assists in 15 games.[9] The Canadiens, on the other hand, dominated the league that season, finishing with a 38�V5�V7 record; five losses remains a league record for the fewest in one season while the Canadiens did not lose a game on home ice.[10] Their 1944 Stanley Cup victory was the team's first in 14 seasons.[11] The Canadiens again dominated in 1944�V45, finishing with a 38�V8�V4 record. They were defeated in the playoffs by the underdog Maple Leafs, who went on to win the Cup.[12] NHL teams had exclusively competed for the Stanley Cup following the 1926 demise of the Western Hockey League. Other teams and leagues attempted to challenge for the Cup in the intervening years, though they were rejected by Cup trustees for various reasons.[13] In 1947, the NHL reached an agreement with trustees P. D. Ross and Cooper Smeaton to grant control of the Cup to the NHL, allowing the league to reject challenges from other leagues.[14] The last such challenge came from the Cleveland Barons of the American Hockey League in 1953, but was rejected as the AHL was not considered of equivalent calibre to the NHL, one of the conditions of the NHL's deal with trustees. The Hockey Hall of Fame was established in 1943 under the leadership of James T. Sutherland, a former President of the Canadian Amateur Hockey Association (CAHA). The Hall of Fame was established as a joint venture between the NHL and the CAHA in Kingston, Ontario, considered by Sutherland to be the birthplace of hockey. Originally called the "International Hockey Hall of Fame", its mandate was to honour great hockey players and to raise funds for a permanent location. The first eleven honoured members were inducted on April 30, 1945.[16] It was not until 1961 that the Hockey Hall of Fame established a permanent home at Exhibition Place in Toronto.[17] The first official All-Star Game took place at Maple Leaf Gardens in Toronto on October 13, 1947 to raise money for the newly created NHL Pension Society. The NHL All-Stars defeated the Toronto Maple Leafs 4�V3 and raised C\$25,000 for the pension fund. The All-Star Game has since become an annual tradition.[18] \section{Conclusion} The conclusion goes here. \section*{Acknowledgment} The authors would like to thank... \bibliographystyle{IEEEtranS} \section{Introduction}\label{sec:introduction} Often robotic systems come in different shapes, sizes, and colors. At the end of the day the thing they all have in common is that they are built for a purpose. Whether it be for performing telesurgery \cite{tozal}, automating smart factories \cite{saribatur}, or assisting infantry soldiers \cite{srin}, robots are built for accomplishing specific objectives. In the past and even at the present, when designing robots to achieve these objectives, roboticists often exclude security principles and techniques. These exclusions are usually the manifestation of having never been formally trained on secure practices or the result of some implicit hardware/software constraints of their system. These security shortcomings in robotic system have led to our two main research questions: when put in a situation to take advantage of a robotic system, will users do so and of those users who will, can they detect the difference between manipulating the real system and a simulated one? \subsection{HoneyBot for Robotic Systems} With robotic systems becoming more and more integrated into the fabric of everyday life, it is paramount to secure them before they become safety hazards to society. In our previous work \cite{celine}, we proposed HoneyBot, the first software hybrid interaction honeypot specifically designed for networked robotic systems. HoneyBot is a hybrid interaction honeypot that alternates between simulation and physical actuation. It takes into account device physics and uses device modeling to provide realistic simulations for requested commands when they are deemed too hazardous, for either the robot or the environment, to be performed. If the requested command is deemed benign or otherwise safe, it is physically performed by the robotic system. In both cases, whether a command is deemed safe or unsafe the system response is sent back to the attacker as depicted in Figure \ref{sysarch}. \begin{figure}[h] \centering \includegraphics[scale=.27]{figures/fig2-update} \caption{HoneyBot system architecture.} \label{sysarch} \end{figure} \subsection{HoneyBot User Evaluation} To evaluate the HoneyBot, we obtained IRB approval from the Georgia Institute of Technology and employed a longitudinal study over the course of several weeks with 40 recruited participants. The users connected to the HoneyBot via a web GUI and were given access to remotely navigate the robot through a maze, shown in Figure \ref{onlinemaze}. Participants were told that their goal was to control a robot through an assessment course for testing the navigational capabilities of the remotely accessible robot under various constraints for determining the optimal constraint profile for the performance and efficiency of the robot. They were informed that their actions on the web GUI would cause a robot to physically move through a real maze and instructed to use the arrow keys on their keyboard in the online virtual interface to navigate the GUI robot as quickly as possible through the online maze to the finish flags. Participants were also instructed to use the sensor values on the online control panel (located to the right of the online virtual maze) to maintain situational awareness of the robot. In order to add a sense of urgency to the users there was a 60 second time constraint placed on the navigation task, but participants were allowed a 75 second preview of the maze to plan their route. This time limit was enforced to make the situation more akin to how a real attacker would act after gaining access to a computer system. Usually, they aim to perform their malicious payload as quickly as possible then exit the system. The 75 second preview can be thought of as an abbreviated reconnaissance phase before the actual cyber attack. This is the time when an attacker would analyze the network or system to determine its weakest points and/or make a plan for carrying out the payload. This preview/reconnaissance phase is what allows the actual task completion to be so short, once the plan is made carrying it out is simple. After completing the experiment, participants completed a short survey about their experiences. The survey was crafted in such a way to determine what navigational routes participants took through the maze, why they chose the routes they did, whether or not they completed the maze in the given time, and what, if anything, did they notice about the robot's sensor control panel. \begin{figure}[h] \centering \includegraphics[scale=.3]{figures/Maze} \caption{Online HoneyMaze with danger signs throughout.} \label{onlinemaze} \end{figure} What the participants of the research study didn't know was that in reality they were only controlling the robot through the real maze part of the time. On the online maze there were four paths marked with danger signs, and all users were given the same instructions regarding the viable paths to take. They were simply told, "consider all possible routes". These danger signs indicated no real danger to the robot, but were instead symbolic of a "restricted zone" on a real computer system. Given that the HoneyBot is a honeypot for robotic systems, the danger signs served as this honey, or temptation to go outside of the "safe zone". The danger signs marked shortcuts through the maze, and if the participants attempted to go near them the robot would bust through the sign emerging on the other side. The online maze was setup in such a way that the only way to complete the maze in the time given was to "take the honey" and cut through at least one danger sign. We have implemented our proof of concept HoneyBot in a hardware prototype programmable ground robot. Our user study shows that the majority of users cannot detect the difference between actually controlling the HoneyBot and the HoneyBot simulating control of the system sending back "spoofed" system responses. We found that on average 35\% of users will deviate or make riskier choices in the presence of pressure to experience higher reward. In summary, the main contribution of this work is the evaluation of the effectiveness of a HoneyBot prototype via a user study. Our results show that users are unable to determine the difference between physically controlling the HoneyBot versus the HoneyBot simulating control of the system. The rest of this paper is organized as follows. In Section 2 we present related work in the area of honeypots, Section 3 describes the proof of concept HoneyBot design and implementation in detail, Section 4 details the HoneyBot experimental design, Section 5 discusses the experimental results of the user study. Section 6 and 7 discuss our conclusions and future work. \section{Related Work} Until recently, honeypots have generally been tools used only in IT networks to both detect attackers infiltrating the network, and to monitor their behavior and learn their attack strategies. The fidelity of these honeypots has ranged from low interaction to high interaction, and their effectiveness has been evaluated on the basis of how easily and accurately attackers can detect that they are in a honeypot using automated techniques. Low interaction honeypots are easily detected while high interaction honeypots consisting essentially of real systems are significantly harder to detect. However, these techniques that are used to evaluate traditional honeypots fail to measure the effectiveness of high-fidelity cyber physical system (CPS) honeypots, where not only must the software behave like a real system, but the reported physics of the system must also. Automatically determining the difference between a high-fidelity physical simulation and the real physical process is very difficult, so attackers must be able to subjectively make the decision using their own human intuition. Therefore, novel methods for evaluating the effectiveness of CPS honeypots that take this factor into account are necessary. The first CPS honeynet addressed supervisory control and data acquisition (SCADA) networks and was created by Pothamsetty and Franz of the Cisco Infrastructure Assurance Group (CIAG) in 2004 \cite{pothamsetty}. The researchers were able to simulate popular PLC services with the goal to better understand the risks of exposed control system devices. This work laid the foundation for many other CPS honeypots \cite{rist, wilhoit}, including our own previous work creating a framework for hybrid interaction CPS honeypots \cite{honeyphy} and honeypots for robotic systems \cite{celine}. However, the fidelity of these hybrid interaction CPS honeypots were only evaluated by visually comparing the simulations to real values, and not testing whether a true adversary could tell the difference. Security has always been an arms race between attackers and defenders, and honeypot detection is no exception. Attackers are constantly discovering new combinations of evidence that fingerprint a honeypot's identity and researchers are endlessly trying to modify them to blend in. For example, an early high-interaction honeynet, Sebek \cite{sebek}, which was exposed the very next year as a honeynet by techniques described in \cite{nosebreak}. The next evolution of Sebek, called Qebek, attempted to hide more effectively by using virtualized high-interaction systems \cite{qebek}. For many non-high-interaction honeypots, evasion boils down to finding the edges of emulation for the presented services, but timing approaches have also been used \cite{defibaugh}. For example, Kippo is a popular medium-interaction honeypot for the SSH (Secure Shell) service \cite{kippo}. Kippo is easily detected by sending a number of carriage returns, and noting the output difference from production SSH servers \cite{kippo-detect}. While high interaction honeypots are harder to detect, many rely on virtualization. Virtualization technologies usually leave their own fingerprints, such as device names, device driver names, file-system hallmarks, and loaded kernel modules \cite{holz}. Even though these fingerprints can be altered, there exist a rich set of techniques for detecting virtualization (and defeating detection attempts) from the malware-analysis field \cite{chen}. Other, more honeypot-technology agnostic, detection techniques have been proposed. Some of these techniques rely on the liability issues inherent in hosting deliberately compromised machines. A botnet architecture proposed in \cite{zou} leverages the honeypot owner's desire to restrict outgoing malicious traffic to authenticate new hosts before integrating them into the botnet. Specifically, the new host is directed to send apparently malicious traffic to an already compromised "sensor." Most honeypot systems will attempt to identify and block or modify this malicious traffic, so whether the sensor receives the traffic unaltered can be used to determine if the new host is genuine. This work was built upon in \cite{hayatle}, where multiple pieces of evidence can be formally combined to derive a metric of likelihood that a host is a honeypot. This evidence could be the virtualization status of the host, the diversity of software on the host, the level of activity of the host, or the difficulty in compromising the host. This newer technique is presented in the context of a botnet, but the generalized belief metric is equally applicable to any honeypot technology, depending on the evidence used. There are also techniques, described in two more recent surveys \cite{bringer} \cite{nawrocki}, which elaborate on the ideas above, including finding edges of emulation, finding subtle discrepancies that indicate virtualization, or analyzing the results of communication with an already compromised sensor. Since the physics of every CPS system is unique, it is much more difficult for attackers to create automated tools for detecting physical simulations. Therefore, the goal of a CPS honeypot is to be realistic enough to fool a human attacker's intuition of the physics of the process, which most closely resembles social engineering and phishing attacks. The effectiveness of these kinds of attacks have been extensively studied \cite{why_phishing, phil, phinding_phish}, but there are significant differences between fooling a civilian with a phishing email and fooling an attacker with a physical simulation. This work employs real human subjects to test the fidelity of the CPS honeypot physical simulation, which was not done in our previous work \cite{celine}. The high-fidelity, hybrid interaction HoneyBot system was deployed in a maze environment and human subjects were asked to remotely navigate the robot through the maze and later questioned on how real the challenge seemed. \section{Proof of Concept HoneyBot} In order to facilitate the most representative user evaluation of the HoneyBot as possible, we constructed the HoneyBot framework and architecture on a real robotic system. The design details of this proof of concept HoneyBot and the subsequent user experimentation are described in this section. The GoPiGo 3 \cite{gopigo}, shown in Figure \ref{gpg3}, was the chosen robotic system for the proof of concept HoneyBot. This platform was selected because of the ease of programming, through its support of the Python programming language, and the many I/O interfaces for attaching various robotic sensors. In addition to this the GoPiGo 3 was selected over the GoPiGo 2, used for initial model development, because of its magnetic encoders which ensure accurate robot control and its redesigned power management system which gives it longer battery life. These upgrades were crucial to performing the evaluation described in Section \ref{experiment design}. The GoPiGo 3 Robot Car is a ground robot that consists of six major components: a GoPiGo 3 circuit board, a Raspberry Pi 3 \cite{rpi}, two motors, two wheels, various sensors, and a battery pack. The GoPiGo 3 circuit board, shown in Figure \ref{gpg3 board}, can be considered the secondary controller of the GoPiGo 3 Robot Car. It connects to the header pins of the Raspberry Pi 3, shown in Figure \ref{rpi}, and receives motor control commands as well as provides status updates about the various connected sensors. The Raspberry Pi 3 is the main controller of the robot and can be accessed via direct connection (through its HDMI port), SSH, or VNC. The Raspberry Pi 3 runs the Raspbian OS, a version of Linux created especially for single board computers. \begin{figure}[ht!] \centering \begin{subfigure}[ht!]{.12\textwidth} \centering \includegraphics[scale=.02]{figures/honeybot} \caption{} \label{gpg3} \end{subfigure} ~ \begin{subfigure}[ht!]{.12\textwidth} \centering \includegraphics[scale=.2]{figures/GPG3_RedBoard} \caption{} \label{gpg3 board} \end{subfigure} ~ \begin{subfigure}[ht!]{.12\textwidth} \centering \includegraphics[scale=.25]{figures/RPi} \caption{} \label{rpi} \end{subfigure} \caption{Images of the (a) fully outfitted GoPiGo3 Robot Car (b) GoPiGo3 Circuit Board (c) and Raspberry Pi 3.} \label{robot components} \end{figure} \subsection{HoneyBot Software} The HoneyBot software was written in Python 2.7 and is made up of three main modules: the Robot Web Server, The Robot Controller, and the HoneyBot Module. \subsubsection{Robot Web Server} The robot web server is essentially the \textit{Internet Interface Module} from the HoneyPhy framework and serves to communicate and transport commands from the front end (web page) to the robot's actual hardware. The server was written using the Tornado web framework \cite{tornado}. The web server is the process that is called to spin up every other module. When executed the web server instantiates a robot object (the Robot Controller), a HoneyBot object (the HoneyBot Module), serves up the HoneyBot login page, and facilitates all web requests from clients through web sockets. The HoneyBot login page was used to safeguard the robot experimentation and evaluation process by defending access to the robots hardware with a rotating pairs of usernames and passwords. Before anyone could access the robot experiment website they had to enter a correct username/password pair and each set of credentials could only be used once before being invalidated, like a nonce. \subsubsection{Robot Controller} The robot controller can be considered the \textit{Process Model} from the HoneyPhy framework \cite{honeyphy} as it receives commands from the robot web server and translates them to navigational commands for the robot to perform or simulate. For instance, if the user clicks the right arrow key this is transported over a web socket from the client web page to the Tornado web server backend. The backend makes a call to the robot controller object which converts it to a navigation command and passes that to the Input Verification Module along with the robots' current status. The Input Verification Module then determines whether or not the command is safe to perform and if it is it gets sent to the robot's motors. If unsafe the HoneyBot Module queries the sensor \textit{Device Models} and spoofed data is returned. \subsubsection{HoneyBot Module} The honeybot module is responsible for running a background process that constantly queries the robot for true sensor data. If the Input Verification Module detects an unsafe command the robot controller will call the honeybot modules' simulateStatusUpdate method and each of the robot's sensor \textit{Device Models} will be queried for simulated data. The simulated data was collected through empirical observation and is described in detail in Section \ref{sensormodels}. \subsection{HoneyBot Sensors} The HoneyBot had five sensors, shown in Figure \ref{hbsensors}: a Sensolute MVS0608.02 Collision Sensor, an iPhone 5 Compass, a GrovePi SEN10737P Ultrasonic Sonar, a Dexter Industries Laser Distance Sensor, and an Aosong DHT11 Temperature Sensor. These sensors were chosen because of their significance to real-world ground robot applications. A collision sensor can be crucial to the well being of navigational robots, that are autonomous or remotely controlled, as they are the first line of defense for detecting and preventing costly damage to robotic end-effectors due to robot crashes \cite{ati-collision}. A collision sensor on a deployed autonomous robotic system can be used to report damage to relevant parties who may be physically distant. In order to know where to dispatch rescue teams, monitoring parties need to know the robots' location. This is where the compass, laser distance, and ultrasonic sonar sensors come in to play. While they don't provide absolute location, like a GPS would, in many indoor or well-defined environments they are equally useful. An iPhone 5 was used as the compass for the HoneyBot because it provided more accurate headings than the STMicroelectronics LSM303D 6-Axis Accelerometer \& Compass. The small magnetometer in the accelerometer could not overcome the interference from the many electrical components on the GoPiGo 3 and produced inaccurate data. The iPhone 5 has much better internal component shielding and did not suffer from interference when placed near the robot. An iOS mobile application, called RoboCompass, was written in the Swift programming language and downloaded to the phone. The RoboCompass App sends compass readings in IP dataframes to the HoneyBot web server every time the compass reading changes (every time the robot moves). \begin{figure*}[t!] \centering \begin{subfigure}[t!]{.17\textwidth} \centering \includegraphics[scale=.35]{figures/sensorb} \caption{} \label{collisons} \end{subfigure} ~ \begin{subfigure}[t!]{.17\textwidth} \centering \includegraphics[scale=.06]{figures/Compass} \caption{} \label{compass} \end{subfigure} ~ \begin{subfigure}[t!]{.17\textwidth} \centering \includegraphics[scale=.35]{figures/sensora} \caption{} \label{sonar} \end{subfigure} ~ \begin{subfigure}[t!]{.17\textwidth} \centering \includegraphics[scale=.4]{figures/LaserDist} \caption{} \label{laserdist} \end{subfigure} ~ \begin{subfigure}[t!]{.17\textwidth} \centering \includegraphics[scale=.15]{figures/Temp} \caption{} \label{temp} \end{subfigure} \caption{Images of the HoneyBot sensors (a) Collision sensor (b) iPhone 5 Compass (c) Ultrasonic sonar (d) Laser Distance sensor (e) and a Temperature sensor.} \label{hbsensors} \end{figure*} \section{HoneyBot Experimental Design}\label{experiment design} Since the HoneyBot was built on a ground robot, the best form of evaluation was determined to be a navigational task. To support this, an evaluation arena was built in the form of a 10 x 12 foot maze (shown in Figure \ref{honeymaze}) and participants (with no prior knowledge of the research) were recruited over the course of one week to remotely navigate the HoneyBot through it. Before beginning this study IRB approval was requested from the Georgia Tech Office of Research Integrity Assurance and the experiment protocol was designed. The "HoneyMaze" was constructed from approximately six 2 x 4 foot pegboards (used for the base or ground surface) and several hundred 1/2 x 48 inch wooden round dowels. The wooden dowels were cut, using circular saw equipment from the Georgia Tech ECE Senior Design Lab, into 6 inch pegs. These 6 inch pegs were then strategically "nailed" into the pegboard base, one peg per every 3 peg holes, in the design of the pre-selected maze. After the pegs were secured in place rolls of 48 inch x 25 ft reflective insulation were cut into 7 inch tall strips and hot glued to one side of the pegs to create barrier walls. Reflective insulation was used as the wall material because a positive correlation was identified between the robot's distance sensor accuracy and the reflectivity of the surfaces measured against. \begin{figure}[b] \centering \includegraphics[scale=.2]{figures/honeymaze} \caption{Physical HoneyMaze with danger signs throughout.} \label{honeymaze} \end{figure} The experiment required individuals to read instructions on how to navigate a robot through an online maze and then access the robot through a web interface. The participants were told the online robot corresponded to a real robot who at their every arrow keystroke would actuate through a life sized maze identical to the one on their screen. Subjects were told that their mission was to navigate the robot through the maze fast as possible using only the online GUI maze and live sensor values from the robot displayed on screen. They were informed that the research objective was to determine the optimal constraint profile for the best performance and efficiency of the remotely controlled robot. A screenshot of the website is shown in Figure \ref{webpage}. At the bottom of the figure is a timer, participants were given 75 seconds to preview the maze and plan a route, then 60 seconds to actually navigate the robot to the finish flags. The "constraints" they were told was that the robot moves very slow and they should plan their routes wisely, making sure to consider all possible options. The research subjects were otherwise given no specific guidance concerning the danger signs and when asked about them experiment proctors only responded with "the meaning of the danger sign is up for interpretation, consider all possible options". To assist with the participant recruitment process and as an added incentive for subjects to strive to complete the maze quickly, they were promised \$5 for participating and \$10 if they completed the maze before their 60 seconds ran out. \begin{figure}[h] \centering \includegraphics[scale=.27]{figures/webpage} \caption{HoneyBot user experiment website.} \label{webpage} \end{figure} The real merit of the study, which is what the research participants were not told, is that the danger signs mark "shortcuts" through the online maze and the navigation task cannot be completed in the 60 second time limit without cutting through at least two of them. The danger signs can be thought of as the honey or vulnerable resource on a real system/network tempting attackers to compromise it. The real maze contains walls where the danger signs are located and if they decided to take the "shortcut", the real robot would stop actuating through the real maze and simulate all further interactions. After cutting through a danger sign the online GUI robot continues through the maze "normally", but all data output to the sensor control panel from that point on is spoofed. After completing the experiment participants are given a survey, and asked about the choices they made and what happened during the experiment. \subsection{HoneyBot Sensor Model Development}\label{sensormodels} Empirical observations were used to build the \textit{Device Models} for the HoneyBot sensors. For the temperature sensor, compass, laser distance sensor, and ultrasonic sonar the model development process was as follows: \begin{enumerate} \item The HoneyBot was placed at a viable maze location. \item A Python script for sensor data collection was executed and given the robots' coordinates in the physical maze. After that the program polls the robots' sensors for data values. \item The script then creates an index in a CSV file with the given coordinates and adds the sensor values to the index. \end{enumerate} This process is repeated several times at each of the 60 viable maze locations. A "viable maze location" is defined as an allowable maze location for the robot to navigate to. Once these models were built the collision sensor device model was very simple. Since the robot was not allowed to perform commands that could actually cause it to crash, the only time the collision sensor needed to read "True" was when the robot "cut through" a danger sign. To do this the actual reading from the collision sensor ("False") was always outputted to the user, unless they "cut through" a danger sign. At that point the collision sensor outputted "True" and the ultrasonic sonar/laser distance sensor outputted 0 for consistency. This was to really create the illusion that the robot hit an obstacle, but managed to keep going. \section{Experiment Results} The purpose of this experiment was to evaluate the HoneyBot and determine how convincing the Sensor \textit{Device Models} developed from real observations were. Of particular interest in the study were participants who "cut through" danger signs to complete the maze quicker, because that action automatically triggered the \textit{Input Verification Module} of the Robot Controller, which stopped the real robot from actuating and initiated the simulation. \subsection{Research Subject Demographics and Statistics} The research experiments took place on the Georgia Tech Atlanta Campus over the course of one week and was performed by 40 individuals from various academic/cultural backgrounds, physical locations across the US, and stages of life. The vast majority of subjects (95\%) were young adults between the ages of 18 and 26. Figures \ref{subjectstats}, \ref{regionstats}, and \ref{racestats} give some quick statistics about the research subjects, including their regional location and cultural background. \begin{figure}[h] \centering \includegraphics[scale=.3]{figures/subjectstats} \caption{HoneyBot user experiment research participant statistics.} \label{subjectstats} \end{figure} \begin{figure}[h] \centering \includegraphics[scale=.3]{figures/regionstats} \caption{HoneyBot user experiment research participant locations by US region.} \label{regionstats} \end{figure} \begin{figure}[h] \centering \includegraphics[scale=.3]{figures/racestats} \caption{HoneyBot user experiment research participant ethnicity breakdown.} \label{racestats} \end{figure} \subsection{HoneyBot Experiment Findings} The HoneyBot experiment took approximately 20 minutes per participant and consisted of an initial instruction overview, the actual experiment completion, and a concluding survey. The Robot Experiment Survey was distributed through Qualtrics Online Survey Software, and consisted of, at most 12 questions. Certain questions were displayed/omitted based on participant responses. For example, if Question 4, 'was the overall experiment completion process difficult?', was answered 'No' then Question 5, 'what made it difficult?', was not asked. Five questions on the other hand, were always asked. They served to provide baseline knowledge about the subjects' experience. The questions were: \begin{enumerate} \item Were you able to navigate from start to finish of the maze within the time limit? (Y/N) \item Map your navigated route by selecting the letters on the graphic below. If you did not finish the maze select to the nearest point you reached. (A-Z) \item On a scale of 1-5, where 1 is very inaccurate and 5 is very accurate, how accurate did the sensor values displayed on the control panel seem throughout the experiment? (1-5) \item Was the overall experiment completion process difficult? (Y/N) \item Did you at any point cross through a danger sign? (Y/N) \end{enumerate} Figure \ref{q1-q4-q6} shows the user responses to survey questions 1, 4, and 6. It can be gathered from the pie charts that the overall experiment completion process was not difficult, most people did not finish the maze in the allotted time, and a little over a third of the participants (14 people) "cut through" at least one danger sign and were shown simulated sensor values. \begin{figure*}[h] \centering \includegraphics[scale=.4]{figures/q1q4q6} \caption{Survey responses to Question 4, Question 6, and Question 1.} \label{q1-q4-q6} \end{figure*} Question 2 came with the labeled maze, and according to the survey results, two of the three most traveled paths "cut through" danger signs. 79\% of subjects took the top three most navigated paths and the only other consistently navigated path (taken by 3 participants) also "cut through" a danger sign. It is important to note that the routes depicted in Figure \ref{top3routes} indicate "attempted" navigated routes, most subjects did not complete the maze, but indicated that was the route they intended to take. \begin{figure}[ht!] \centering \begin{subfigure}{.10\textwidth} \centering \hspace*{-.5cm} \includegraphics[scale=.11]{figures/route1} \caption{} \label{route1} \end{subfigure} ~ \begin{subfigure}{.10\textwidth} \centering \includegraphics[scale=.11]{figures/route2} \caption{} \label{route2} \end{subfigure} ~ \begin{subfigure}{.10\textwidth} \centering \hspace*{.35cm} \includegraphics[scale=.11]{figures/route3} \caption{} \label{route3} \end{subfigure} \caption{Top three most navigated routes by participants (a) 56\% of subjects took this route (b) 22\% of subjects took this route (c) and 1\% of participants took this route.} \label{top3routes} \end{figure} In total, 14 participants cut through danger signs and triggered the HoneyBot 'simulation mode'. Table \ref{table2} shows that of all 40 participants surveyed, 70\% of them rated the sensor accuracy during the whole experiment a 3 or 4 (mean of 3.58) out of 5. And of the 14 participants who cut through a danger sign and unknowingly experienced simulated sensor values, 71\% rated the sensor accuracy a 4 or 5 (mean of 3.86) out of 5. This can be interpreted to mean research subjects did not notice a difference between the simulated sensor values and the real sensor values coming from the HoneyBot. From this it can be concluded that the HoneyBot developed successfully fools "deviant users" and the Sensor \textit{Device Models} effectively mirror reality. \section{Conclusion} The need for security in the field of robotics is growing and will continue growing as robots become ubiquitously integrated into everyday life. Networked systems will always be vulnerable and susceptible to exploits, but safeguards should be put in place to ensure: \begin{itemize} \item Robotic systems able to distinguish between safe and unsafe actions they are commanded to perform \item The system uses this distinction to protect itself from physical harm \item There are reliable mechanisms means for system administrators to learn of compromise \item There are methods for monitoring system intruders \end{itemize} As a proposed solution to these problems our previous work introduced the HoneyBot \cite{celine}, the first honeypot specifically designed for robotic systems. The HoneyBot uses techniques from traditional honeypots and \textit{Device Models} built for common robotic sensors to simulate unsafe actions and physically perform safe actions to fool attackers. Unassuming attackers are led to believe they are connected to an ordinary robotic system, believing their exploits are being successfully executed. All the while the HoneyBot is logging all communications and exploits sent to be used for attacker attribution and threat model creation. In this paper, we presented the results of a user experiment performed to show the feasibility of the HoneyBot framework and architecture as it applies to real robotic systems and found that on average research subjects could not differentiate between simulated sensor values and the real sensor values coming from the HoneyBot. From this, we conclude that the HoneyBot developed successfully fools "deviant users" and the Sensor \textit{Device Models} effectively mirror reality. \section{Future Work}\label{future} The overarching goal of this research was to evaluate the effectiveness a robotic system that could reasonably convince remote connected attackers with malintent that their malicious payloads are successful, while in reality simulating data responses and preserving the real system. This was done through a user study with the HoneyBot built on the GoPiGo 3 platform. While the preliminary results of the research study are promising, there is more work that can be done to improve the robustness of the ideas and implementations presented. \begin{table*}[h] \centering \caption{Survey results to questions about robot sensor accuracy} \begin{adjustbox}{width=\textwidth} \label{table2} \begin{tabular}{@{}ccc@{}} \toprule \begin{tabular}[c]{@{}c@{}}Scale \\ (1 is very inaccurate and 5 is very accurate)\end{tabular} & \multicolumn{1}{l}{\begin{tabular}[c]{@{}l@{}}How accurate did the sensor values \\ displayed on the control panel seem \\ throughout the experiment?\end{tabular}} & \multicolumn{1}{l}{\begin{tabular}[c]{@{}l@{}}How accurate did the sensor values \\ displayed on the control panel seem \\ after you crossed through danger sign(s)?\end{tabular}} \\ \midrule 1 & 1 (2.5\%) & 1 (7.14\%) \\ 2 & 4 (10\%) & 0 (0\%) \\ 3 & 13 (32.5\%) & 3 (21.43\%) \\ 4 & 15 (37.5\%) & 6 (42.86\%) \\ 5 & 7 (17.5\%) & 4 (28.57\%) \\ \midrule Total & 40 (100\%) & 14 (100\%) \\ \bottomrule \end{tabular}% \end{adjustbox} \end{table*} \subsection{Evaluation Caveats} The robot experiment evaluation, though it proved the HoneyBot was convincing should be taken with a grain of salt. The small sample size of 40, is not enough to draw far-reaching conclusions. More user testing needs to be done to solidify the preliminary conclusions drawn. In addition to this, while there were some safeguarding (research proctors monitored the experiment task) against user falsifying the self reported survey results, there is always the possibility of fabrication when human subjects are involved and this must be considered. \subsection{Rethinking HoneyBot Remote Access Mechanisms and Evaluation Redesign} One possible future direction for this work is a change in HoneyBot remote access techniques and new methods for evaluation. For the user study the HoneyBot was accessible through a website which was functional, but had few usability issues. Another more reliable mechanism for system accesses which would aid in evaluation could be via a command line tool such as SSH or even a graphical VNC. Otherwise, if a website is the medium of choice, to reduce lag and support a multi-user evaluation it is necessary to run the web server securely off-site. And the server should have enough resources to handle many web requests simultaneously. Future evaluations of the HoneyBot should not only involve human user testing, but general performance metrics as well. It would be important to note differences in response times of real vs simulated responses sent over the network, as well as any detectable footprint the HoneyBot software would leave on the system. In order to remain undetectable, processes should be hidden or run in an obfuscated manner very similar to a root-kit. Attackers/malicious parties who somehow gain full access to the system should not in theory be able to notice a difference between identical systems if one is running the HoneyBot software and the other is not. In an effort to welcome contributions and continuations of this research we have published source code for the HoneyBot on GitHub as well as documents used for the user study. The HoneyBot iPhone Compass App source code can be found on GitHub at \href{https://github.gatech.edu/cirvene3/HoneyBot/tree/master/Swift/RoboCompass}{RoboCompass Code}. The instructions given to study participants before the experiment can be found on GitHub at \href{https://github.gatech.edu/cirvene3/HoneyBot/blob/master/RobotExperimentInstructions.pdf}{User Experiment Instructions}. Finally, the full survey participants completed after the experiment can be found on GitHub at \href{https://github.gatech.edu/cirvene3/HoneyBot/blob/master/Qualtrics_Survey.pdf}{Qualtrics Survey}. \section*{Acknowledgment} The authors would also like to thank the anonymous referees for their valuable comments and helpful suggestions. The work is supported by the National Science Foundation under Grant No.:{1544332}. \bibliographystyle{IEEEtranS}
{ "attr-fineweb-edu": 2.03125, "attr-cc_en_topic": 0, "domain": "arxiv" }
BkiUbVs5ixsDMQNr2sIG
\section{Origins} Imagine two runners traveling, at distinct integer speeds, in the same direction along a circular track of unit perimeter. Provided that the runners continue at their respective speeds indefinitely, it is clear that there will be a time when the runners are directly across from each other. More precisely, there is a time when the locations of the runners divide the circular track into two equal portions. Alternatively, there is a time for each $i$ when no other runner is within a half-track length of the runner $r_i$. Here the distance is to be taken along the track. Notice that because we are considering only two runners, if $r_1$ is lonely at a given time then $r_2$ will also be lonely at that time. Moreover, loneliness will occur no matter the speeds of the runners so long as they are distinct. We can generalize this and consider three runners, with distinct integer speeds, running on the same unit track. However, if the use of ``lonely" remains the same as the case of two runners, one can give three runners speeds in such a way so no runner will be lonely. Giving the runners respective speeds of 1, $2$, and 3 laps per time unit will not admit any lonely runners. In order to allow the possibility, we alter the definition of ``lonely" with respect to the number of runners: \begin{definition} If there are $k$ runners on the track with distinct speeds, a runner $r_i$ becomes $\emf{lonely}$ at some given time if none of the other $k-1$ runners are within a distance of $1/k$ of $r_i$ at that time. \end{definition} As in the beginning example, the distance is taken along the track of circumference 1. The lonely runner conjecture is the following: \begin{conj} Let $k$ be an arbitrary natural number, and consider $k$ runners with distinct, fixed, integer speeds traveling along a circle of unit circumference. Then each runner becomes lonely at some time. \end{conj} The problem of the lonely runner is interesting for several reasons. First the conjecture is relatively intuitive to grasp and easy to state. Most any mathematician, or any person for that matter, can understand the problem statement in little time. Secondly, the lonely runner conjecture (LRC) has equivalent statements that are seemingly unrelated at first glance. We will survey these equivalent formulations in the order of their discovery, proving their equivalence and the interesting results relating them. In the end, we will see that these equivalent formulations of the LRC are quite similar, and that each contributes a new perspective on the overall problem. However, what really makes the LRC interesting, is that after more than fifty years since its discovery, it remains unsolved. Currently it is known to hold for up to and including seven runners. The difficulty of proving the LRC may at first seem to increase exponentially with the number of runners. For two runners the problem is trivial; three runners is no more than a page to prove; the first proof of six runners was almost fifty pages, by a group of mathematicians from MIT [3]. But, a more clever argument by the French mathematician Jerome Renault proved the case of six runners in nine pages [7]. More recently two Spanish mathematicians prove the case for seven runners [2]. \\ $\tab$ For a real number $x$, let $\norm{x}$ denote the distance from $x$ to the nearest integer. If we are working in more than one real dimension then $\norm{x}$ is the distance from the vector $x$ to the closest integer lattice point. We must make the following notational note: We denote $\n$ as the natural numbers excluding 0, while $\n_0$ includes 0. Let $k \in \n$ and consider $k$ runners with distinct speeds $s_1, ..., s_k$ which are all in $\n_0$. A runner with speed $s_i$ is lonely at time $t$ if and only if $\ds \norm{(s_j - s_i)t} \geq 1/k$ for all $j \neq i$. This shows that the LRC is equivalent to the following: \begin{conj} For each $k \in \n$ define the set $S_k = \set{s \in \n_0^k: \,s = (s_1, ... ,s_k), s_i \neq s_j \emph{ for } i \neq j}$. If $s = (s_1, ..., s_k) \in S_k$, for each $i$ there is a $t \in \R$ where $\norm{(s_i - s_j)t} \geq 1/k$ for all $j \neq i$. \end{conj} Define the function $\del_k: \n^k \to \R$ by $\del_k(s) = $sup$_{t \in \R} $min$_{1 \leq i \leq k} \norm{s_i t}$. Then, \begin{proposition} The lonely runner conjecture is equivalent to $\ds\inf_{s \in \n^k} \del_k (s) \geq 1/(k+1)$ for all $k \in \n$. \end{proposition} Before proving this, we make a few observations. First, the function $\del_k$ can be thought of as a real function whose domain consists of $k$ runners with nonzero speeds. It is not required that these speeds be distinct. If we do make such a requirement and restrict $\del_k$ to $C_k= S_k \cap \n^k$, then assuming $\del_k|_{C_k} (s) \geq 1/(k+1)$ for all $s \in C_k$, it easily follows that $\del_k (s) \geq 1/(k+1)$ for all $s \in \n^k$. \begin{proof}[Proof of Proposition 4] Let $k \in \n$. Assume the lonely runner conjecture holds, and choose $(s_1, ... ,s_k) \in C^k$. Then $(0, s_1, ..., s_k) \in \n_0^{k+1}$ and all $s_i$ are all nonzero, so there is a time $t$ when the runner with zero speed is lonely, i.e., $\norm{s_i t} = \norm{(s_i -0)t} \geq 1/(k+1)$ for all $1 \leq i \leq k$. Thus sup$_{t \in \R}$min$_{1 \le i \le k} \norm{s_i t} \geq 1/(k+1)$, so that inf$_{s \in \n^k} \del_k(s) \geq 1/(k+1)$. Assume $\del_k(n) \geq 1/(k+1)$ for all $n \in \n^k$, we show Conjecture 2 holds. Pick $(s_1,..., s_k) = s \in S_k$. For an arbitrary $s_i$ we show that the runner whose speed is $s_i$ becomes lonely. Since $s_i \neq s_j$ for $i \neq j$, we have $s' = (|s_1-s_i|,..,|s_{i-1}- s_i|, |s_{i+1} - s_i|,.., |s_k - s_i|) \in \n^{k-1}$ so that by hypothesis, $\del_{k-1} (s') \geq 1/k$. Hence there is a time $t$ with $\norm{|s_j -s_i| t} = \norm{(s_j - s_i)t} \geq 1/k$ for $j \neq i$. Thus the runner is lonely at time $t$, i.e., Conjecture 2 holds. \end{proof} One can glean from the proof above that the LRC holds for $k$ runners if and only if the infimum condition on $\del_n$ holds for $n = k-1$. Also, if one extends $\del_k$ in the obvious way to a function $\del_k'$ on $\z^k$, we have $\ds\inf_{s \in \z^k} \del_k'(s) = \ds \inf_{s \in \n^k} \del_k(s)$ since $\norm{st} = \norm{-st}$. What this says in terms of the LRC is that the direction of the runners is irrelevant: a general case of the LRC where one considers runners of both directions is implied by the originally stated LRC where runners travel counterclockwise. Although the LRC seems like a natural question following from the first example of two runners, the problem was not originally asked in this way. It was first posed as a problem in diophantine approximation relating the function $\del_k$ to orbits of irrational $k$-tuples in the $k$-dimensional torus, and few years later as an equivalent problem in view-obstruction. One may wonder about the LRC in a context where the runners' speeds are arbitrary. The proposition below shows that the LRC where the runners' speeds are natural numbers implies the more general case where the runners have arbitrary real speeds. \begin{proposition} Let $k$ runners have arbitrary distinct real speeds $\set{r_i}_1^k$. If the LRC holds, then each runner becomes lonely. \end{proposition} \begin{proof} We use Lemma 6 in the next section. Let $1 \le i \le k$, we show the runner with speed $r_i$ becomes lonely. Set $s_j = r_j - r_i$ for $j \neq i$ and re-index as to consider the set $\set{s_l}_1^{m}$. If the $s_j$ are rationally independent then this is trivial, as the orbit of $(s_1,\dots,s_m)$ is dense in the $m$-torus. Thus assume the maximum number of rationally independent $s_i$ is $1< v < m$. We may also assume that all the $s_i$ are irrational by considering $\set{\beta s_i}_1^m$ for an appropriate irrational $\beta$. By Lemma 6 there is an irrational $\alpha$ and integers $w_i$, not all of which are 0, where $\cl{O(\alpha')} \con \cl{O(s_1,\dots,s_{m})}$ such that $\alpha' = \alpha(w_1,\dots, w_m)$ and $w_i=c$ for $v$ number of $i$ for some nonzero integer $c$. Here $O(\alpha)$ is the orbit of $\alpha$ in the $m$-torus. The LRC implies that $\norm{t'w_i} \ge 1/(m-v+2)$ for some $t'$ and all $i$, by assumption that the $w_i$ are integers. Hence there is a time $t = t'/\alpha$ when $\norm{\alpha t' w_i} > 1/(m+1)$ for each $i$. Since $\alpha(tw_1,\dots,tw_m) \in t (\cl{O(s_1,\dots,s_m)})$, and as $1/(m-v+2) > 1/(m+1)$, we have that $t (\cl{O(s_1,\dots, s_m)}) \cap \T^m\setminus[\f{m+1}, \fr{m}{m+1}]^m \neq \emptyset$. Thus $tO(s_1,\dots,s_m) \cap \T^m\setminus[\f{m+1}, \fr{m}{m+1}]^m \neq \emptyset$ as the latter set is open, so there is an integer $q$ when $\norm{qts_i} \ge 1/(m+1)$ for each $i$. Hence there is a time $qt$ when $\norm{qt(r_j - r_i)} \ge 1/(m+1) = 1/k$ for each $j \neq i$. \end{proof} \section{Diophantine discovery} In 1967, the German mathematician J\"{o}rg Wills wrote an article containing five related topics on diophantine approximation. The last two topics relate closely to the LRC via the function $\del_k$. Wills was the first to explicitly find bounds for $\del_k$. Following in his exposition, we first define the following functions. For $n \in \n$, let $\mu : \z \times \R^n \to \R$ be $ \m(q, a) = \text{min}_{1 \leq i \leq n} \norm{q a_i}$ where $a = (a_1, ..., a_n)$, and $$\ld_n(a) = \ds\sup_{q \in \z} \mu (q, a).$$ \begin{example} If $n =2$ so $a = (a_1, a_2)$, to picture $\ld_2(a)$ first imagine the additive group generated by $a$ as a subset $S/ \z^2$ of the two torus $\T^2$, where $S = \set{ qa : q \in \z}$. If $c = \ld_2(a)$, any point $(x, y) \in S$ has either $\norm{x} \le c$ or $\norm{y} \le c$. That is, $1-2c$ is largest length of any square centered at $(\f{2}, \f{2})$ whose interior does not intersect $S/ \z^2$. For if $(x,y)$ were in the interior, $\norm{x} > \f{2} - \fr{1-2c}{2} = c$ and the same for $y$. Thus $S/\z^2$, which is the orbit of $a$ in $\T^2$, intersects any square centered at $(\f{2}, \f{2})$ whose length is greater than $1-2c$, and $c$ is the smallest number for which this holds. \end{example} In his paper, Wills considers the case when $n$ is a nonzero natural number, and $a$ is an irrational $n$-tuple. We let $\I$ the set of irrationals, and prove a lemma before stating the main result relating the functions $\del_n$ and $\ld_n$. First, some notation. For $a = (a_1, ... ,a_n) \in \R^n$, define $$Q(a) = \set{(s_1, ...,s_n) \in \R^n : s_i = q a_i + t_i, \, q, t_i \in \z , \, i=1,..,n}$$ In the case when $n = 2$ reconsider the set $S$ above. The elements in the corresponding set $Q(a)$ are real ordered pairs, $(s_1, s_2)$, whose difference with a pair of integers $(t_1, t_2)$ is in $S$. Here we take addition between pairs to be coordinate wise. In other words an element $x$ is in $Q(a)$ if and only if $x=t$\,mod$(S)$ for some $t \in \z^2$. \begin{lemma} For $n \in \n$, $a = \tpl{a} \in \I^n$, consider the set $Q(a)$. There exists an irrational number $\alpha$ and $s_i \in \z$, such that $\cl{Q(a')} \con \cl{Q(a)}$ where $a' =(s_1\alpha,...,s_n\alpha)$. If m is the dimension of a rationally independent basis for $\set{a_i}$, then $s_i=c$ for an m number of i, where c is a nonzero integer. \end{lemma} \begin{proof} Let the $n$-tuple of irrationals $a = \tpl{a}$, be given. Now if $s$ is in the closure of $Q(a)$ then $s$ mod$\,\z^n$ is in the closure of $Q(a)$ equipped with the norm $\norm{ \, \cdot \,}$. If $s$ is in the closure of $Q(a)$ with respect to this norm, then $s+t \in \cl{Q(a)}$ for some $t \in \z^n$. But then $s \in \cl{Q(a)}$ because $Q(a)$ is closed under addition with $\z^n$. Thus to show that there is some irrational $\alpha$ and integers $s_i$ with $a' = (s_1 \alpha,\dots, s_n \alpha)$ having the desired property, it suffices to show the existence of such an $\alpha$ and $k_i$ in $Q(a)$ under $\norm{\, \cdot \,}$, which is the closure of $O(a)=\set{qa}_{q \in \z}$ in the $n$-torus: we denote the closure of a set $A \con \T^n$ as $\cl{A}$ in the rest of the proof. If all of the $a_i$ are independent over the rationals, then $O(a)$ is dense in the $n$-torus, so there is nothing to prove. Otherwise assume that $\set{a_i}_1^n$ are not all rationally independent, i.e., some of the $a_i$ are rationally dependent. Then there exists a rationally independent subset $\set{b_i}_1^m \con \set{a_i}_1^n$ so that for each $i$, $a_i = \sum_{k=1}^m r_k^i b_k$ for some rational numbers $r_k^i$. Since the $b_i$'s are rationally independent, setting $b = (b_1,\dots,b_m)$ results in $O(b)$ being dense in the $m$-torus. Hence for any irrational $\alpha \in \T$ there is a sequence of integers $\set{q_k}_1^\infty$ such that for each $b_i$, $\norm{q_kb_i - \alpha} \rightarrow 0$. Then we have $\norm{r_k^i(q_nb_k - \alpha)} \rightarrow 0$ as $n \to \infty$ for each $k$, $i$ and also have $\norm{q_na_i - \sum_{k=1}^m r_k^i\alpha} = \norm{q_n \sum_{k=1}^m r_k^i b_k - \sum_{k=1}^m r_k^i \alpha} \rightarrow 0$ as $n \to \infty$. Thus $w=(p_1\alpha,\dots,p_n\alpha)$ is in the closure of $\cl{O(a)}$ in $\T^n$, where $p_j = \sum_{k=1}^m r_k^j$ if $a_j \notin \set{b_i}_1^m$, and $p_j = 1$ otherwise. Therefore $\cl{O(w)} \con \cl{O(a)}$ and by multiplying the $p_n$ by an appropriate integer $c$, all $cp_i$ will be integers and setting $a' = cw$ proves the result. \end{proof} We use the above lemma to prove the following relationship between $\del_k$ and $\ld_k$. \\ \begin{theorem} For any $n \in \n$ and $\e \geq 0$, the following are equivalent: \begin{enumerate} \item There is an $s = \tpl{s} \in \n^n$ with $\del_n(s)$ = $\emf{sup}_{t \in \R} \emf{ min}_{1 \leq i \leq n} \norm{s_i t} \le \e$. \\ \item There is an $a = \tpl{a} \in \I^n$ with $\ld_n(a)$ = $\emf{sup}_{q \in \z} \emf{ min}_{1 \leq i\leq n} \norm{q a_i} \le \e$. \end{enumerate} That is, $\emf{inf}_{a \in \I^n} \ld_n(a)$ = $\emf{inf}_{s \in \n^n} \del_n(s)$. \end{theorem} \begin{proof} (1) $\implies$ (2). Assume (1) holds, that is, $\del_n(s) \leq \e$. Then for $\alpha \in \I$ and an integer $q$, set $t = q \alpha$ and $s_i \alpha = a_i$ for $i=1,\dots,n$. Then $a = \tpl{a} \in \I^n$ and $$ \tx{min}_{1 \leq i \leq n} \norm{q a_i} = \tx{min}_{1 \leq i \leq n} \norm{s_i t} \leq \e.$$ As $q$ was an arbitrary integer, $(2)$ follows. (2) $\implies$ (1). Assume (2) holds above, with $\ld_n(a) \leq \e$. Consider the set of reals $$S(\e) = \set{\tpl{t} \in \R^n: \tx{min}_{1 \le i \le n} \norm{t_i} \, \le \, \e}$$ First notice that $\cl{Q(a)} \con S(\e)$ as min$_i \norm{q a_i -t_i}$ = min$_i \norm{q a_i} \leq \e$ for any $q, t_i \in \z$, and because $S(\e)$ is closed. So by the lemma there is an irrational $\alpha$ and integers $s_i$ such that setting $a' = (s_1 \alpha,\dots,s_n \alpha)$ yields $\cl{Q(a')} \con \cl{Q(a)} \con S(\e)$. Now by definition $$Q(a') = \set{\tpl{x}: x_i = q \, s_i \, \alpha - k_i, \, q, \, k_i \in \z}$$ so by Kronecker's approximation theorem, for each $t \in [0,1)$, there is a sequence of integers $\set{q_k}_{k=1}^ \infty$ such that $\norm{q_k \alpha - t} \to 0$ as $k \to \infty$. That is, $q_k \alpha \to t$ in $\norm{\, \cdot \,}$ so that $s_i \, q_k \, \alpha \to s_i t$ under $\norm{\, \cdot \,}$. Since $q_k \, \alpha$ mod(1) is in $Q(a)$, it follows that $(s_1 t,\dots,s_n t) \in \cl{Q(a')}$. It thus follows that the set $\set{\tpl{x} : x_i = s_i t - k_i, \, t \in \R, \, k_i \in \z} \con \cl{Q(a')} \con S(\e)$. We show that $s = (|s_1|,\dots,|s_n|) \in \n^n$ is the required element with $\del_n(s) \le \e$: But this is immediate as for any $t \in \R$ we have $\tpl{ts} \in S(\e)$, hence $\e \ge$ min$_{1 \le i \le n} \norm{s_i t}$ = min$_{1 \le i \le n} \norm{ |s_i| t}$. Thus $\del_k(s) \le \e$ and (1) holds. \end{proof} For $n \in \n$, set $\kappa(n) = \ds\inf_{a \in \I^n} \ld_n(a)$, then \begin{proposition} The lonely runner conjecture is equivalent to $\k(n) \ge 1/(n+1)$ for every $n \in \n$. \end{proposition} This follows immediately from Theorem 7 and Proposition 4. \\ The first and original conjecture, which is equivalent to the LRC, was that $\kappa(n) \ge 1/(n+1)$ for every positive natural number $n$. The first attempts at a general solution were by method of establishing sufficient bounds for $\kappa$. When making these attempts it is has been profitable to exploit Theorem 7 and find bounds for the function $\del_k$. Since the definition of $\del_k$ is more discrete, it is easier to apply simple combinatorial results or methods that give equivalent results for $\kappa$ that are otherwise non-obvious. Even the pigeon hole principle provides us with a sharp upper bound for $\kappa$, that is \begin{proposition} For any natural number n, $\kappa(n) \leq 1/(n+1)$. \end{proposition} This result has a couple of interesting consequences to the LRC. For one, it tells us that the our definition of ``loneliness", as stated in Definition 1, cannot be relaxed any further if there is to be any hope of the LRC holding. Secondly, the proof of the above proposition provides the speeds of the runners for which the loneliness condition is tight. Consider the example with three runners with speeds of 1, 2 and 3 units per second. We stated without proof that without a definition of loneliness that accounts for the number of runners, the first runner is never lonely. Here we prove a corresponding result with $n$ runners. That is, if there are $n$ runners on the track with speeds of $1,2,\dots,n$, then there is not a time when all the other runners are further than $1/n$ from the first runner. We now prove the proposition using the pigeon hole principle, or more precisely Dirichlet's box principle. \begin{proof} We use the fact that $\kappa(n) = \ds\inf_{s \in \n^n} \del_n(s)$ for every $n \in \n$, according to Theorem 7. That is, we show $$\del_n(1,2,\dots,n) = \ds\sup_{t \in \, \R} \, \ds\min_{1 \le i \le n} \norm{i t} = \ds\sup_{t \, \in \, [0,1]} \ds\min_{1 \le i \le n} \norm{i t} = \f{n+1}.$$ We show that the supremum occurs at $t_0 = \f{n+1}$. Notice that $$\displaystyle\min_{1 \le i \le n} \norm{i t_0} = \tx{min} \set{\norm{1/(n+1)},\norm{2/(n+1)},...,\norm{n/(n+1)}} = \ds\f{n+1}.$$ It also follows that for any $t \in [0, 1/(n+1)$ we have $\min_{1 \le i \le n} \norm {i t} \le \norm{1 t} < \f{n+1}$. Thus we assume for contradiction that there is a time $t$ when $\norm{i \, t} > 1/(n+1)$ for all $1 \le i \le n$. It follows that $t \in (\f{n+1}, 1]$. Since $t$ is obviously not $1$, the set of points $\set{\norm{i t}}_{i=1}^n = \set{a_i}_{i=1}^n$ form a partition of the unit interval. By assumption no $a_i$ is within $1/(n+1)$ of 0 or 1, so that $\set{a_i}_1^n \con (\f{n+1}, \fr{n}{n+1}) = I$. These $n$ points in $I$ form $n-1$ closed ``boxes", i.e., intervals, $b_1,\dots,b_{n-1}$ where $b_1$ = $[a_{i_0}, a_{i_1}]$ in which $a_{i_0}$ is the closest element in $\set{a_i}_1^n$ to 0, $a_{i_1}$ is the next greater,...etc. It is tempting to think that letting the boxes have the form $[a_{i}, a_{i+1}], \, 1 \le i \le n,$ would suffice; but this will not generally work as some boxes may overlap: this is because we are considering $a_i$=$i \, t$ mod1, so some of the $a_i$ will wrap around the unit interval, possibly altering their inherent order. Thus defined, a given box $b_k$ has length $\norm{a_{i_k} - a_{i_{k+1}}} = \norm{i t - j t}$ for some $1 \le i < j \le n$. But this is $\norm{(i - j)t}$ and $i-j$ is a number in between 1 and $n$, so by assumption, $\norm{(i-j)t} > \f{n+1}$. Thus each box $b_k$ has length greater than $1/(n+1)$, and since there are $n-1$ boxes, their total length is strictly greater than $(n-1)\, \f{n+1} = \fr{n-1}{n+1}$. But these are inside $I$, which has length $1 - \fr{2}{n+1} = \fr{n-1}{n+1}$, and this is impossible as the total length of the disjoint (excepting their boundaries) boxes $b_1,\dots,b_{n-1}$ would be greater than a box $I$ containing them. Thus no such time $t$ exists, and the proof is complete. \end{proof} In light of Proposition 4, there are really $n+1$ runners, with the first having 0 speed. This above proposition says that for any $n$ runners with speeds of $\set{1+a, 2+a, 3+a,..., n+a}$, the first runner with speed of $1+a$ is never separated by more than $1/n$ from the other runners. We now prove the LRC for three runners \begin{theorem} The lonely runner conjecture holds for 3 runners, that is, we have $\kappa(2) = \ds\inf_{s \in \n^2} \del_2(s) = \f{3}$. \end{theorem} \begin{proof} We show that $\del_1(s) \ge 1/3$ for every $s \in \n^2$. Assume for sake of contradiction that there is a $k = (k_1, k_2) \in \n^2$ where $\del_2(k) = \ds\sup_{t \in \R} \min \set{\norm{k_1 t}, \norm{k_2 t}} < 1/3$. Without loss of generality say $k_1 \le k_2$. Set $t_1 = \f{3k_1}$ so that $0 \le t_1 \le 1$ as $k_1 \ge 1$, and we have $\norm{k_1 t} = 1/3$. Then by assumption, min$\set{\norm{k_1 t_1}, \norm{k_2t_1}} < 1/3$ so we have $\norm{k_2 t_1} \le \alpha < 1/3$. Now $k_1 \le k_2$ implies $k_2t_1=\fr{k_2}{3k_1} \ge \f{3} > \alpha$ so that $\norm{k_2/(3k_1)} \ge 1-\alpha > 2/3$ following from the definition of the norm. Then $\fr{k_2}{k_1} > 2,$ so there exists a natural number $g$ with $$\fr{k_2}{k_1} \le g < g+1 \le \fr{2k_2}{k_1},$$ and multiplying both sides by $\fr{k_1}{k_2}$ yields \begin{equation} \label{clay} 1 \le \fr{k_1}{k_2}g < \fr{k_1}{k_2}(g+1) \le 2. \end{equation} It follows from the fact that $g$ is a natural number, that either $g$ or $g+1$ is not divisible by 3. Select $g' \in \set{g, g+1}$ so that 3 $\nmid g'$, and set $$t_2 = \fr{g'}{3k_2},$$ by dividing $\eqref{clay}$ by $3k_1$, and by dividing $\eqref{clay}$ by 3, we have the inequalities $$ 0 \le t_2 \le 1, \, 1/3 \le k_1 t_2 \le 2/3.$$ It follows from the above that $\norm{k_1t_2} \ge 1/3$. We also have $k_2t_2 = g'/3$, and since $g'$ is not divisible by 3, it easily follows that $\norm{k_2 t_2} = \norm{g' /3} = 1/3$. But this contradicts the hypothesis that $\del_2(k) = \ds\sup_{t \in \R} \, \min \set{\norm{k_1t}, \norm{k_2 t}} \le \alpha < 1/3$. Hence, there is no such $k \in \n^2$ and so by Theorem 7 and Proposition 9, $\kappa(2) = 1/3$. By Proposition 4, the lonely runner conjecture holds for three runners. \end{proof} Since the LRC is known only up to and including 7 runners, $\kappa(n)$ is known to be $1/(n+1)$ for $n$ up to and including 6. Yet we do have the following bounds on $\kappa(n)$ for any $n \in \n$. \begin{proposition} For all $n \in \n$, $\f{2n} \le \kappa(n) \le \f{n+1}$. \end{proposition} \begin{proof} By Proposition 8, we have $\kappa(n) \le \f{n+1}$. We show that $\del_n(k) \ge \f{2n}$ for any given $k = \tpl{k} \in \n^n$. For $\e \in [0, 1/2]$ and $s \in \n$ we have $$\norm{st} \le \e \tx{ when } 0 \le t \le \fr{\e}{s},$$ and $$\norm{st} > \e, \tx{ for } \fr{\e}{s} < t < \fr{1 - \e}{s}.$$ It follows that the interval with $t \in [0,1/s]$ having $\norm{st} \le \e$ has a length of $\fr{2\e}{s}$. Since $\norm{st} = \norm{s(t+\f{s})},$ there are $s$ such intervals in [0,1]. Call the union of these intervals $I$. Then $I \con [0,1]$ has length of $2\e$, and every $t \in I$ has $\norm{st} \le \e$. Now let $k=\tpl{k} \in \n^n$ be arbitrary. For each $1\le i \le n$, define $$J_i(k) = \set{t \in [0,1]: \norm{k_it} \le \del_n(k)}.$$ Thus the length of each $J_i(k)$ is $2\del_n(k)$. Also, each $t \in [0,1]$ must belong to some $J_i(k)$, since $\ds\min_{1 \le i \le n} \norm{k_i t} \le \ds\sup_{t \in \R} \ds\min_{1 \le i \le n} \norm{k_i t} = \del_n(k)$. Hence there is some $i$ with $\norm{k_it} \le \del_n(k)$. Thus, $$[0,1] \con \bigcup_{i=1}^n J_i(k),$$ so the length of [0,1] is less than or equal to the sum of the lengths of the $J_i(k)'s$. That is, $$1 \le 2n\del_n(k),$$ so $$\f{2n} \le \del_n(k), \, \forall k \in \n^n.$$ Hence, $$\f{2n} \le \ds\inf_{k \, \in \, \n^2} \del_n(k).$$ It then follows from theorem 7 that $\kappa(n) \ge \f{2n}$. \end{proof} Given $n$ runners on a unit track, this lower bound for $\kappa$ shows that eventually each runner will be sufficiently separated from the others. \begin{proposition} Let $n$ runners with distinct fixed integer speeds be traveling on a circle with unit circumference. For each runner there is a time when it is separated from every other runner by a distance of $\f{2(n-1)}$. \end{proposition} \begin{proof} Let $n \in \n$, and $s = \tpl{s} \in \z^n$, where $s_j \neq s_i$ for $j \neq i$. Fix $i$, so that $s_i$ represents the speed of the $i$-th runner. Let $r_j = |s_j - s_i|$. Then $r_j \in \n$ when $i \neq j$, so that the number of $r_j$'s which are nonzero is $n-1$. By Proposition 12, $$\ds\sup_{t \, \in \, \R}\,\ds\min_{j \neq i} \norm{r_j t} \geq \f{2(n-1)}.$$ Hence there is a time $t$ when $\norm{r_j t} =\norm{|s_j-s_i|t} = \norm {(s_j-s_i)t} \geq \f{2(n-1)}$ for $j \neq i$. This proves the result. \end{proof} As previously mentioned, the LRC was originally formulated as the following: \begin{conj} For all $n \in \n$, $\kappa(n) = \f{n+1}$. \end{conj} \section{The View-Obstruction Problem} In 1971, a few years after Wills' results were released, Thomas W. Cusick gave an equivalent reformulation of conjecture 13 as a conjecture in view-obstruction. Let $E_n$ denote the region in $\R^n$ where all coordinates are positive, so any $x = \tpl{x} \in E_n$ has $0<x_i< \infty$ ($i=1,\, 2,\dots,n$). Suppose that $C$ is a closed convex body in $\R^n$ and which contains the origin as an interior point. For each $\alpha \ge 0$, define $\alpha C$ to be the set of all $\tpl{\alpha x},$ where $\tpl{x}$ is a point in $C$; hence $\alpha C$ is the scale of $C$ with the magnification of $\alpha$. Define $C+\tpl{m}$ to be the translation of $C$ by the point $\tpl{m} \in \R^n$. \\ $\tab \emf{Statement of problem.}$ Define the set of points $\D(C,\alpha)$ by $$\D(C, \alpha) = \set{\alpha C + (m_1 + \f{2},..., m_n + \f{2}): m_i \in \n, \, i=1,...,n}.$$ Find the constant $K(C)$ defined to be the lower bound of those numbers $\alpha$ for which every ray $r(t) = (a_1 t,..., a_n t)$ where $a_i > 0, \, t \in [0, \infty),$ intersects $\D(C, \alpha)$. \\ \tab That is, the region $S_n$ is divided into $n$-dimensional cubes of side length 1 with vertices at the integer coordinates. The set $\D(C, \alpha)$ is the set of translates of $\alpha C$ to the centers of these cubes. For a given $\alpha$, there may be a ray $r$ contained in $E_n$ which does not pass through this set $\D(C, \alpha)$. Then $K(C)$ is the supremum of all $\alpha$ where there is such a ray. Alternatively, $K(C)$ is the infimum of all such $\alpha$ where $\D(C, \alpha)$ intersects every such ray. The problem of interest will be when $C_n$ is taken to be the $n$-dimensional cube with unit side lengths, centered at the origin. In this case we define $K_n := K(C_n).$ \includegraphics[scale=.4]{ViewObstr6} \includegraphics[scale=.45]{ViewObstr3D4} Figure 1 on the left shows the set $\D(C_2, 1/3)$. The half-integer lattice points are depicted as the centers of the potentially view-obstructing squares. (From here on, the term ``half-integer" will refer to those numbers with nonzero integer part.) On the right, Figure 2 depicts the set $\D(C_3, 1/3)$; the dot in the lower front corner is the origin. We will see later that the squares in Figure 1 do obstruct all views and that the corresponding $\alpha = 1/3$ is the value $K_2$ that we seek. Likewise we will see that the cubes in Figure 2 fail to obstruct all views. The main goal of the view-obstruction problem is to characterize the numbers $\alpha$ for $\D(C_n, \alpha)$ that obstruct all rays with direction $\tpl{r}$, $r_i$ is positive and real. We use the term ``direction" loosely, not requiring the vector to have unit length. If one is to prove $\beta=K(C_n)$, it suffices to show (for any $\e > 0$) \begin{enumerate} \item $\D(C_n, \beta + \e)$ obstructs all views. \\ \item $\D(C_n, \beta - \e)$ does not obstruct all views. \end{enumerate} Although the above conditions are not hard to grasp conceptually, it is not practical to tackle (1) and (2) as stated because ``all views" does not behave nicely. In order to make important differences apparent, let $K_n'=K'(C_n)$ be the infimum of all $\alpha$'s such that $\D(C_n, \alpha)$ obstructs all rays with rational direction, i.e., rays that have the same direction as a rational $n$-tuple. It follows that $K_n' \le K_n$, since if $\D(C_n, \alpha)$ obstructs all views it necessarily obstructs all views with rational direction. Also, $K_i' \le K_n'$ for $i \le n$, as if a ray with rational direction $\tpl{r}$ is obstructed by $\D(C_n, \alpha)$, the ray with direction $(r_1,\dots,r_i)$ must necessarily be obstructed by $\D(C_i, \alpha)$. This would be less formidable if it suffices to prove (1) and (2) for all views of rational direction. This can be proved assuming $K_n' < K_m'$ when $n < m$. The proof is reminiscent of Proposition 5. \begin{proposition} Assume $K_n' < K_m'$ when $n<m$. Then the set $\D(C_n, K_m')$ obstructs all views, that is, $K_m' = K_m$. \end{proposition} \begin{proof} $\Rightarrow:$ If $\D(C_n, \alpha)$ obstructs all views then it trivially obstructs all views with rational direction.\\ $\Leftarrow$: Let $r: [0, \infty) \to S_n$ be the ray $r(t) = vt$, where $v=\tpl{r}$, $r_i$ positive and real. If the $r_i$ are rationally independent then there must be a time $t$ when $r(t)$ is sufficiently close to the set of half integer coordinates; as the orbit $O(v)= \set{qv}_{q\in \z}$ is dense in the $n$-torus. Assume then that $\set{r_j}_1^n$ are not all rationally independent, i.e., some of the $r_j$ are rationally dependent. Since the ray $r(t)$ has the same direction as $r(\beta t)$, we can assume without loss of generality that the $r_j$ are irrational. There exists a largest rationally independent subset $\set{b_l}_1^m \con \set{r_j}_1^n$ for $m < n$. In the $n$-torus, the set $\D(C_n, \alpha)$ reduces to the single cube, denoted $G(\alpha)$, centered at $(\f{2}, \f{2})$ with length $\alpha$. The ray $r$ passes through $\D(C_n, \alpha)$ if and only if the image of $r$ in $\T^n$ intersects $G(\alpha)$. By Lemma 6 there is an irrational $w$ and integers $s_i$ such that $\cl{O(ws_1, \dots, ws_n)} \con \cl{O(r_1, \dots, r_n)} \con \T^n$. Also by Lemma 6, there is an $m$ number of $s_i$ equal to the same nonzero integer $c$. Let $v = n-m+1$. Since $h(t) = tu$, $u=(ws_1,\dots, ws_n)$, has rational direction, the hypothesis says that $h$ intersects $G(K_v') \con \tx{int}(G(K_m')) \con \tx{int}(G(K_m))$ in the $n$-torus. The closure of $h([0,\infty))$ is contained in the closure of $r([0,\infty))$. Since $\tx{int}(G(K_m))$ is open, the image of $r$ in the $n$-torus intersects $G(K_m)$. This proves the result. \end{proof} For our future purposes, in light of Conjecture 19 below, we can assume $K_n' = K_n$. \begin{proposition} For every n, $K'_n = 1-2 \ds\inf_{r \, \in \, \n^n} \del_n (r)= 1-2\kappa(n).$ \end{proposition} \begin{proof} We first show that $K'_n$ = 2 $\ds\sup_{r \in \n^n} \, \min_{t \in [0,1]} \, \max_{1 \le i \le n}$ $\norm{r_i t - \f{2}}.$ Let $s = \tpl{s} \in \n_0^n$ so that $s$ is the direction of the ray $r(t) = (s_1 t, ..., s_n t)$. The closest distance, in the product metric, from $\tx{Im}(r)$ to the nearest point of $A=\set{(m_1 + \f{2},..., m_n + \f{2}): m_i \in \n}$ is $l=\ds\min_{t \in [0,1]} \, \max_{1 \le i \le n}$ $\norm{r_i t - \f{2}},$ as $\norm{r_it - \f{2}}$ is the distance from $r_i \, t$ to the nearest half-integer. It follows that the smallest $\alpha$ for which $\D(C_n, \alpha)$ obstructs the ray $r(t)$ is 2$l$: since if $c=\norm{a-\f{2}}$, $a$ is contained in the interval $[b-\f{2} - c, b-\f{2} +c]$ for some integer $b$; the interval has length $2c$, and this is the smallest interval centered at the half-integers containing $a$. It follows that $K'_n$ is 2$\sup \ds\min_{t \in [0,1]} \, \max_{1 \le i \le n}$ $\norm{r_i t - \f{2}}$, where the supremum is taken over all $\tpl{r} \in \n^n$. One can check that $\norm{r_i t} = \f{2} - \norm{r_it-\f{2}},$ and hence $$K_n'=2 \ds\sup_{r \in \n^n} \, \min_{t \in [0,1]} \, \max_{1 \le i \le n} \tx{$\norm{r_i t - \f{2}}$}=2 \ds\sup_{r \in \n^n} \, \min_{t \in [0,1]} \, \max_{1 \le i \le n} (\f{2} - \norm{r_it}),$$ which becomes $$1 +2\ds\sup_{r \in \n^n} \, \min_{t \in [0,1]} \, \max_{1 \le i \le n} (-\norm{r_it}),$$ which is $$1-2\ds\inf_{r \in \n^n} \, \max_{t \in [0,1]} \, \min_{1 \le i \le n} \norm{r_it}.$$ But this is $$1-2\ds\inf_{r \, \in \n^n} \del_n(r)=1 - 2\kappa(n),$$ and the proof is complete. \end{proof} \begin{corollary} Assuming $K_i' < K_j'$ when $i < j$, $K_n = 1-2\kappa(n)$ for all $n \in \n$. \end{corollary} \begin{proof} Apply Proposition 15. \end{proof} \begin{corollary} For any $n \in \n$, $\fr{n-1}{n+1} \le K_n' \le \fr{n-1}{n}$. \end{corollary} \begin{proof} By Proposition 11, $$\f{2n} \le \kappa(n) \le \f{n+1}.$$ The rest follows by setting $K_n' = 1-2\kappa(n)$. \end{proof} Assuming the conjecture below, the above results ensure that finding $\kappa(n)$ and finding $K_n$ are equivalent problems. Since the LRC is equivalent to $\kappa(n) = 1/(n+1)$ for each $n$, Conjecture 14, and hence the LRC, is also equivalent to \begin{conj} For every $n \in \n, K_n' = \fr{n-1}{n+1}$. \end{conj} As Proposition 5 shows the LRC with rational speeds implies the LRC where the runners speeds are arbitrary. Likewise by Proposition 15, if $\D(C_n, \fr{n-1}{n+1})$ obstructs all rational views for each $n$, then it obstructs all views. That is, if $K_n' = \fr{n-1}{n+1}$ for each $n$, then $K_n=K_n'$. It is interesting to notice that by Corollary 17, $\D(C_n, \fr{n-1}{n})$ necessarily obstructs every view in $S_n$. From the lower bound it follows that $K_3 \ge 1/2 > 1/3$ so that the cubes represented in Figure 2 do not obstruct all views. We proved the LRC with three runners in the previous section. Here we give an alternate proof using view-obstruction. \begin{theorem} $K_2 = 1/3$. \end{theorem} \begin{proof} We refer to the following picture in the proof: \includegraphics[scale=.55]{ViewObstr5} The squares are centered at half-integers and have side length of $1/3$, as in figure (2), so the squares represent the set $\D_2=\D(C_2, 1/3)$. The two top rays have slopes of 2 and $1/2$ respectively while the bottom ray has a slope of $1/5$. By symmetry, $\D_2$ obstructs all views with slope $\theta > 2$ if it obstructs all slopes with $\theta \in (0, 1/2)$. The rays $y=\theta x$ with $1/2 \le \theta \le 2$ intersect the square with center $(\f{2}, \f{2})$. By observation of the bottom ray, every ray with slope $\theta \in (o, \f{2})$ are obstructed by a square. This is evident if the slope is between that of the bottom line and the middle line of slope $1/2$, i.e., $\theta \in [\f{5}, \f{2}]$. If $0 < \theta < \f{5}$ then the ray $y = \theta x$ has no hope of passing through the gap of two consecutive squares with centers $(\f{2} + n, \f{2}), \, (\f{2} +n+1, \f{2})$ as the minimal slope of a line needed to pass unobstructed between such consecutive squares is $1/2$. Thus the set $\D_2$ obstructs all views. From the above figure it is also evident that $1/3$ is the smallest number $\alpha$ such that $\D(C_2, \alpha)$ obstructs all views: for the top lines with slopes 2 and $\f{2}$ only pass through the corners of squares in $\D_2$, as indicated by the points along those lines. Hence $K_2 = \f{3}$ and this proves the LRC for three runners. \end{proof} The following figures give heuristic examples demonstrating that $K_3 = 1/2$. In all figures, the dot in the center is the origin, so that one is placed at the origin and able to literally ``see" which views are obstructed by the cubes. \includegraphics[scale=.3]{Figure4} \includegraphics[scale=.3]{Figure5} \includegraphics[scale=.3]{Figure6} Figures 4, 5, and 6 above demonstrate the set $\D(C_3, 1/3)$. Figure 4 shows the cubes of that set contained in the region $[0, 10]^3$ and is the same set as Figure 2 but looking out at the origin. Figure 5 shows the cubes inside the region $[0,15]^3$, and Figure 6 shows the cubes contained in $[0, 25]^3$. We know that $\D(C_3, 1/3)$ does not obstruct all views. Below is the corresponding figures with $\D_3=\D(C_3, 1/2)$. Most views are obstructed by the cubes of $\D_3$ contained in the region $[0,10]^3$ as seen in Figure 7. It seems very reasonable that $\D_3$ does obstruct all views. Since the LRC is known to hold for four runners, this is indeed the case. \includegraphics[scale=.3]{Figure7}\includegraphics[scale=.3]{Figure8} \includegraphics[scale=.3]{Figure9} \begin{example} Recall Example 1 in the second section. Every orbit of $x=(x_1, x_2) \in \I^2$, has $\ld_2(x) \ge \k(2) = \f{3}$ by definition of $\k$ and Theorem 10. Thus $O(x)$ intersects every square centered at $(\f{2}, \f{2})$ with length greater than $1 - 2\ld_2(x) \le 1 - 2\k(2) =1-\fr{2}{3}= \f{3}$. Since $x$ was arbitrary with nonzero coordinates, it follows that for all $\e > 0$, every orbit of every irrational pair intersects the square with length $\f{3} + \e$ centered at $(\f{2}, \f{2})$. We denote such a square as $G_\e:=G(\f{3} + \e)$. \end{example} \begin{example} Let $s=(s_1, s_2) \in \Q^2$. Then for every $\e > 0,$ the line $y= ts$, $t \in \R$ intersects $G_\e$: notice that $c=\sup_{t \in \R} \min \set{\norm{t s_1}, \norm{t s_2}} \ge \inf_{s \in \n^2} \del_2(s) = \f{3}$ by definition of $\del_2$, Theorem 10, and since the line $y = ts$ has rational slope. We show $L=\set{ts}_{t \in \R} \con \T^2$ intersects $G_\e$: By definition of $c$, there are points $(x, y) \in L$ such that $\norm{x} > c -\fr{\e}{2}$ and $\norm{y} > c - \fr{\e}{2}$. Consider the square $G'=G(1-2c + \e)$. If $(x, y)$ did not intersect $G'$ then $\norm{x}$ or $\norm{y}$ would be less than than the distance from a corner of $G'$ to the origin. That is, $\norm{x} < \f{2} - \fr{1-2c+\e}{2} = c - \fr{\e}{2}$ or $\norm{y} < c - \fr{\e}{2}$, which contradicts the choice of that pair. Since $c \ge \f{3}$, we have that $1-2c + \e \le 1-\fr{2}{3} + \e = \f{3} + \e$, so that the square $G_\e$ contains the square $G'$ and hence $L$ intersects $G_\e$. Thus, in the two-torus, every line with rational direction intersects $G_\e$ for any $\e > 0$. \end{example} Examples 2 and 3 perfectly illustrates the relationship between the diophantine problem and the view-obstruction problem for two dimensions, and the results they give can be extended and when applying the results from the previous two sections. Recall Figure 1 in the second section, showing the set of squares with length $1/3$ centered at the half-integers. We denoted this set as $\D_2=\D(C_2, 1/3)$. Reducing the entire plane modulo the integer lattice equates each square in $\D_2$ to the square $G(1/3)$. We know that each ray $r = tx$ with nonzero slope, $t > 0$ is obstructed by $\D_2$ by Theorem 20 in section 3. Thus the reduction of every ray in the two-torus intersects $G(1/3)$. Hence, in the two-torus, every line with rational direction intersects $G(1/3)$, this is stronger than what is shown in Example 3. \section{Billiard paths in square Tables} As one may have noticed, results in the previous sections used various facts about orbits of certain paths in the $n$-torus. Specifically, Theorem 7 in the second section, Proposition 15 and Proposition 5 in the third section all relied on Lemma 6 which used the fact that the orbit $O(x)$ is dense in $\T^n$ when the $x_i$ are rationally independent. We used these results to relate the diophantine and View-obstruction problems. In this section we mainly consider the results in the previous section on view-obstruction and explore its analog to billiard paths in $n$-dimensional cubes. We start by tiling the first quadrant with unit squares. Below, all rays pass through the origin, and all billiard paths start their initial trajectory passing through the origin and has, unless otherwise specified, the unit square as a billiard table. \begin{proposition} Each ray in the first quadrant corresponds with a path in a square billiard. Likewise each billiard path in the square corresponds to a ray in the first quadrant. \end{proposition} Where a ray ``corresponds" to a billiard path (and vice versa) if the billiard path in the square can be ``unfolded" into a strait line. The following figures demonstrate such an unfolding of the ray $y = \fr{x}{2}$. On the left, Figure 10 shows the ray for $0 \le x \le 4$. In Figure 11 on the right, the corresponding billiard path is shown with each path segment marked with its corresponding ray segment. \includegraphics[scale = .6]{Tilling0} \,\,\,\, \includegraphics[scale = .6]{Tillings1} \begin{proof}[Proof of Proposition] \emph{(Each billiard path corresponds to a ray)}: To prove the result, we need to associate each billiard path to a ray in the first quadrant. We are assuming that each billiard path has an initial trajectory at the origin. If this first trajectory is vertical or horizontal there is nothing to prove, as each respectively corresponds to a vertical or horizontal ray. We call such billiard paths $trivial$. Assume then that the initial angle $\phi$ of the trajectory has radian measure $0 < \phi < \pi/2$. By symmetry it suffices to assume that $0< \phi < \pi/4$, for every such trajectory with $\phi > \pi/4$ can be adequately reflected to resemble a path with $\phi < \pi/4$. Every billiard path is entirely characterized by this angle $\phi$, as it uniquely determines the first incident angle of reflection. Instead of reflecting the path inside the billiard table, we reflect the billiard table across the side where this first incident reflects. This process is described in Figure 11 below, where $\theta$ is this first incident angle. \includegraphics[scale=.7]{Tillings2V} The subsequent path reflections in the first table $T$ will correspond with their reflection in the new table $T'$. Thus the entire billiard path, (excluding the first segment $P$), corresponds to a mirror image in $T'$. We call this image $I$. The segment $L$ has as its image $L'$. Adjoining $L'$ to $P$ extends the ray segment of angle $\phi$ that had begun with $P$. Repeating this process on the billiards image $I$ in $T'$ proves the demonstration, with one note: If the first segment of the ray does not reflect across a side, but instead hits a corner, then reflect the table about the line of slope $-1$ at this corner and take the images in this new square table to correspond with this reflection. We can thus extend each billiard path to a ray which has the same slope as the initial trajectory of the billiard path. \\ \emph{(Each ray corresponds to a billiard path)}: If a ray $r$ has slope $\phi > 0$ from the horizontal axis, correspond $r$ with the billiard path that has $\phi$ as its initial trajectory angle. Thus the produced correspondence between the rays and billiard paths is a bijection. \end{proof} Given a square billiard table $T$ with unit side lengths, we let $G(\alpha)$ be the square with side length $\alpha$ with the same center as $T$, i.e., $G(\alpha)$ is the scaling of $T$ by $\alpha$. A natural problem is to find the smallest $\alpha$ such that $G(\alpha)$ intersects every nontrivial billiard path. This problem is closely tied to the two-dimensional case of the view-obstruction problem discussed in the last section. Using the following lemma we show that these problems are equivalent. \begin{lemma} In a square billiard table, $G(\alpha)$ is invariant under the reflections given in Proposition 21. \end{lemma} \begin{proof} In a square billiard $T$, let $T'$ be its reflection about a side. The statement in the lemma means that the image of $G=G(\alpha)$ in $T'$ corresponds to a square $G'$ with side length alpha with the same center as $T'$. This is immediate from the symmetry of $G$ in $T$ and the fact that the center of $T'$ is the image of the center of $T$, in the reflection. This is shown in Figure 13 below. \\ \includegraphics[scale = .6]{Tillings3} \end{proof} \begin{corollary} Let $T$ be a billiard table and $T'$ a reflection as given in Proposition 20. If $S$ is a billiard path in $T$ that intersects $G=G(\alpha)$, then the image of $S$ in $T'$ intersects $G'$. \end{corollary} \begin{proof} If this were not so then there would be a point inside (resp. on) $G$ that was not reflected inside (on) $G'$, contradicting the lemma. \end{proof} Since $T$ is also a reflection of $T'$. A billiard path intersects $G$ if and only if its image in $T'$ intersects $G'$. By inductively applying the above lemma and corollary, the respective results holds for any finite number of reflections of a billiard. \begin{theorem} $\inf\set{ \alpha : G(\alpha) \emph{ obstructs every nontrivial billiard path in $T$}} = \f{3}.$ \end{theorem} \begin{proof} Let $S$ be a nontrivial billiard path in $T$ with initial an trajectory of $\phi >0$. Let $r$ be the ray corresponding the $S$ according to Proposition 21. Assume for contradiction that $S$ does not intersect $G(1/3)$. Pick any segment, $J$, of $r$ inside a unit square, $T'$, centered at some half-integer $x$. The construction in Proposition 21 shows that this segment corresponds to a segment of the billiard path $S$ via multiple reflections. By Lemma 21, $G(1/3)$ corresponds by these reflections to a square $G'(1/3)$ that has center $x$, and by Corollary 23, the segment $J$ does not intersect $G'(1/3)$. Thus no segment of the ray $r$ intersects a square with length $1/3$ centered at a half-integer. But this contradicts Theorem 20 in the previous section, for then the set $\D(C_2, \f{3})$ would not obstruct the ray $r$ which has positive slope since $\phi > 0$. Thus, every nontrivial billiard path intersects $G(\f{3})$. The billiard path show in figure 11 only intersects the boundary of $G(\f{3})$. This is shown in Figure 14 below. \includegraphics[scale=.5]{Tillings4} Thus $1/3$ is the infimum of the $\alpha$ such that $G(\alpha)$ intersects every nontrivial billiard path. \end{proof} \subsection{Billiard Paths in triangular tables} In this subsection we investigate the same questions covered above but considering billiard paths in a regular triangle, $Q$, of unit side length. As in the previous, all billiard paths have their initial trajectory from the origin, which in our case is the lower left corner of our triangle $Q$. Below is an example of such a billiard with initial trajectory angle of $\phi = \fr{\pi}{4}$. \includegraphics[scale=.3]{Triangles1} \includegraphics[scale=.3]{Triangles2} \includegraphics[scale=.3]{Triangles3} \includegraphics[scale=.3]{Triangles4} Figure 16 on the upper right displays ten reflections of the path inside $Q$, with the eleventh table strike taking place at the indicating dot. Figure 17 in the lower left displays 50 reflections with the $51^{\emph{st}}$ strike at the dot. Figure 18 shows the billiard path at five-hundred strikes (499 reflections). For $0 < \alpha < 1$, define $H(\alpha)$ to be the scaling of $Q$ by $\alpha$. Thus $H(\alpha)$ and $Q$ have the same triangular incenter. Figure 19 below displays $H(\alpha)$, for $\alpha = 1/4$, contained in $Q$. \includegraphics[scale=.8]{Triangles6} We solve the analog of Theorem 24 for triangular billiards. That is, we find the smallest number $\alpha$ such that $H(\alpha)$ intersects every nontrivial billiard path in $Q$. Actually, $a \,\, priori$ we cannot know there is a smallest number, but must find $$\beta = \inf\set{\alpha : H(\alpha) \text{ intersects every nontrivial billiard path in } Q},$$ and must then check to see if $\beta$ is indeed in the set. Because the equilateral triangle $Q$ generates a tilling of the euclidean plane with reflections across a side; and since the image of $H(\alpha)$ in the reflected triangle $Q'$ is $H'(\alpha)$, i.e., a scaling of $Q'$ by $\alpha$, we have the analogous results of Proposition 21, Lemma 22, Corollary 23 corresponding to these triangular billiards. That is, instead of rays in the first quadrant, we consider rays the euclidean plane with a horizontal angle between 0 and $\fr{\pi}{3}$. \begin{proposition} Each ray with horizontal angle between 0 and $\fr{\pi}{3}$ corresponds with a billiard path in $Q$. Likewise each billiard path in $Q$ corresponds to a ray in horizontal angle between 0 and $\fr{\pi}{3}$. \end{proposition} \begin{lemma} In an equilateral triangular billiard table, $H(\alpha)$ is invariant under the reflections given in Proposition 25. \end{lemma} \begin{corollary} Let $Q$ be an equilateral triangular billiard table and $Q'$ a reflection as in Proposition 24. Then if a billiard path $S$ in $Q$ intersects $H(\alpha)$, then the image of $S$ in $Q'$ intersects $H'(\alpha)$. \end{corollary} As is the case with square billiard tables, $Q$ is also a reflection of $Q'$. Thus a billiard path intersects $H(\alpha)$ if and only if its image in $Q'$ intersects $H'(\alpha)$. Inductively applying the above lemma or corollary, the respective results hold for any finite number of reflections, i.e., unfoldings, of a triangular billiard. We are almost ready to prove the corresponding result of Theorem 24 for triangular billiards. However, we lack the analog of Theorem 20, which says that $\D(C_2, \f{3})$ obstructs all views. Effectively this result says that if one tiles the plane with unit cubes, then scaling each square by $\f{3}$ will produce a set that will block all rays (with positive slope). This was our the main result in view-obstruction that was equivalent to the LRC with three runners. We state the triangular billiards' analog to this as a lemma which is proved after the theorem. \begin{lemma} Let $Q$ be an equilateral triangle with unit length, and let unfoldings of $Q$ tile the region between the rays with angles 0 and $\pi/3$. So a sequence of unique triangles $\set{Q_n}_1^\infty$ tiles this region. Then $\set{H_n(\f{4})}_1^\infty$ obstructs all rays in the region, and $\f{4}$ is the least such number to do so. \end{lemma} \begin{theorem} $\inf\set{\alpha : H(\alpha) \text{ intersects every nontrivial billiard path in } Q} = \f{4}$ \end{theorem} \begin{proof} Let $Q$ be an equilateral triangle with unit length and $\set{Q_n}_1^\infty$ unfoldings of $Q$ that tile the region described in Lemma 28. Let $S$ be a nontrivial billiard path in $Q$ with initial an trajectory of $0 < \phi < \pi/3$, and let $r$ be the ray corresponding to $S$ according to Proposition 25. Assume for contradiction that $S$ does not intersect $H(1/4)$. Pick any segment, $J$, of $r$ inside some $Q_n$ with incenter $x$. By Proposition 25 the segment $J$ corresponds to a segment of the billiard path $S$ via multiple unfoldings. By Lemma 26, $H(1/4)$ corresponds by these unfoldings to a regular triangle $H_n(1/4)$ with incenter $x$, and by Corollary 26, the segment $J$ does not intersect $H_n(1/4)$. Thus no segment of the ray $r$ intersects any $H_n(\f{4})$. This immediately contradicts Lemma 28. Thus, every nontrivial billiard path intersects $H(\f{4})$. By Lemma 28, their is a ray that cannot intersect the interiors of the $H_n(\f{4})$. Thus the billiard path corresponding to this ray only intersects the corners of $H(\f{4})$, hence 1/4 is the infimum of the set in the theorem statement. \end{proof} We now prove the lemma. \begin{proof}[Proof of Lemma 28] We refer to the following figure in the proof: \\ \includegraphics[scale = .5]{Triangles5} The smaller triangles represent a portion of the set $W=\set{H_n(\f{4})}_1^\infty$, in a portion of the tiling generated by $Q$. In order to prove that every ray with slope $0< \phi < \fr{\pi}{3}$ is obstructed by the collection $W$, it suffices by symmetry to prove the result for such rays with $0 < \phi < \fr{\pi}{6}.$ The upper solid ray has the form $r_1 = \fr{\sqrt{3}\,x}{5}$ for $x > 0$. The ray $r_1$ intersects only the edges of sets in $W$. Call the lower solid ray $r_2$ and the dashed line $r_3$. It is evident that all rays between $r_1$ and $r_2$ intersect $W$, as do all rays between $r_2$ and $r_3$. The gaps between consecutive triangles grow tighter for rays below $r_3$, so that all rays with positive slope under $r_3$ intersect sets in $W$. This proves the lemma. \end{proof} \section{Invisible runners and finite fields} The results of this section originated in 2008 in a paper by the Polish computer scientists Sebastian Czerwinski and Jaroslaw Grytczuk [1]. Recall for $s = (s_1,\dots,s_k) \in \n^k$, we define $\del_k(s) = \ds\sup_{x \in \R} \ds\min_{1 \le i \le k} \norm{x s_i}$ as described in the first section. Proposition 4 says that the LRC is equivalent to $\del_k(s) \ge 1/(k+1)$ for each $s$ and each natural number $k$. We alter this notation slightly, letting $S = \set{s_1, \dots, s_k} \con \n$ be a set of $k$ positive integers, setting $$\del(S) = \ds\sup_{x \in \R} \ds \min_{1 \le i \le k} \norm{x s_i}.$$ It is readily seen that the LRC is equivalent to $\del(S) \ge 1/(k+1)$ for each $k$ element subset of $\n$. We also define $\fl{x}$ to be the usual floor of $x$, and $\set{x}$ to be the fractional part of $x$. We prove two results in this section which are important for several reasons. First, the techniques in this section are more algebraic, leading to an algebraic conjecture that is equivalent to the LRC. Second, if one has a set of runners, these techniques can be used to give a finite algorithm for computing the time a given runner is ``loneliest". This answers a natural question as to whether such an algorithm exists. Let $S = \set{s_1, \dots, s_k} \con \n$, and let $p$ be a prime that does not divide any $s_i$. Thus, the elements of $S$ modulo $p$ is a subset of $\z_p^*$, which we define as the set of non-zero elements of the field $\z_p=\set{0,1,\dots,p-1}$. We arrange the elements of $\z_p$ on the unit circle in a usual fashion, i.e., as the $p^{\tx{th}}$ roots of unity. For an integer $n$, the image of $n$ in $\z_p$ is denoted by $\cl{n}$. \begin{lemma} Let $B = \pm\set{1,2,\dots,m} \con \z_p$ and $S=\set{s_1,\dots, s_k} \con \n$. Suppose there is an $x \in \z_p^*$ such that $\cl{xs_i}$ is not in $B$ for any $i$. Then $\del(S) \ge (m+1)/p$. \end{lemma} \begin{proof} Since $\cl{xs_i} \neq 0$, if $\norm{\cl{xs_i}/p} = \norm{xs_i/p} < \fr{m+1}{p}$ then we must have $\cl{xs_i}/p \in \pm \set{1/p, 2/ p ,\dots, m/p}$ so that $\cl{xs_i} \in \pm B$. This contradicts the hypotheses. Hence $\del(S) = \ds\sup_{x \in \R} \ds\min_{1\le i \le k} \norm{xs_i} \ge (m+1)/p$. If the above holds for $m = \fl{p/(k+1)}$ then $\del(S) \ge \ds\min_{1 \le i \le k} \norm{ts_i} \ge 1/(k+1)$ where $t = x/p$. \end{proof} \begin{proposition} Let $S = \set{s_1, \dots,s_k}$ be a set of positive integers. Let $\e >0$ and let $p>\fr{k}{\e} + 1$ be a prime number that does not divide any element in $S$. Then for every $d \in \set{0, 1,\dots,k}$ and $B \con \z_p^*$ with $|B| \le p(d+1)/(k+\e)$, there is an $x \in \z_p^*$ such that $|B \cap xS| \le d$. \end{proposition} \begin{proof} Consider a rectangular $k \times (p-1)$ matrix $A = (a_{ij})$ defined by $a_{ij} = \cl{j s_i}$. We need to show that there is a column in $A$ with at most $d$ entries belonging to $B$: Let $T$ be the total number of positions in $A$ occupied by elements of $B$. Since $\z_p$ is a field and each $s_i$ is nonzero, every row of $A$ consists of the whole of $\z_p^*$. Thus $T=k|B|$, and the hypothesis on $|B|$ implies that $T \le kp(d+1)/(k+\e)$. If every column in $A$ had at least $d+1$ entries in $B$, then $T \ge (p-1)(d+1)$. Hence, $$k\fr{p(d+1)}{k+\e} \ge (p-1)(d+1),$$ and so $$\fr{k}{k+\e} \ge \fr{(p-1)}{p}.$$ But by assumption $p > \fr{k}{\e} +1$ so that $(p-1) > \fr{k}{\e}$ so $\fr{(p-1)}{p} > \fr{k}{p\e}$. Also $p > \fr{k}{\e}+1$ gives $p\e > k + \e$, so that $\fr{(p-1)}{p} > \fr{k}{p\e} \ge \fr{k}{k+\e}$. This is a contradiction. Hence at least one column has no more than $d$ entries of elements in $B$. \end{proof} We use the above results to prove the first important theorem of this section. \begin{theorem} Let $k$ and $d$ be arbitrary integers, $0 \le d < k.$ Then every set $S$ of k positive integers contains a subset $D$ of size $k-d$ such that $\del(D) \ge (d+1)/(2k)$. \end{theorem} In the special case of $d = 1$, this says that if we are given a set of integers of size $k$, there is a subset of size $k-1$ with $\del(D) \ge (d+1)/(2k) = 1/k$. In light of the first section, this says if we are given a set of $k+1$ runners, removing a certain runner (or making him ``invisible") will give us $k$ runners where the runner with speed 0 becomes lonely in the sense of $k+1$ runners. If $k\ge 6$ (so that $\fr{3}{2k} \ge \f{k-2}$), then the case $d=2$ says that given $k+2$ runners, removing 2 runners will give a remaining set of $k$ runners $D$ with $\del(D) \ge 3/(2k) \ge 1/(k-2)$. That is, every set of $k+2$ runners contains a set of $k$ runners where the runner with constant 0 speed becomes lonely $\emf{regardless of the size of k}$, provided $k \ge 6$. \begin{proof}[Proof of theorem 32] Let $0 \le d < k$ be fixed and let $S$ be any set of $k$ positive integers. Let $\e_n > 0$ with $\e_n \to 0$ as $n \to \infty$. For every $n$, let $p_n$ be a prime such that $p_n > \fr{k}{\e_n} + 1$. Set $m_n = \fl{p_n(d+1)/(2(k+\e_n))}$ and $B_n = \pm\set{1,2,\dots,m_n}$. By Proposition 31 there is an $x_n \in \z_{p_n}^*$ with $|B_n \cap x_nS| \le d$. Let $D_n = \set{s \in S: x_ns \notin B_n}$. Now since $|B_n \cap x_nS| \le d$, we have that $|D_n| \ge k - d$ for each $n \ge 1$. By the lemma, it follows that $$\del(D_n) \ge \fr{m_n + 1}{p_n} \ge \fr{p_n(d+1)/(2(k+ \e_n)}{p_n}=\fr{d+1}{2(k+ \e_n)}.$$ Since $S$ is a finite set, there are infinitely many $n$ for which $D_n \con S$ is the same subset. Call this subset $D$. Since $\e_n \to 0$. We have that $$\del(D) \ge \ds\lim_{n \to \infty} \fr{d+1}{2(k + \e_n)} = \fr{d+1}{2k}.$$ This proves the theorem. \end{proof} \begin{proposition} If $S = \set{s_1, s_2,\dots,s_k} \con \n$, then $\del(S)$ is attained for $x_0 = a/(s_i + s_j)$ for some $i \neq j$ and some positive integer $a < s_i + s_j + 1$. \end{proposition} \begin{proof} Define $f_S(x) = \ds\min_{1\le i \le k} \norm{x s_i}$ for $x \in \T$. By continuity of $f_S$ and the fact that $\T$ is compact, there is an $x_0 \in \T$ where $f_S$ attains its maximum. Thus $\del(S)=\ds\sup_{x \in [0,1]} \ds\min_{1 \le i \le k} \norm{x s_i}= f_S(x_0)$, and let $s_i \in S$ be an integer for which $\del(S) = \norm{x_0 s_i}$. We show that there must be another $j \neq i$ such that $\norm{x_0s_i} = \norm{x_0 s_j}$. If there was not any such $j$, then $\ds\min_{1\le i\le k} \norm{x_0s_i}=\norm{x_0s_i} < \norm{x_0s_j}$ for $j \neq i$. Thus choosing $\e > 0$ small enough we will have $\ds\min_{j \neq i} \norm{(x_0 \pm \e)s_j} > \norm{(x_0 \pm \e)s_i}$ by continuity. But either $\norm{(x_0+\e)s_i} > \norm{x_0s_i}$, or $\norm{(x_0-\e)s_i}>\norm{x_0s_i}$, so that $f_S(x_0)$ is not the really the maximum of $f_S$. Similarly we can show there is such a $j$ with $s_j \not s_i$. Since $\norm{x_0s_i} = \norm{x_0s_j}$, we must have $\set{x_0s_i} = 1 - \set{x_0s_j}$. Hence $$x_0s_i+x_0s_j = \fl{x_0s_i} + \fl{x_0s_j} + \set{x_0s_i} + \set{x_0s_j} = \fl{x_0s_i} + \fl{x_0s_j}+1 := a,$$ which results in $x_0 = a/(s_i+s_j)$ satisfying the required properties. \end{proof} Applying Proposition 33, we have the following equivalence to the LRC, \begin{conj} For every set $S \con \n$ of size $k$, there is a natural number $n$, and $x \in \z_n$, such that $xS \cap B = \emptyset$ for $B = \pm\set{0,1,\dots, m}$ where $m = \ceil{n/(k+1)}-1$. \end{conj} \begin{proof}[Proof of Equivalence] If the LRC is true then for any $k$ element set $S \con \n$ we have $\del(S) \ge 1/(k+1)$, so by Proposition 33, the above conjecture will hold with $n = s_i + s_j$. This follows since, by the proposition, $\norm{\fr{a}{n}s_l} \ge \del(S) \ge 1/(k+1)$ so that the numbers $\cl{as_l}=as_l\!\!\mod(n)$ fall outside of the set $B= \pm \set{0,1,\dots,m}$ with $m= \ceil{n/(k+1)}-1$. Otherwise if $\cl{as_l} \in B$ then $\norm{as_l/n}=\norm{\cl{as_l}/n} = \norm{q/n}$ for some $q \in B$. But all such $q \in \pm \set{0,1,\dots, m}$ have $q/n \in \pm \set{0, 1/n,\dots, m/n}$ and since $m/n \le (\ceil{\fr{n}{k+1}}-1)/n < \fr{n}{k+1}/n = 1/(k+1)$, we have $\norm{q/n} \le \norm{\pm m/n} < 1/(k+1)$ which would be a contradiction. This same argument shows that if there is an $n$ and $x \in \z_n$ where $xS \cap B = \emptyset$, we have $\norm{xs_i} \ge 1/(k+1)$ for all $1 \le i \le k$, which implies $\del(S) \ge 1/(k+1)$. If this happens for all $S \con \n$ of arbitrary size $k$, then the LRC holds by Proposition 4. \end{proof} Given a set of runners with positive integer speeds $\set{s_i}_1^k$, Proposition 33 shows that we can compute, in finite steps, the time a certain runner with speed $s_i$ becomes lonely. Let $r_j = s_j - s_i$ for all $j \neq i$. Set $M = \ds\max_{l \neq k} (r_j + r_k)$. Then compute $a_{(j, l)} = j+l$ for all $a_{(j,l)} \le M+1$, which is a finite computation. By Proposition 33, one of the numbers in the set $$\set{\fr{a_{(j,l)}}{(r_q + r_m)}}_{j,l,q,m =1}^{k-1}$$ is a time when $s_i$ becomes lonely. Or, more precisely, since we cannot assume $s_i$ is ever lonely, the set above contains a time when $s_i$ is the furthest distance possible from every other runner.
{ "attr-fineweb-edu": 2.722656, "attr-cc_en_topic": 0, "domain": "arxiv" }
BkiUdLLxaKgTyGYK9L9r
\section{Introduction}\label{sec:intro} Sports analytics have received an increasing interest in statistics community and they continue to offer new challenges as a result of ever-increasing data sources. Conventional statistical research for sports analytics was mainly concerned with forecasting the game results, such as predicting the number of goals scored in soccer matches \citep{dixon1997modelling, karlis2003analysis,baio2010bayesian}, and the basketball game outcomes \citep{carlin1996improved,caudill2003predicting,cattelan2013dynamic}. More recently, fast development in player tracking technologies has greatly facilitated the data collection \citep{albert2017handbook}, and in turn substantially expanded the role of statistics in sports analytics, including granular evaluation of player/team performance \citep{cervone2014pointwise,franks2015characterizing, cervone2016multiresolution,wu2018modeling}, and in-game strategy evaluation \citep{fernandez2018wide,sandholtz2019measuring}. In professional basketball research, shooting pattern remains to be a fundamental metric for evaluating players' performance and has aroused great interest among statisticians. Shot charts, as graphical representations of players' shot locations, provide an excellent tool to summarize and visualize shooting patterns for players. To account for the spatial correlation in shot chart data, several spatial statistical models have been studied in the literature. For example, \citet{reich2006spatial} proposed a multinomial logit regression model with spatially varying coefficients to quantify the effects of several hand-crafted features (e.g., home or away games, average number of blocks made by the defensive player, the presence of certain other teammates) on the probability of making a shot over different basketball court regions. More recently, spatial point process \citep{miller2014factorized, jiao2019bayesian} has emerged as a promising direction for shot chart data analysis in recognition of the randomness nature of shot locations. In those works, it is common to first summarize the shot charts as intensity matrices of the underlying point process and then conduct a regression analysis of pre-specified artificial baseline shooting patterns on game outcomes. A common finding in these studies is that the shooting behaviors are {\it highly heterogeneous} among different players, which calls for a clustering analysis towards a deeper understanding of the player-level heterogeneity and the improvement of existing statistical models by incorporating the latent clustering structure. To date, most existing clustering approaches for shot chart data analysis are distance-based (e.g., $K$-means and hierarchical clustering), and hence lacking a probabilistic interpretation. A model-based clustering approach was proposed in \citet{hu2020bayesian} based on calculating the similarity matrix between intensity matrices of players' shot charts. However, that method still lacks an intuitive model interpretation for the clustering results since the clustering is performed based on the similarity matrix, rather than the intensity matrices. The main goal of this paper is to fill this gap by introducing a novel Bayesian model-based clustering approach for learning the basketball players' heterogeneity via their shot chart data analysis. The key novelty of our method starts from treating each shot chart as a {\it matrix}, i.e., the basketball court is divided into a few rectangle regions and the number of shots (or the intensity of the underlying spatial point process) over those regions are represented as elements in the corresponding matrix. The immediate benefit of treating each sampling unit (shot chart) as a matrix is that it automatically takes account for the spatial structure information in the analysis. Moreover, it allows us to conveniently extend the classical Gaussian mixture model (for vectors) for clustering matrix-valued shot chart data purpose. Gaussian mixture models (and mixture models in general) have been widely used in many applications thanks to their convenient probabilistic interpretation and elegant computational solutions such as the expectation maximization (EM). However, mixture models for matrix-valued data have received little attention until recently. Most existing works \citep{viroli2011finite,thompson2020classification,gao2018regularized} are based on the EM framework, which requires pre-specifying the number of clusters while the inference cannot be easily conducted for clustering outputs over different cluster numbers. A Bayesian approach was proposed in \citet{viroli2011model} by imposing a prior on the number of clusters and drawing posterior samples with a birth and death Markov chain Monte-Carlo algorithm \citep[BDMCMC;][]{stephens2000bayesian}. However, that approach requires a careful parameter tuning process in BDMCMC and the computation does not scale up with the size of matrices. To date, it remains challenging to conduct efficient Bayesian inference for matrix-valued mixture models due to the large parameter space (e.g., the number of parameters is at least of order $O(p^2 + q^2)$ for $p \times q$ matrices) and the fact that the parameter space is not fixed as the number of clusters varies. Moreover, there is a lack of understanding of the theoretical properties for these mixture models. Our methodology development is directly motivated by solving the aforementioned challenges. In particular, we propose MFM-MxN, which is a novel nonparametric Bayesian mixture model of matrix normal distributions (MxN) under the mixture of finite mixtures framework \citep[MFM;][]{miller2018mixture}. The main idea is to represent each cluster (of shot charts) by a matrix normal distribution and allow the number of clusters to be random. We develop a Gibbs sampler that enables efficient full Bayesian inference on the number of clusters, mixture probabilities as well as other modeling parameters. We demonstrate its excellent numerical performance through simulations and an analysis of the NBA shot chart data. In addition, we establish a consistency result for the posterior estimates of the cluster number and the associated modeling parameters. Our proposed method is unique in the following aspects. First, the idea of representing each player's shot chart as an intensity matrix and formally introducing the concept of {\it matrix data analysis} for solving clustering problem is novel. In fact this idea and our proposed approach are widely applicable to general sports applications such as baseball and football studies, and provide a valuable alternative to the existing literature that mainly relies on spatial methods. Secondly, by adopting a full Bayesian framework, the clustering results yield useful probabilistic interpretation. Moreover, the developed posterior sampling scheme also renders efficient computation and convenient inference compared to all the other methods for modeling matrix-valued data in the literature. Thirdly, our theoretical result is among the first of its kind for mixture models of matrix-variate distributions. The posterior consistency result not only provides a theoretical justification for the excellent empirical performance (e.g., high probability of selecting the correct number of clusters), but also connects to the existing theoretical findings on mixture models in general (for vector-valued data). \section{Motivating Data Example}\label{sec:motivating_data} We consider a dataset consisting of locations of field goal attempts (FTA) from the offensive half court in all 82 games during 2017–2018 National Basketball Association (NBA) regular season. Following \citet{hu2020bayesian}, we focus on 191 players who have made more than 400 FTAs in that season. The rookie year players, such as Lonzo Ball and Jayson Tatum, are not included in our analysis. All the shooting locations are in a 47 ft (baseline to mid court line) by 50 ft (sideline to sideline) rectangle, which is the standard court size for NBA games. We select nine players (DeMar DeRozan, LeBron James, Giannis Antetokounmpo, Stephen Curry, Nick Young, Eric Gordon, Steven Adams, Clint Capela, DeAndre Jordan) and visualize their shot charts in Figure~\ref{fig:data_presnet}. From this figure, we observe a clear heterogeneity pattern, e.g., the three players in the first row have a more balanced spatial location pattern in their FTAs than those from the other six players; the three players in second row have more FTAs around 3-pt line; and the three players in last row have more FTAs near the basket. Those observations seem closely related to their positions and playing styles in the game. Our goal in this paper is to synthesize these empirical findings through a formal model-based clustering approach. \begin{figure}[htp] \centering \includegraphics[width=0.8\textwidth]{data_visual.pdf} \caption{Shot charts for selected NBA players} \label{fig:data_presnet} \end{figure} \section{Method}\label{sec:method} In this section, we first give a brief review of log Gaussian Cox process and matrix normal distribution, and then present our Bayesian matrix normal mixture model in Section \ref{sec:MxN_MFM}. \subsection{Log Gaussian Cox Process}\label{sec:lgcp} Consider a collection of 2D spatial locations $\bm{S} = \{\bm{s}_1, \bm{s}_2, \dots, \bm{s}_N\}$ over a study region $\mathcal{B} \subset \mathbb{R}^2$. It is common to represent the underlying spatial pattern by a spatial point process characterized by a quantity called intensity. Formally, within a region $\mathcal{B}$, the intensity at location $\bm{s}\in \mathcal{B}$ is defined as \begin{equation*} \lambda(\bm{s}) = \lim_{|d \bm{s}\rightarrow 0|}\left(\frac{\textrm{E}[N(d\bm{s})]}{|d \bm{s}|} \right), \end{equation*} where~$d\bm{s}$ is an infinitesimal region around~$\bm{s}$, $|d \bm{s}|$ represents its area, and $N(d \bm{s})$ denotes the number of events that happens over $d\bm{s}$. A spatial Poisson point process is a process such that the number of events/points in any subregion $A\subset \mathcal{B}$ follows a Poisson distribution with mean $\lambda(A) = \int_{A}\lambda(\bm{s})d \bm{s}$ for some function $\lambda(\cdot)$. Similarly with the Poisson distribution, a Poisson process $\mathcal{PP}(\lambda(\cdot))$ satisfies $\text{E}(N_{\bm{S}}(A))=\text{Var}(N_{\bm{S}}(A))=\lambda(A)$. A homogeneous Poisson process (HPP) assumes $\lambda(\bm{s})=\lambda$, i.e., the intensity is a constant over the entire region $\mathcal{B}$. A more realistic case is to let $\lambda(\bm{s})$ vary spatially, which leads to a nonhomogeneous Poisson process. Among the class of Poisson processes, log Gaussian Cox process (LGCP) has received a lot of attention in practice thanks to its flexibility and easy interpretability. A LGCP is a doubly-stochastic Poisson process with a correlated and spatially-varying intensity \citep{moller1998log}, defined as follows, \begin{equation} \label{eq:h_lgcp} \bm{S} \sim \mathcal{PP}(\lambda(\cdot)), ~~ \lambda(\cdot) =\exp(Z(\cdot)), ~~ Z(\cdot) \sim \mathcal{GP}(0,k(\cdot,\cdot)), \end{equation} where $Z(\cdot)$ is a zero-mean Gaussian process with covariance kernel $k(\cdot,\cdot)$. From \eqref{eq:h_lgcp}, the LGCP can be viewed as an exponentiated Gaussian process, such as a Gaussian random field \citep[GRF;][]{rasmussen2003gaussian}, which assumes that the log intensities at different spatial locations are normally distributed, and spatially correlated. To relate to our basketball shot chart data discussed in Section \ref{sec:motivating_data}, for all the players of interest, we can model their shot charts $\bm{S}^{(1)},\bm{S}^{(2)},\ldots,\bm{S}^{(n)}$ through a LGCP and estimate their associated intensity functions, denoted by $\lambda^{(1)}(\cdot),\lambda^{(2)}(\cdot),\ldots,\lambda^{(n)}(\cdot)$. This step can be conveniently implemented using integrated nested Laplace approximation \citep[INLA;][]{rue2009approximate}. See more details of implementation in \citet{cervone2016multiresolution} and \citet{hu2020bayesian}. For illustration, we plot the estimated intensity maps for three selected players in Figure~\ref{fig:estimated_intensity}. \begin{figure}[htp] \centering \includegraphics[width=0.75\textwidth]{3players_intensity.pdf} \caption{Estimated Intensity Maps for three selected players} \label{fig:estimated_intensity} \end{figure} \subsection{Matrix normal distribution}\label{sec:matrix_data} Next we provide a brief review of matrix normal distribution. Consider a $p \times q$ random matrix $Y$. We say $Y$ follows a matrix-variate normal distribution (MxN) with parameters $M$, $U$ and $V$, denoted by, $Y \sim \mathcal{N}_{p,q}(M, U, V)$, if it has the following probability density function \begin{equation} \label{eq:MxN} f(Y; M, U, V) = \frac{\exp( -\frac{1}{2} \text{tr}[V^{-1} (Y - M)^{\intercal} U^{-1} (Y - M)] )}{ (2\pi)^{pq/2} |V|^{p/2} |U|^{q/2} }, \end{equation} where matrix $M \in \mathcal{R}^{p \times q}$ is the mean of $Y$, and $|\cdot|$ denotes the matrix determinant. Here positive definite matrices $U \in \mathcal{R}^{p \times p}$ and $V \in \mathcal{R}^{q \times q}$ are row-wise covariance and column-wise covariance parameters, describing the covariances between, respectively, each of the $p$ rows and the $q$ columns of $Y$. It is clear from \eqref{eq:MxN} that the matrix normal distribution can be viewed as a multivariate normal distribution with a Kronecker product covariance structure \citep{gupta1999matrix}, that is, $Y \sim \mathcal{N}_{p,q}(M, U, V)$ is equivalent to $\text{vec}(Y) \sim \mathcal{N}_{pq}(\text{vec}(M), V \otimes U)$, where $\text{vec}(\cdot)$ is a vectorization operator that stacks all the columns in a matrix into a tall column vector. Since $V \otimes U = (\frac{1}{a} V) \otimes (a U)$ for any $a \neq 0$, we impose a constraint $\text{tr}(V) = q$ for model identifiability purpose. From the definition of the matrix normal distribution, we can see that it enjoys a parsimonious covariance structure. By representing a $(pq) \times (pq)$ covariance as the Kronecker product of a $p \times p$ and a $ q \times q$ covariance matrix, it effectively reduces the number of unknown parameters from $pq (pq +1)/2$ to $\{p(p+1) + q(q+1) \}/2$. Moreover, it provides a useful interpretation by projecting the spatial variability onto column and row directions, which can be viewed as a spatial version of the analysis of variance (ANOVA) model. For the basketball shot chart data, it is natural to divide the offensive half court equally into rectangle regions and represent the measurements (e.g., number of shots being made by a player) over those regions in a matrix form. Moreover, we can model the logarithm of the corresponding intensity function over the matrix by a matrix normal distribution. It is also worthy mentioning that there are other useful distributions defined for matrix-valued data, such as matrix-variate t distribution \citep{thompson2020classification}. Our proposed Bayesian mixture model of matrix normal distributions can be naturally extended to those distributions; and we focus on matrix normal distribution here for its convenience and easy interpretation. \subsection{Bayesian matrix normal mixture model}\label{sec:MxN_MFM} To account for the potential heterogeneity in the matrix-valued data, we propose a Bayesian mixture model where each mixture component is represented by a matrix normal distribution. More specifically, suppose that there are a total number of $K$ clusters, with weights $\pi_1,\ldots,\pi_K$, and each mixture follows a different matrix normal distribution. Then we adopt the mixture of finite mixtures (MFM) framework \citep{miller2018mixture} by assigning prior distributions on those unknown model parameters as follows, \begin{align} \label{eq:MFM_MxN} & K \sim p_{K}, \ \ p_{K} \ \ \text{is a p.m.f on} \ \mathbb{N}^{+} = \left\{1,2,\ldots \right\}, \nonumber \\ & \pi = (\pi_1,\ldots,\pi_k) \sim \text{Dir}_{k}(\gamma,\ldots,\gamma), \ \ \text{given} \ K = k, \nonumber \\ & P(Z_i = j) = \pi_j \ \ \text{for every} \ \ i=1,\ldots,n, ~\text{and}~ j=1,\ldots,k, \ \ \text{given} \ \pi, \nonumber \\ & M_1,\ldots, M_k \stackrel{i.i.d}{\sim} \mathcal{N}_{p,q}(M_{0} , \Sigma_{0}, \Omega_{0}) \ \ \text{given} \ K = k, \nonumber \\ & U_{1},\ldots,U_{k} \stackrel{i.i.d}{\sim} \mathcal{IW}_{p}(2 \alpha, (2\beta)^{-1} ) \ \ \text{given} \ K = k, \nonumber \\ & V_{1},\ldots,V_{k} \stackrel{i.i.d}{\sim} \mathcal{IW}_{q}(2\psi, (2\rho)^{-1} ) \ \ \text{given} \ K = k, \nonumber \\ & Y_{i} \sim \mathcal{N}_{p,q}(M_{Z_{i}}, U_{Z_{i}}, V_{Z_{i}}) \ \text{independently for} \ i=1,\ldots,n, \ \text{given} \ \bm{\Theta} \ \text{and} \ Z_{1}, \ldots, Z_{n}, \end{align} where $Z_1,\ldots,Z_n$ are cluster membership indicators that take values in $\{1,\ldots,K\}$ for each observation $Y_i$, $\bm{\Theta} = (\bm{\Theta}_{1}, \ldots, \bm{\Theta}_{K})$ and $\bm{\Theta}_{k} = (M_{k}, U_{k}, V_{k}), k = 1,\ldots,K$ are the collection of the parameters in the matrix normal distribution, and $\gamma$, $\psi, \rho$ are hyper-parameters. Here $\mathcal{IW}_{p}(\nu, S^{-1})$ means an inverse-Wishart distribution on $p \times p$ positive definite matrices with degree of freedom $\nu (\nu > p-1)$ and scale parameter $S$, the probability density of which is proportional to $|\Sigma|^{-(\nu+p+1)/2} \exp(-\text{tr}(S \Sigma^{-1}/2))$. In our data analysis, $Y_i$'s are the log intensities $\log(\hat{\lambda}^{(i)}(\cdot))$ of LGCPs obtained in Section \ref{sec:lgcp}. We follow the convention to choose $p_{K}$ as a Poisson($\tau=1$) distribution truncated to take only positive values \citep{miller2018mixture,geng2019bayesian}. The prior distributions for $\bm{\Theta}_{k}$'s are specified to facilitate Bayesian inference via Gibbs sampling by taking advantage of the Normal-Normal and Normal-inverse-Wishart conjugacy. We will discuss more details about the numerical implementation in the later sections. The matrix normal mixture model has been previously studied in \citet{viroli2011finite} under the EM framework and in \citet{gao2018regularized} by imposing regularization on the mean structure for sparsity structure. However, in both works, it remains challenging to conduct full inference on the number of clusters and the cluster parameters simultaneously. \citet{viroli2011model} considered a Bayesian matrix normal mixture model and proposed to use birth and death MCMC algorithm for posterior inference. However, that method does not scale up to the size of the matrix and the theoretical property of the Bayesian estimators remains largely unknown. We will provide more details about computation and theoretical results, and highlight our contributions in the next two sections. \section{Bayesian Inference}\label{sec:inference} In this section, we present a Gibbs sampler that enables efficient Bayesian inference for the proposed model and adopt the Dahl's method \citep{dahl2006model} for post-processing the MCMC outputs. \subsection{MCMC Algorithm}\label{sec:mcmc} By exploiting the conditional conjugacy property in model specification \eqref{eq:MFM_MxN}, we derive a collapsed Gibbs sampler algorithm for efficient Bayesian inference. Detailed derivations of the full conditionals are provided in Sections S3 and S4 of the Supplementary Materials. For the basketball application, we find it plausible to assume that different mixture components share the same covariance structure, that is, $U_{1} = \cdots = U_{K} = U$ and $V_{1} = \cdots = V_{K} = V$. Extension to allow distinct covariances for different clusters is possible by considering auxiliary parameters when updating indicator variables $Z_i, i=1,\ldots,n$ using the method in \citet{neal2000markov}. Based on the Algorithm 2 of \citet{neal2000markov}, we obtain the following proposition that provides the full conditional distribution of $Z_i, i=1,\ldots,n$ while collapsing the number of clusters $K$. \begin{prop} \label{prop:Z_update} The full conditional distributions $P(Z_i | Z_{-i}, \bm{\Theta} )$ is given by \begin{equation} P(Z_i | Z_{-i}, \bm{\Theta} ) \propto \begin{cases} (\#c + \gamma) f(Y_{i} | M_{Z_{i}}, U, V) & \text{at an existing cluster} \ $c$ \\ \frac{V_{n}(\#\mathcal{C}_{-i} + 1)}{V_{n}(\#\mathcal{C}_{-i})} \gamma m(Y_{i} | U, V) & \text{if} \ c \ \text{is a new cluster} \end{cases} , \end{equation} where $\#c$ refers to the cardinality of the cluster labeled as $c$, $f(Y_{i} | M_{Z_{i}}, U, V)$ is the density function of MxN defined in \eqref{eq:MxN}, $V_{n}(t)$ is a coefficient for the partition distribution defined as $$ V_{n}(t) = \sum_{k=1}^{\infty} \frac{k_{(t)}}{(\gamma k)^{(n)}} p_K(k), $$ with $k_{(t)} = k(k-1)\ldots(k-t+1)$, $(\gamma k)^{(n)} = \gamma k (\gamma k + 1)\ldots(\gamma k + n - 1)$, $\mathcal{C}_{-i}$ represents a partition of the set $\left\{1,2,\ldots,n\right\} \setminus \{i\}$, and let $\#\mathcal{C}_{-i}$ denote the number of blocks in the partition $\mathcal{C}_{-i}$. Also, we define $m(Y_{i} | U, V)$ as \begin{equation*} \frac{\exp(-\frac{1}{2}[\text{vec}(Y_{i})^{\intercal}(V^{-1} \otimes U^{-1})\text{vec}(Y_{i}) + \text{vec}(M_{0})^{\intercal} (\Omega_{0}^{-1} \otimes \Sigma_{0}^{-1}) \text{vec}(M_{0}) - \tilde{\mu}^{\intercal} \tilde{\Sigma}^{-1} \tilde{\mu} ])}{(2\pi)^{pq/2}|V|^{p/2} |U|^{q/2} |\Omega_{0}|^{p/2} |\Sigma_{0}|^{q/2}} |\tilde{\Sigma}|^{pq/2}, \end{equation*} where $\tilde{\Sigma}^{-1} = V^{-1} \otimes U^{-1} + \Omega_{0}^{-1} \otimes \Sigma_{0}^{-1} $ and $\tilde{\mu} = \tilde{\Sigma} [ (V^{-1} \otimes U^{-1}) \text{vec}(Y_{i}) + (\Omega_{0}^{-1} \otimes \Sigma_{0}^{-1}) vec(M_{0}) ]$. \end{prop} The derivation of $m(Y_{i} | U, V)$ in Proposition \ref{prop:Z_update} is given in the Supplementary Materials. Our collapsed Gibbs sampler algorithm for proposed model is summarized as Algorithm 1 in Section S5 of the Supplementary Materials. We make the following recommendations for hyperparameter values in priors: \begin{itemize} \item $\alpha = (p+1)/2$, $\psi = (q+1)/2$, $(2\beta)^{-1} = I_{p}$, $(2\rho)^{-1} = I_{q}$, which ensure the prior distributions for covariance matrices to be fairly diffuse while scale parameters $2\beta$, $2\rho$ are chosen to possess the simplest possible forms. \item $\gamma = 3$, which puts low probability on small group sizes. \item Set $M_0$ as the (element-wise) middle point of the observations. \item $\Sigma_0 = \text{diag}(\sigma_{1}^{2}, \ldots, \sigma_{p}^{2})$, $\Omega_0 = \text{diag}(\omega_{1}^2,\ldots,\omega_{q}^2)$, where $\sigma_{1},\ldots,\sigma_{p}$ and $\omega_{1},\ldots,\omega_{q}$ are equal to half of the ranges along the respective rows and columns. \end{itemize} Numerical experiments have confirmed that the above hyperparameters work well, and hence these values will be used for all simulation studies and case studies in this paper. \subsection{Post MCMC Inference}\label{sec:post_mcmc} We carry out posterior inference on the group memberships using Dahl's method \citep{dahl2006model}, which proceeds as follows, \begin{itemize} \item \emph{Step 1.} Define membership matrices $\mathcal{A}^{(l)} =(\mathcal{A}^{(l)}(i,j))_{i,j \in \left\{1,\ldots,n\right\} } = (\mathbbm{1}(Z_{i}^{(l)} = Z_{j}^{(l)}))_{n \times n}$, where $l = 1, \ldots, L$ is the index for the retained MCMC draws after burn-in, and $\mathbbm{1}(\cdot)$ is the indicator function. \item \emph{Step 2.} Calculate the element-wise mean of the membership matrices $\bar{\mathcal{A}} = \frac{1}{L} \sum_{l=1}^{L} \mathcal{A}^{(l)}$. \item \emph{Step 3.} Identify the most \emph{representative} posterior draw as the one that is closest to $\bar{\mathcal{A}}$ with respect to the element-wise Euclidean distance $\sum_{i=1}^{n} \sum_{j=1}^{n} (\mathcal{A}^{(l)}(i,j) - \bar{\mathcal{A}}(i,j))^{2}$ among the retained $l = 1,\ldots,L$ posterior draws. \end{itemize} The posterior estimates of cluster memberships $Z_1,\ldots,Z_n$ and other model parameters $\bm{\Theta}$ can be also obtained using Dahl's method accordingly. \section{Theory}\label{sec:theory} Next we study the theoretical properties for the posterior distribution obtained from model \eqref{eq:MFM_MxN}. In order to establish the posterior contraction results, we consider a refined parameter space $\bm{\Theta^*}$ defined as $\cup_{k=1}^{\infty} \bm{\Theta_k^*}$, where $\bm{\Theta_k^*}$ corresponds to the compact parameter space for all the model parameters (i.e., mixture weights, matrix normal mean and covariances) given a fixed cluster number $K=k$. More precisely, we define $\bm{\Theta_k^*}$ as \begin{align*} \Big\{&w_1,\ldots,w_k \in (\epsilon, 1-\epsilon),~ \sum_{i}^k w_i = 1, ~~ M_1,\ldots,M_k \in (-C_1, C_1)^{p \times q}, \\ & \sigma_{1}(U_i), \ldots, \sigma_p(U_i) \in (\underline{\sigma}, \bar{\sigma}), ~~ e_{1}(U_i),\ldots,e_p(U_i) \in (-C_2,C_2)^p ~~\text{for every}~i=1,\ldots,k,\\ & \sigma_{1}^*(V_j), \ldots, \sigma_q^*(V_j) \in (\underline{\sigma}, \bar{\sigma}), ~~ e_{1}^*(V_j),\ldots,e_q^*(V_j) \in (-C_3,C_3)^q ~~\text{for every}~j=1,\ldots,k. \Big\}, \end{align*} where $\epsilon, \underline{\sigma}, \bar{\sigma}, C_1,C_2,C_3$ are some positive constants, $\{\sigma_1(U_i),\ldots,\sigma_p(U_i); e_1(U_i),\ldots,e_p(U_i)\}$, $\{\sigma_1^*(V_j),\ldots,\sigma_q(V_j); e_1^*(V_j),\ldots,e_q^*(V_j)\}$ are eigenvalues and eigenvectors for matrix $U_i$ and $V_j$, respectively. We also define the mixing measure as $G = \sum_{i=1}^k w_i \delta_{\gamma_i}$, where $\delta$ is the point mass measure, and $\gamma_i = \{M_i, U_i, V_i \}$ is the collection of parameters for the matrix normal distribution in cluster $i$ for $i=1,\ldots,k$. For two sequence of real numbers $\{a_n\}$ and $\{b_n\}$, we define $a_n \lesssim b_n$ if there exists a universal positive constant $C$ whose value is independent of $n$ such that $a_n \leq C b_n$. For any two mixing measures $G_1 = \sum_{i=1}^k p_i \delta_{\gamma_i}$ and $G_2 = \sum_{j=1}^{k'} p_j' \delta_{\gamma_j}$, we define their Wasserstein distance as $W(G_1,G_2) = \inf_{q \in \mathcal{Q}} \sum_{i,j} q_{ij} \|\gamma_i - \gamma_j\| $, where $\| \cdot \|$ is the element-wise $L_2$-distance, $\mathcal{Q}$ denotes the collection of joint discrete distribution on the space of $\{1,\ldots,k\} \times \{1,\ldots,k'\}$ and $q_{ij} $ is the probability being associated with $(i,j)$-element and it satisfies the constraint that $\sum_{i=1}^k q_{ij} = p_j'$ and $\sum_{j=1}^{k'} q_{ij} = p_i$, for every $i=1,\ldots,k$ and $j=1,\ldots,k'$. Let $K_0$, $G_0$, $P_0$ be the true number of clusters, the true mixing measure, and the corresponding probability measure, respectively. Then the following theorem establishes the posterior consistency and contraction rate for the cluster number $K$ and mixing measure $G$. The proof is given in Supplementary Materials, Section S6; and it is based on the general results for Bayesian mixture models in \citet{guha2019posterior}. \begin{theorem}\label{thm1} Let $\Pi_n(\cdot \mid Y_1,\ldots,Y_n)$ be the posterior distribution obtained from \eqref{eq:MFM_MxN} given a random sample $Y_1,\ldots,Y_n$. Assume that the parameters of interest are restricted to $\bm{\Theta^*}$. Then we have \begin{align*} \Pi_n(K = K_0 \mid Y_1,\ldots,Y_n) \rightarrow 1, ~\text{and}~~ \Pi_n (W(G,G_0)\lesssim (\log n/n)^{-1/2} \mid Y_1,\ldots,Y_n) \rightarrow 1, \end{align*} almost surely under $P_0$ as $n \rightarrow \infty$. \end{theorem} Theorem \ref{thm1} shows that our proposed Bayesian method is able to correctly identify the unknown number of clusters and the latent clustering structure with posterior probability tending to one as the sample size increases. The requirement of a compact parameter space $\bm{\Theta^*}$ is commonly used in the Bayesian nonparametrics literature \citep{guha2019posterior}, and it is practically relevant since the model parameters are expected to take values in a pre-specified range. For example, it is reasonable to assume that the mixture weights are greater than some extremely small number such as $.001\%$ to yield meaningful clustering results. \section{Simulation}\label{sec:simu} \subsection{Simulation Setup}\label{sec:setup} We conduct simulation studies to examine the finite-sample performance of the proposed method based on three evaluation metrics, (i) probability of choosing the correct number of clusters, (ii) Rand index \citep{rand1971objective}, and (iii) root mean squared error in estimating $V \otimes U$. Those three metrics serve as useful evaluation measures in terms of the model selection accuracy, clustering structure recovery performance, and parameter estimation accuracy. We compare the performance of the proposed method with that of two classical benchmark methods, $K$-means clustering \citep{hartigan1979algorithm} and spectral clustering \citep{ng2002spectral}. Both methods take the vectorized matrices as the input. Those two benchmark methods are implemented using the built-in function \texttt{kmeans} and the function \texttt{specc} in R package \texttt{kernlab} \citep{kernlab} with a Gaussian kernel under default settings, respectively. The Rand index is calculated using the function \texttt{rand.index} in R package \texttt{fossil} \citep{vavrek2011fossil}. When generating the data, we consider two matrix sizes: (i) small matrix size, where $p=10$ and $q=6$, and (ii) large matrix size, where $p=25$ and $q=18$. For small matrix size, we generate three clusters of signals from matrix normal distributions with weights $\pi = (0.3, 0.3, 0.4)$ and the mean matrices $M_1, M_2, M_3 \in \mathcal{R}^{10\times6}$ displayed in the first row of Figure \ref{fig:shapes_10by6_simulation}, where the elements are coded as $1$ if corresponding regions are shaded, and $0$ otherwise. The row-wise covariance matrix $U$ is drawn from a standard Wishart distribution with $\nu = 11$ and dimension $10$ (to ensure that the marginal variance of the noise is equal to $\sigma^2$, $U$ is converted to a correlation matrix), and the column-wise covariance matrix $V$ is a $6 \times 6$ AR(1) matrix with $\rho = 0.9$ (i.e., $V = \Sigma_{AR(1), 0.9, 6}$). We set the total sample size $n \in \{ 100, 200, 400\}$. To examine the performance of the proposed method and the other two competitive methods under different noise levels, we also consider another setting under which the row-wise covariance matrix $V = 0.5^2 \times \Sigma_{AR(1), 0.9, 6}$. We run $100$ Monte-Carlo replications, and for each replication we run MCMC chains for $1500$ iterations, where the first $1000$ draws are discarded as burn-in for the experiments on small size matrix. \begin{figure} \centering \def\tabularxcolumn#1{m{#1}} \begin{tabularx}{\linewidth}{@{}cXX@{}} \begin{tabular}{lcr} \subfloat[$M_1$]{\includegraphics[width=0.27\textwidth]{shape_10by6_simulation_M1.jpg}} & \subfloat[$M_2$]{\includegraphics[width=0.27\textwidth]{shape_10by6_simulation_M2.jpg}} & \subfloat[$M_3$]{\includegraphics[width=0.27\textwidth]{shape_10by6_simulation_M3.jpg}} \\ \subfloat[$\hat{M}_1$, $n=100$]{\includegraphics[width=0.27\textwidth]{shape_10by6_simulation_n100_recovered_M1.jpg}} & \subfloat[$\hat{M}_2$, $n=100$]{\includegraphics[width=0.27\textwidth]{shape_10by6_simulation_n100_recovered_M2.jpg}} & \subfloat[$\hat{M}_3$, $n=100$]{\includegraphics[width=0.27\textwidth]{shape_10by6_simulation_n100_recovered_M3.jpg}}\\ \subfloat[$\hat{M}_1$, $n=200$]{\includegraphics[width=0.27\textwidth]{shape_10by6_simulation_n200_recovered_M1.jpg}} & \subfloat[$\hat{M}_2$, $n=200$]{\includegraphics[width=0.27\textwidth]{shape_10by6_simulation_n200_recovered_M2.jpg}} & \subfloat[$\hat{M}_2$, $n=200$]{\includegraphics[width=0.27\textwidth]{shape_10by6_simulation_n200_recovered_M3.jpg}}\\ \subfloat[$\hat{M}_1$, $n=400$]{\includegraphics[width=0.27\textwidth]{shape_10by6_simulation_n400_recovered_M1.jpg}} & \subfloat[$\hat{M}_2$, $n=400$]{\includegraphics[width=0.27\textwidth]{shape_10by6_simulation_n400_recovered_M2.jpg}} & \subfloat[$\hat{M}_3$, $n=400$]{\includegraphics[width=0.27\textwidth]{shape_10by6_simulation_n400_recovered_M3.jpg}}\\ \end{tabular} \end{tabularx} \caption{True signals and representative draws from recovered signals by MFM under $n=100, 200, 400$ and high noise level ($V = \Sigma_{AR(1), 0.9, 6}$). } \label{fig:shapes_10by6_simulation} \end{figure} For large matrix size, we consider three mean patterns shown in Figure \ref{fig:log_inten_simu}, each of which is designed to represent a prevalent offensive style in the NBA shooting chart data. From the log intensity maps in Figure~\ref{fig:log_inten_simu}, we note that the Group 2 represents all-around players such as LeBron James; Group 3 represents three-point shooters such as Eric Gordon; and Group 1 represents inside players such as Steven Adams. With the mean structure being the log intensity shown in Figure \ref{fig:log_inten_simu}, we generate our simulated data from matrix normal distributions with column-wise covariance matrix $V = \sigma^2 \times \Sigma_{AR(1), \rho, 25}$ and row-wise covariance matrix $U$ drawn from a standard Wishart distribution with $\nu = 19$ and dimension $18$ (to ensure that the marginal variance of the noise is equal to $\sigma^2$, $U$ is converted to a correlation matrix). We fix $n=200$ to mimic the number of players in the motivating data example, and choose $\pi=(0.3,0.4,0.3)$. We also run $100$ replicates and for each replicate we run a MCMC chain for $1200$ iterations, where the first $600$ draws are discarded as burn-in. To ensure a comparison that is as fair as possible, for each replication, the number of clusters for $K$-means and spectral clustering are chosen as the same number of clusters obtained from the proposed MxN-MFM. The MCMC settings are chosen based on pilot runs, and the overlaid traceplots of Rand index further justify the validity of the chosen MCMC settings, which are shown in the Supplementary Materials. All computations presented in this paper were performed in R (version 3.6.0) \citep{R-Team} on a computing server (256GB RAM, with 8 AMD Opteron 6276 processors, operating at 2.3 GHz, with 8 processing cores in each). \begin{figure}[htp] \centering \includegraphics[width=0.8\textwidth]{simulation11.pdf} \caption{Log Intensity Maps of Three Patterns} \label{fig:log_inten_simu} \end{figure} \subsection{Simulation Results}\label{sec:simu_results} \subsubsection{Small matrix size results} We first present the results for small matrix size ($p=10, q=6)$. Table \ref{tb:RI_10by6} shows the mean Rand index for three different methods under different sample size and noise levels. It is clear that the proposed method (MFM-MxN) outperforms $K$-means and spectral clustering under all scenarios, and its advantage is more salient when the noise level is higher, which is particularly important for clustering. The clustering accuracy (mean Rand index $> 0.95$) of the proposed MFM-MxN method is also compelling at the absolute scale, as the rule-of-thumb Rand index threshold value for ``good clustering'' is $0.80$. We then summarize the distribution of estimated number of clusters for MFM-MxN in Table \ref{tb:Khat_10by6}. We note that the probability of identifying the correct number of clusters is very satisfactory ($>80\%$) for the proposed MFM-MxN, and this probability increases as the sample size increases. These results confirm the benefit of taking account for the matrix structure and the flexibility of full Bayesian inference in the clustering analysis. We also present the RMSE for the estimation of covariance matrix $V \otimes U$ in Figure \ref{fig:VU_RMSE_10by6}. It is clear that the estimation accuracy improves as the noise level drops down and when the sample size increases. This is also confirmed in Figure \ref{fig:shapes_10by6_simulation} where the recovered signals are significantly less noisy and better recapitulate the true signals as the sample size increases. \begin{table}[htp] \centering \caption{Simulation results for small matrix size: Mean Rand index obtained from MFM-MxN, $K$-means and Spectral Clustering under different sample size and noise levels (high: $V = \Sigma_{AR(1), 0.9, 6}$, low: $V = 0.5^2 \times \Sigma_{AR(1), 0.9, 6}$) based on 100 Monte-Carlo replications. \label{tb:RI_10by6}} \begin{tabular}{l|lll|lll} & & $\sigma=1$ & & & $\sigma=0.5$ & \\ \hline & MFM & $K$-means & Spectral & MFM & $K$-means & Spectral \\ \hline $n=100$ & $\bm{0.977}$ & 0.558 & 0.559 & $\bm{0.964}$ & 0.837 & 0.886 \\ $n=200$ & $\bm{0.958}$ & 0.550 & 0.552 & $\bm{0.967}$ & 0.846 & 0.911 \\ $n=400$ & $\bm{0.984}$ & 0.553 & 0.555 & $\bm{0.979}$ & 0.878 & 0.956\\ \bottomrule \end{tabular} \end{table} \begin{table}[htp] \centering \caption{Simulation results for small matrix size: Percentage ($\%$) of selected number of clusters $\hat{K}$ for MFM-MxN under different sample sizes and noise levels (high: $V = \Sigma_{AR(1), 0.9, 6}$, low: $V = 0.5^2 \times \Sigma_{AR(1), 0.9, 6}$) based on $100$ Monte-Carlo replicates. The true number of clusters is 3. \label{tb:Khat_10by6}} \begin{tabular}{l|lll|lll} \toprule noise & & high & & & low & \\ \hline $\hat{K}$& 2 & 3 & 4 & 2 & 3 & 4 \\ \hline $n=100$ & 10 & $\bm{90}$ & 0 & 16 & $\bm{84}$ & 0 \\ $n=200$ & 18 & $\bm{82}$ & 0 & 14 & $\bm{86}$ & 0\\ $n=400$ & 7 & $\bm{93}$ & 0 & 9 & $\bm{91}$ & 0 \\ \bottomrule \end{tabular} \end{table} \begin{figure}[htp] \centering \includegraphics[width=.8\textwidth]{VU_RMSE_10by6_all.pdf} \caption{Histograms of RMSE (over 100 replicates) for the covariance estimates ($\hat{V} \otimes \hat{U}$) under different sample sizes ($n=100, 200, 400$) and noise levels (high: $V = \Sigma_{AR(1), 0.9, 6}$, low: $V = 0.5^2 \times \Sigma_{AR(1), 0.9, 6}$). \label{fig:VU_RMSE_10by6}} \end{figure} \subsubsection{Large matrix size results} In Tables \ref{tb:Khat_25by18} and \ref{tb:RI_25by18}, we present the results for large matrix size data ($p=25$, $q=18$). Similar to previous findings, our proposed method is able to correctly find the true number of clusters at least $80\%$ of the time for different settings. Our method also has the highest average Rand index ($> 0.95$), which is higher than that of the other two benchmark methods, indicating that the proposed method is very powerful in terms of recovering the latent clustering structure, even for matrix data of larger size. \begin{table}[htp] \centering \caption{Simulation results for large matrix size: Percentage ($\%$) of number of clusters for MFM-MxN under different noise levels ($V = \sigma^2 \Sigma_{AR(1), \rho, 6}$) based on $100$ replicates. The true number of clusters is 3. \label{tb:Khat_25by18}} \begin{threeparttable} \begin{tabular}{l|lll|lll|lll} & \multicolumn{3}{c|}{$\sigma=1.5$} & \multicolumn{3}{c|}{$\sigma=1.0$} & \multicolumn{3}{c}{$\sigma=0.5$} \\ \hline & 2 & 3 & 4 & 2 & 3 & 4 & 2 & 3 & 4 \\ \hline $\rho=0.9$ & 11 & $\bm{89}$ & 0 & 11 & $\bm{89}$ & 0 & 11 & $\bm{89}$ & 0 \\ $\rho=0.6$ & 13 & $\bm{87}$ & 0 & 13 & $\bm{87}$ & 0 & 12 & $\bm{88}$ & 0 \\ $\rho=0.3$ & 0 & $\bm{94}$\tnote{\dag} & 2 & 13 & $\bm{87}$ & 0 & 11 & $\bm{89}$ & 0 \\ \bottomrule \end{tabular} \begin{tablenotes} \centering \footnotesize \item[\dag] Results for four runs are not shown due to large estimated cluster numbers. \end{tablenotes} \end{threeparttable} \end{table} \begin{comment} \begin{table}[htp] \centering \caption{$p=25$, $q=18$. Estimated number of clusters for MFM-MxN, noise levels ($V = \sigma^2 \Sigma_{AR(1), \rho, 6}$, $\sigma=1.5, 1.0, 0.5$, $\rho = 0.9, 0.6, 0.3$) across $50$ replicates. The true number of clusters is 3. \label{tb:Khat_25by18}} \begin{threeparttable} \renewcommand{\arraystretch}{1.3} \begin{tabular}{l|lll} \toprule $\hat{K}$ & 2 & 3 & 4 \\ \midrule $\sigma=1.5$ & & & \\ \quad $\rho=0.9$ & 8 & 42 & 0 \\ \quad $\rho=0.6$ & 9 & 41 & 0 \\ \quad $\rho=0.3$\tnote{\dag} & 0 & 49 & 0 \\ \midrule $\sigma=1.0$ & & & \\ \quad $\rho=0.9$ & 8 & 42 & 0 \\ \quad $\rho=0.6$ & 8 & 42 & 0 \\ \quad $\rho=0.3$ & 9 & 41 & 0 \\ \midrule $\sigma=0.5$ & & & \\ \quad $\rho=0.9$ & 9 & 41 & 0 \\ \quad $\rho=0.6$ & 10 & 40 & 0 \\ \quad $\rho=0.3$ & 10 & 40 & 0 \\ \bottomrule \end{tabular} \begin{tablenotes} \centering \footnotesize \item[\dag] A run is discarded due to large estimated cluster numbers. \end{tablenotes} \end{threeparttable} \end{table} \end{comment} \begin{comment} \begin{table}[htp] \centering \caption{Mean Rand index under different methods (MxN-MFM, $K$-means and Spectral Clustering), noise levels ($\sigma = 1.5, 1.0, 0.5$) and correlation structures ($\rho = 0.9, 0.6, 0.3$), across 50 replicates. \label{tb:RI_25by18}} \fbox{\begin{tabular}{l|lll} & MFM & $K$-means & Spectral \\ \midrule $\sigma=1.5$ & & & \\ \quad $\rho=0.9$ & 0.963 & 0.917 & 0.918 \\ \quad $\rho=0.6$ & 0.957 & 0.917 & 0.945 \\ \quad $\rho=0.3$ & 1.000 & 0.959 & 0.984 \\ \midrule $\sigma=1.0$ & & & \\ \quad $\rho=0.9$ & 0.963 & 0.910 & 0.934 \\ \quad $\rho=0.6$ & 0.957 & 0.899 & 0.929 \\ \quad $\rho=0.3$ & 0.957 & 0.917 & 0.946 \\ \midrule $\sigma=0.5$ & & & \\ \quad $\rho=0.9$ & 0.959 & 0.905 & 0.951 \\ \quad $\rho=0.6$ & 0.953 & 0.895 & 0.924 \\ \quad $\rho=0.3$ & 0.953 & 0.923 & 0.930 \\ \end{tabular}} \end{table} \end{comment} \begin{table}[htp] \centering \caption{Simulation results for large matrix size: Mean Rand index for MFM-MxN, $K$-means and Spectral Clustering under different noise level $\sigma$ and correlation strength $\rho$, based on 50 replicates. \label{tb:RI_25by18}} \resizebox{\textwidth}{!}{\begin{tabular}{l|lll|lll|lll} & & $\sigma=1.5$ & & & $\sigma = 1.0$ & & & $\sigma = 0.5$ & \\ \hline & MFM & $K$-means & Spectral & MFM & $K$-means & Spectral & MFM & $K$-means & Spectral \\ \hline $\rho=0.9$ & $\bm{0.963}$ & 0.917 & 0.918 & $\bm{0.963}$ & 0.910 & 0.934 & $\bm{0.959}$ & 0.905 & 0.951\\ $\rho=0.6$ & $\bm{0.957}$ & 0.917 & 0.945 & $\bm{0.957}$ & 0.899 & 0.929 & $\bm{0.953}$ & 0.895 & 0.924 \\ $\rho=0.3$ & $\bm{1.000}$ & 0.959 & 0.984 & $\bm{0.957}$ & 0.917 & 0.946 & $\bm{0.953}$ & 0.923 & 0.930 \\ \bottomrule \end{tabular}} \end{table} \section{Application to NBA Shot Chart Data analysis}\label{sec:real_data} In this section, we apply the proposed method to investigate the shooting pattern of players in the 2017-2018 NBA regular season. Our analysis is purely based on the location of shots and hence the resulting clusters are completely data-driven without considering other information about players or their teams. The shots that are made 36 ft away from the baseline are not included in this analysis as they are usually not part of the regular tactics. We start by obtaining the intensity surface\footnote{The resulting intensity surface is scaled to adjust for the number of games played by the respective player.} for each player by fitting an LGCP to raw shot location data using off-the-shelf functions in R package \texttt{inlabru} \citep{inlabru}. The logarithm of each intensity surface is then discretized to generate a $25$ by $18$ matrix as the main variable of interest. To implement our method, we run $50$ independent MCMC chains with random initial values, each of which has $6000$ iterations where the first $4000$ are discarded as burn-in to ensure the convergence. We select a representative chain that has the highest mean concordance value (mean Rand index = $0.82$) compared to all other chains with respect to the clustering memberships. This representative chain yields 3 groups of size 71, 23 and 97, respectively. Visualizations of intensity matrices with contour for selected players from these three groups are presented in Figure~\ref{fig:real_data_group}. A full list of player names for each group are given in the Supplementary Materials, Section S1. We also plot the estimated covariance matrices in Section S7. We find that both the column-wise covariance $\hat{U}$ and row-wise covariance matrix $\hat{V}$ enjoy a banded structure, i.e., the correlations excluding the diagonal and off-diagonal entries are quite small, which confirms that our method is able to take spatial/location information into account by modeling the matrix structure in the data appropriately. \begin{figure}[htp] \centering \includegraphics[width=0.7\textwidth]{log_intensity_groups.pdf} \caption{Log Intensity Matrices with Contour for Selected Players} \label{fig:real_data_group} \end{figure} Several interesting observations can be made from the visualization results. We see that for the players in Group 1, they are able to make all types of shots, including three-pointers, perimeter shot, and also shots over the painted area. However, compared with the players in Group 3, they have less three-pointers. Most players in this group are Power Forward and Small Forward. However, we can still find some players such as Dwyane Wade, Ricky Rubio and Rajon Rondo in this group. This can be explained by the fact that the three players mentioned above do not have a good three-point shot percentage and tend to attack the basket in paint area. The players in Group 2 have most shots located near the hoop. They are good at making alley-oops and slam dunks, however, not good at making three-pointers. Most of them are Center. There is still an interesting player in this group, Dejounte Murray. He plays very similarly with Tony Park who is a previous player of Spurs and makes more field goal attempts on paint area like the Center. For the players in Group 3, we find that they have more three-pointers compared to the other two groups, and the players in this group are almost all Shooting Guard. Although we still find that Kevin Durant belongs to this group, because he is an all-rounder and has an excellent ability of scoring three-pointers. In addition, there are some inside players in this group such as Kevin Love, Kelly Olynyk, and DeMarcus Cousins. This reflects the recent trend that the NBA teams start to prefer or even require their inside players to shoot from downtown and release the space in paint area. Our findings also confirm that the number of pure inside players decreases as the three-pointer becomes a conventional weapon for most players. \section{Discussion} \label{sec:discussion} In this paper, we propose a novel Bayesian nonparametric clustering approach for learning the latent heterogeneity in matrix-valued response data. Building upon the mixture of finite mixtures framework, we develop a collapsed Gibbs sampler for efficient Bayesian inference and adopt Dahl's method for post MCMC inference. Numerical results have confirmed that the proposed method is able to simultaneously infer the number of clusters and the model parameters with high accuracy. Comparing to the traditional clustering techniques such as $K$-means and spectral clustering, the proposed method is able to improve the clustering performance especially when the noise level is high for the reason that the rich spatial location information is incorporated by handling the data in the matrix format. In the analysis of the NBA shot charts data, three prevalent shooting patterns along with the respective players are identified. The results provide valuable insights to both players and managers - players can obtain more comprehensive understandings of their current attacking patterns, and hence develop further training plans accordingly; the managers can be equipped with more \emph{objective} and principled analysis of shooting patterns of the players in the league, and hence make better data-informed decisions on player recruiting. A few topics beyond the scope of this paper are worth further investigation. A natural extension is Bayesian clustering for general multi-way data such as tensors. Also, as matrix inverse is required at each MCMC update, the proposed estimation procedure can be slow when the matrix size is very large. Proposing an efficient algorithm for large matrix data devotes an interesting future work, and we envision low-rank approximation and sparse compression as two promising directions to mitigate this computational challenge. Finally, jointly estimating intensity surface and grouping information is another interesting direction for future work. \section*{Acknowledgements} The authors would like to thank Dr.~Yishu Xue and Dr.~Hou-Cheng Yang for providing the organized data which include the raw shot charts and estimated intensity maps via INLA and R code for data visualization. \section*{Supplementary Materials}\vspace{-2mm} Technical details about the posterior derivation, proof of theorems, additional numerical results are provided in the Online Supplementary Materials. R code and the data for the computations of this work are available at https://github.com/fyin-stats/MFM-MxN. \begin{comment} \begin{description} \item[code:] The supplemental files for this article include R programs which can be used to replicate the simulation study included in the article. \item[Appendix:] The supplemental files include the proof of Theorem 1, and additional Figures from the numerical studies. \end{description} \end{comment} \bibliographystyle{abbrvnat}
{ "attr-fineweb-edu": 2.154297, "attr-cc_en_topic": 0, "domain": "arxiv" }
BkiUeB3xaJJQnK0WJqpI
\section{Introduction} \noindent Different types of sports have different game rules, frequency of games, style, etc that make the sport unique and interesting. In particular, how often games are held is a factor that differs a lot across different sports. Baseball, which is a sport that the league is very popular in countries like United States (Major League Baseball, or MLB), South Korea (Korea Baseball Organization league, or KBO league), and Japan (Nippon Professional Baseball, or NPB), has a feature that each team plays a game nearly everyday. This is in contrast to, say, the English Premier League (the soccer league in England), where each team has a game once or twice a week. In particular, in the KBO league, each of the ten teams has a match everyday except for Mondays. This feature that there is a game nearly everyday could be associated somehow to a team's performance in a game. For instance, a team might be more likely to do well on a game if it had won all the recent five games compared to the case where it had lost all the recent three games. In fact, when discussing a preview or prediction on a game, many KBO league news articles mention the number of consecutive wins or losses the team currently has. This paper focuses on the KBO league, in particular how the consecutiveness of the schedule is related to each team's outcome. More specifically, this paper examines, for each of the ten teams in the KBO league, if we were to model the game outcomes (represented as a single sequence) as a $k^{\text{th}}$ order Markov chain, which value of $k$ is the most effective. In section 2, we introduce the KBO league in general, and in section 3, the higher-order Markov chain model whose possible states are win ($``W"$), draw ($``D"$), and loss ($``L"$), particularly the one used in the \textsf{markovchain} R package$^{4}$, is discussed. Then in sections 4 and 5, how we assess the model fit and the actual model results are reported, and lastly in section 6 we discuss conclusions and potential future work. \section{KBO League Introduction} \noindent In this section, we introduce the KBO league in general. The KBO league began in 1982 with six teams back then: Haitai Tigers, Lotte Giants, MBC Blue Dragons, OB Bears, Sammi Superstars, and Samsung Lions$^{1}$. With some teams being the successor of the teams back then in 1982, currently the KBO league has ten teams competing: Doosan Bears, SK Wyverns, Hanwha Eagles, Nexen Heroes, LG Twins, Samsung Lions, Lotte Giants, KIA Tigers, KT Wiz, and NC Dinos. Unlike in the MLB (where team names represent the home location: e.g. New York Yankees), the KBO league team names represent the sponsor corporation. Furthermore, unlike in MLB where the game doesn't end as a draw (or a tie) except for exceptional reasons like weather (or sometimes darkenss), in a KBO league game, if the two teams have the same score after the 12th inning, the game ends as a draw. The league does not have sub-leagues. Rather, the ten teams together compete in the pennant race in such a way that each of the ten teams face the other nine teams 16 times, eight home games and eight away games, thereby playing total 144 games in the regular season. In post-season, the 5th place team competes in a wild-card round against the 4th place team. In the wild-card round, if the 4th place wins the first game, then the round is immediately over with the 4th place going to the semi-playoffs, but if the 5th place wins the first game, then the two teams compete in the second game where that game's winner goes to the semi-playoffs. The wild-card round victor faces the 3rd place team in the semi-playoffs with a best-of-five format, and the semi-playoffs victor faces the 2nd place in the playoffs, also in best-of-five. Finally, the playoffs victor plays against the 1st place team in the final round called the Korean Series with a best-of-seven format. Note that the rules mentioned in this paragraph could change in future seasons (for example, in the 2015 season the total number of games changed from 128 to 144), but at least in the 2018 season, those rules are applied$^{2}$. Table 1 shows the ranking of the KBO league 2018 as of August 18th, 2018, which is right before the Asian Games break (the KBO league in 2018 has a break of approximately 3 weeks since some of the players go to the Jakarta-Palembang Asian Games 2018 as part of the South Korean national team). \begin{table}[htbp] \begin{center} \begin{tabular}{|c|c|c|c|c|c|c|c|} \hline Rank & Team & Games & Wins & Draws & Losses & Winning rate & Games behind \\ \hline 1 & Doosan Bears & 113 & 73 & 0 & 40 & 0.646 & 0.0 \\ \hline 2 & SK Wyverns & 112 & 62 & 1 & 49 & 0.559 & 10.0 \\ \hline 3 & Hanwha Eagles & 114 & 62 & 0 & 52 & 0.544 & 11.5 \\ \hline 4 & Nexen Heroes & 118 & 61 & 0 & 57 & 0.517 & 14.5 \\ \hline 5 & LG Twins & 116 & 56 & 1 & 59 & 0.487 & 18.0 \\ \hline 6 & Samsung Lions & 116 & 54 & 3 & 59 & 0.478 & 19.0 \\ \hline 7 & Lotte Giants & 110 & 51 & 2 & 57 & 0.472 & 19.5 \\ \hline 8 & KIA Tigers & 110 & 51 & 0 & 59 & 0.464 & 20.5 \\ \hline 9 & KT Wiz & 113 & 47 & 2 & 64 & 0.423 & 25.0 \\ \hline 10 & NC Dinos & 116 & 47 & 1 & 68 & 0.409 & 27.0 \\ \hline \end{tabular} \caption{KBO League 2018 Rank (as of August 18th, 2018)} \label{tab:num1} \end{center} \end{table} \section{Higher-Order Markov Chain Model} \noindent The goal of this paper is to use higher-order Markov chains to model the game outcomes for each team in the KBO league. This section introduces the higher-order Markov chain model and parameter estimation methods. The notations and formulations of the model discussed in this section follow Chapter 6 in Ching, Huang, Ng, and Siu (2013)$^{3}$, in which the \textsf{markovchain} R package$^{4}$, which we use for the computation in this study, was implemented based on. \subsection{First-order Markov Chain} First, we briefly introduce the (discrete time) first-order Markov chain, usually referred to just ``Markov chain". Let the data sequence $\{ X^{(i)} \}_{i=1}^{n} = \{ X^{(1)}, X^{(2)}, \cdots, X^{(n)} \} $ be a stochastic process where each $X^{(i)}$ can take finite or countable number of possible values. We call such possible values `states'. For example, a stochastic process whose possible states are ``sunny'', ``cloudy'', and ``rainy'' may produce a data sequence of $\{ \text{``cloudy''}, \text{``cloudy''}, \text{``rainy''}, \text{``sunny''}, \text{``rainy''} \}$. The set of all possible states is called the `state space', which can consist of essentially anything: numbers, letters, weather conditions, baseball game outcomes, etc. In this paper, we let $S$ denote the state space, and let $m$ denote the number of possible states (i.e. $m = |S|$). The name ``Markov chain" comes from the (first-order) Markov property. The property states, or assumes, that the next state $X^{(n)}$ is conditionally independent of all the states so far (i.e. $X^{(n-1)}, X^{(n-2)}, \cdots, X^{(1)}$) given the current state $X^{(n-1)}$. That is: for any timestep $n$, \begin{equation} P(X^{(n)} = x_{new} | X^{(n-1)}=x_{n-1}, X^{(n-2)}=x_{n-2}, \cdots, X^{(1)}=x_{1}) = P(X^{(n)} = x_{new} | X^{(n-1)}=x_{n-1}) \end{equation} Intuitively, we wander around the states in the state space, and the most recent past is only what matters for the present. Moreover, the model assumes that for each pair of states $(i, j)$, there is a fixed transition probability $p_{ij}$, which is the probability that the process moves to state $i$ given that it's currently at state $j$. The chain always decides its state at the next timestep according to these transition probabilities, which can be represented as a single $m \times m$ matrix called the ``transition matrix". In our notation, the row $i$ \& column $j$ entry of the transition matrix has $p_{ij}$, the transition probability from state $j$ to state $i$. Intuitively, we can think of each column of the transition matrix representing the ``from" state, and each row being the ``to" state. Clearly, each column of the transition matrix must sum to 1. \subsection{Higher-order Markov Chain} In the first-order Markov chain model, the assumption was that the state at timestep $n$ only depends on the state at the timestep immediately before (i.e. $n-1$) and all the further past are meaningless. We can relax the assumption in such a way that the state at a timestep depends on more of the recent past. Formally, a $k^{th}$ order Markov chain assumes that the state at timestep $n$ only depends on the states at the recent $k$ timesteps (i.e. $n-1, n-2, \cdots, n-k$). That is: for any timestep $n$: \begin{equation} \begin{split} & \quad \enskip P(X^{(n)} = x_{new} | X^{(n-1)}=x_{n-1}, X^{(n-2)}=x_{n-2}, \cdots, X^{(1)}=x_{1}) \\ &= P(X^{(n)} = x_{new} | X^{(n-1)}=x_{n-1}, X^{(n-2)}=x_{n-2}, \cdots, X^{(n-k)}=x_{n-k}) \end{split} \end{equation} Notice that if we set $k=1$, then the model is equivalent to what was introduced in Section 3.1, and this is why it is called the ``first-order Markov chain". Furthermore, the $k^{th}$ order Markov chain model assumes that there is an $m \times m$ transition matrix $Q^{(l)}$ defined for each lag $l \in \{1, \cdots, k\}$. The row $i$ \& column $j$ entry of the $l$-step transition matrix $Q^{(l)}$ has the probability that the process will move to state $i$ after $l$ timesteps given that currently it's at state $j$. Again, clearly it must be true that each column of $Q^{(l)}$ sums to 1, $\forall l \in \{1, \cdots, k\}$. Also, each lag $l \in \{1, \cdots, k\}$ has a non-negative weight $\lambda_{l}$ with: \begin{equation} \sum_{l=1}^{k} \lambda_{l} = 1 \end{equation} Then, the model says: \begin{equation} \mathbf{X}^{(n+k+1)} = \sum_{l=1}^{k} \lambda_{l} Q^{(l)} \mathbf{X}^{(n+k+1-l)} \end{equation} where $\mathbf{X}^{(n+k+1-l)}$ is an $m \times 1$ vector that shows the probability distribution of the $m$ states at timestep $n+k+1-l$, which essentially shows, for each state $i$, if we draw this Markov chain process many times, what proportion of those simulations will be at state $i$ at timestep $n+k+1-l$. Equation (4) can be rewritten as: \begin{equation} P(X^{(n)} = x_{new} | X^{(n-1)}=x_{n-1}, X^{(n-2)}=x_{n-2}, \cdots, X^{(n-k)}=x_{n-k}) = \sum_{l=1}^{k} \lambda_{l} q_{x_{new}, x_{n-l}}^{(l)} \end{equation} where $q_{x_{new}, x_{n-l}}^{(l)}$ denotes the row $x_{new}$ \& column $x_{n-1}$ entry of the matrix $ Q^{(l)} $. It can be shown that if $Q^{(l)}$ is irreducible and aperiodic, $\lambda_{l} > 0$, and $\sum_{l=1}^{k} \lambda_{l} = 1$, then this model has a stationary distribution $\mathbf{X}$ that satisfies $\Big( \mathbf{I} - \sum_{l=1}^{k} \lambda_{l} Q^{(l)} \Big) \mathbf{X} = \mathbf{0}$ and also $\text{lim}_{n \rightarrow \infty} \mathbf{X}^{(n)} = \mathbf{X}$, where $\mathbf{I}$ denotes the $m \times m$ identity matrix, and $\mathbf{0}$ is the length-$m$ vector of all $0$'s. Now we discuss the methods for estimating the model parameters: $Q^{(l)}$ and $\lambda_{l}$ for each $\l \in \{1, \cdots, k\}$. Notice that this higher-order Markov chain model has $k + km^{2}$ parameters since each transition matrix $Q^{(l)}$ has $m^{2}$ entries. Again, assume we observe a data sequence of length $n$: $\{ X^{(t)} \}_{t=1}^{n} = \{ X^{(1)}, X^{(2)}, \cdots, X^{(n)} \} $. For every ordered pair of states $(i,j)$, for each lag $l \in \{1, \cdots, k\}$, we define the transition frequency $f_{ji}^{(l)}$ as the number of times in the given data sequence such that the process is at state $i$ and then after $l$ steps it is at state $j$. Naturally, we can write these altogether in matrix form: we define the $l$-step transition frequency matrix $F^{(l)}$ (of size $m \times m$) as: \begin{equation} F^{(l)} = \begin{bmatrix} f_{11}^{(l)} & f_{12}^{(l)} & \cdots & f_{1m}^{(l)} \\ f_{21}^{(l)} & f_{22}^{(l)} & \cdots & f_{2m}^{(l)} \\ \vdots & \vdots & \ddots & \vdots \\ f_{m1}^{(l)} & f_{m2}^{(l)} & \cdots & f_{mm}^{(l)} \\ \end{bmatrix} \end{equation} Of course, this matrix is defined for every lag $l \in \{1, \cdots, k\}$. Then, for each lag $l \in \{1, \cdots, k\}$, we can estimate the $l$-step transition matrix $Q^{(l)}$ as: \begin{equation} \hat{Q}^{(l)} = \begin{bmatrix} \hat{q}_{11}^{(l)} & \hat{q}_{12}^{(l)} & \cdots & \hat{q}_{1m}^{(l)} \\ \hat{q}_{21}^{(l)} & \hat{q}_{22}^{(l)} & \cdots & \hat{q}_{2m}^{(l)} \\ \vdots & \vdots & \ddots & \vdots \\ \hat{q}_{m1}^{(l)} & \hat{q}_{m2}^{(l)} & \cdots & \hat{q}_{mm}^{(l)} \\ \end{bmatrix} \end{equation} where \begin{equation} \hat{q}_{ij}^{(l)} = \left\{ \begin{array}{ll} \frac{f_{ij}^{(l)}}{\sum_{i=1}^{m}f_{ij}^{(l)}} & \quad \text{if } \sum_{i=1}^{m}f_{ij}^{(l)} \neq 0 \\ 0 & \quad \text{otherwise} \end{array} \right. \end{equation} Note that $\hat{q}_{ij}^{(l)} = 0$ if there is no observation such that the process is at state $j$ and then after $l$ steps it is at some state, which happens when state $j$ appears only at the last $l$ timesteps of the observed data sequence. Also, the stationary distribution $\mathbf{X}$ can be estimated from the observed data sequence as the proportion of the occurrence of each state in the sequence. That is: for each state $i$, our estimate of the corresponding entry in the stationary distribution is just the number of times state $i$ appears in our length-$n$ sequence divided by $n$. Let's denote such an estimate by $\hat{\mathbf{X}}$. Given the estimated transition matrices $\hat{Q}^{(1)}, \cdots, \hat{Q}^{(k)}$ and the estimated stationary distribution $\hat{\mathbf{X}}$, we can estimate the $\lambda_{l}$ parameters via solving the following linear programming problem: \begin{equation} \underset{\lambda}{\text{min}} \sum_{i=1}^{m} w_{i} \end{equation} \text{subject to} \begin{equation} \begin{bmatrix} w_{1} \\ w_{2} \\ \vdots \\ w_{m} \end{bmatrix} \ge \hat{\mathbf{X}} - \Big[ \hat{Q}^{(1)} \hat{\mathbf{X}} \enskip | \enskip \hat{Q}^{(2)} \hat{\mathbf{X}} \enskip | \cdots | \enskip \hat{Q}^{(k)} \hat{\mathbf{X}} \Big] \begin{bmatrix} \lambda_{1} \\ \lambda_{2} \\ \vdots \\ \lambda_{m} \end{bmatrix} , \end{equation} \begin{equation} \begin{bmatrix} w_{1} \\ w_{2} \\ \vdots \\ w_{m} \end{bmatrix} \ge - \hat{\mathbf{X}} + \Big[ \hat{Q}^{(1)} \hat{\mathbf{X}} \enskip | \enskip \hat{Q}^{(2)} \hat{\mathbf{X}} \enskip | \cdots | \enskip \hat{Q}^{(k)} \hat{\mathbf{X}} \Big] \begin{bmatrix} \lambda_{1} \\ \lambda_{2} \\ \vdots \\ \lambda_{m} \end{bmatrix} , \end{equation} \begin{equation} \forall i \in \{ 1, \cdots, m \}. \enskip w_{i} \ge 0, \end{equation} \begin{equation} \forall l \in \{ 1, \cdots, k \}. \lambda_{l} \ge 0, \end{equation} \begin{equation} \sum_{l=1}^{k} \lambda_{l} = 1 \end{equation} \section{Method for Assessing Model Fit} \noindent Now that we know how the model is defined and how the parameters are estimated (in the \textsf{markovchain} R package$^{4}$), in this section, we introduce how we assess the quality of the model fit, given a fitted higher-order Markov chain model. For each of the ten teams in the KBO league, we fit a $k^{th}$ order Markov chain on its data sequence of the outcomes of the recent 100 games, for $k = 1, \cdots, 13$. Here, the state space is $\{ ``W", ``D", ``L" \}$ where each state (in the listed order) represents win, draw, and loss, respectively. Each fitted object in the \textsf{markovchain} R package$^{4}$ returns the estimated $\lambda_{l}$ parameters, the estimated $Q^{(l)}$ matrices, and the estimated stationary distribution $\mathbf{X}$. We assess which value of $k$ has the corresponding $k^{th}$ order Markov chain model best describing the team's data sequence via the following procedure. For each team: \noindent\rule{8cm}{0.4pt} \begin{algorithmic} \State $tenGames \gets \text{Randomly choose 10 out of the 100 games in the team's data sequence}$ \For {$k \text{ in } \{1, \cdots, 13 \}$} \For {$game \text{ in } tenGames$} \For {$state \text{ in } \{ ``W", ``D", ``L" \}$} \State $p_{state} \gets P(game=state | \text{recent $k$ observations})$ computed via Equation (5) \enskip (We'll get $p_{W}, p_{D}, p_{L}$) \EndFor \State $predict \gets X \sim Categorical(p_{W}, p_{D}, p_{L})$ \EndFor \EndFor \State $team\_k\_acc \gets (\text{number of correct predictions}) / 10$ \end{algorithmic} \noindent\rule{8cm}{0.4pt} In words, for each team, we first randomly select 10 games out of the 100 present in the team's sequence. We examine across every value of $k$ (corresponding to the $k^{th}$ order Markov chain fitted to this team's sequence) via: \begin{enumerate} \item For each of the ten games, for each of the three possible states, compute the estimated probability that the game's outcome was that particular state given the recent $k$ observations, using Equation (5) and the estimated $\lambda_{l}$'s and the $Q^{(l)}$'s. \item Then, run a simulation from a Categorical distribution (which is essentially a generalization of the Bernoulli distribution where there can be more than two categories) that has three categories (``W'', ``D'', and ``L'') with the computed probabilities. The sampled outcome is our prediction on this game's result. Compare our prediction with the actual game outcome in the team's data sequence. \item Calculate the prediction accuracy: Out of the ten predictions, how many are correct? \end{enumerate} After this process, for each team, we have the prediction accuracy of each of the 13 values of $k$. We assess the fit of the $k^{th}$ order Markov chain model applied to this team's sequence via how high the prediction accuracy is. That is: for each team, we rank the 13 values of $k$ on how well the $k^{th}$ order Markov chain modeled, or described, the observed length-$100$ sequence of the team. \section{Results} \noindent Here we present the model fit results. For each team, We execute the process described in Section 4 and draw a barplot where each bar in the vertical axis represents each $k$ value, and the horizontal axis, of course, shows the prediction accuracy of the corresponding $k^{th}$ order Markov chain model fitted on that team's sequence. The barplots are shown in Figure 1. \begin{figure}[h!] \centering \includegraphics[width=16cm, height=20cm]{"barcharts".png} \caption{Model Fit Result for Each Team} \end{figure} Intuitively, if a particular value of $k$ has the highest prediction accuracy compared to the other values, we can think of it as: for predicting this team's performance of an arbitrary day, considering exactly the recent $k$ games' outcomes works the best, compared to any smaller or larger values of $k$. One of the most interesting patterns we can see in the plots that both Doosan Bears and SK Wyverns, which are the 1st and 2nd ranked teams in the league, have a skewed-to-the right shape (except that the lowest value $k=1$ has a low accuracy), meaning that lower values of $k$ tend to predict better than higher values. In particular, they both have $k = 2$ having the highest accuracy. This means that taking only the recent two matches into account best describes the team's performance in general. The 3rd ranked team Hanwha Eagles also has a right-skewed shape except that the accuracy rises again for the $k < 9$. So overall, the top 3 teams in the league tend to have their recent few games (say, one, two, and three) associated with the performance in a new game the most. Considering the fact that these teams all have relatively high winning rates, we could interpret this result as: a characteristic of the top teams is that once they have a good pace for a few recent games in a row, then it is likely that they will perform well again. On the other hand, once we additionally incorporate earlier games as well, the prediction tends to become poorer. The remaining seven teams (that is: the lower seven) tend to have a reasonably symmetric barplot, with the exception of Lotte Giants (7th ranked) that has a right-skewed shape. Such symmetries indicate that these teams don't have a particular value of $k$ such that the $k^{th}$ order Markov chain better models their outcomes compared to other orders. So either considering only the few recent games or taking more further past outcomes into account does not appear to have much difference in predicting the outcome. Perhaps one could think of this characteristic as: for these teams, the performance in recent games (regardless of which value $k$ takes out of $\{ 1, \cdots, 13 \}$) tend to not be influential to its performance today in the first place. \section{Discussions and Future Work} \noindent Through our results we saw that the top three teams in the KBO league (Doosan, SK, and Hanwha) has a common characteristic that overall, lower values of $k$ tend to have the $k^{th}$ order Markov chain to better model their outcome. On the other hand, the remaining teams except Lotte has a reasonably symmetric shape in their barplot, meaning there is not really a particular value of $k$ that works considerably well compared to other values. However, our analyses has limitations and thus potential future work that can improve the study. First, all of the interpretations were based on exploratory analyses. We plotted a barchart for each team, visually observing how well each value of $k$ did in terms of its $k^{th}$ order Markov chain predicting the match outcomes and thereby modeling the team's performance. We cannot make any formal conclusions at this point. To do so, we could utilize statistical tests on our data, but unfortunately, the size of our data is currently too small. For example, considering applying some kind of two-sample t-test for comparing the top-half teams and the bottom-half teams. We currently only have sample sizes of $n_{1} = 5$ and $n_{2} = 5$. One way of obtaining a larger sample would be to look for the observations in the past years of the KBO league: we go through each year's data, include the top-half ranked teams' sequences into group 1, and the other teams' sequences into group 2. This approach does have a risk that we have to assume that observations across different years for the same team are independent (by definition of the t-test). That is: we have to assume that the 2018 edition of Doosan Bears is independent of the 2017 edition of Doosan Bears, which, according to common sense, is not really a valid assumption to make. Another way would be to incorporate data of other leagues such as the MLB and the NPB since those leagues also have the characteristic that each team plays a game nearly everyday. In addition, given the task to predict a team's game outcome, depending solely on the team's recent game results is perhaps an oversimplification of the task. Common sense tells us that there are other numerous factors that affect a team's performance in a game: e.g. the team's winning rate against the opponent in this season, statistics regarding the starting pitchers' of both teams, whether it's a home game or an away game, etc. So we could perhaps use a classical regression / classification model such as linear regression, support vector machines, deep neural networks, etc where we include those canonical features and additionally the result of the recent $k$ games as the predictors. Furthermore, if we want to stay with higher-order Markov chains but gain better modeling, we could consider using the higher-order multivariate Markov chains, where we are given $s$ separate categorical sequences that all have the same state space, instead of just one. The $k^{th}$ order multivariate Markov chain model says that the probability distribution (across the $m$ states) for the $j^{th}$ sequence at an arbitrary timestep depends on the probability distributions of all the $s$ sequences, including its own, at the recent $k$ timesteps. This model is also implemented in the the \textsf{markovchain} R package$^{4}$. In our study, this model can be utilized in such a way that given an arbitrary baseball game between two teams, the data sequence for both teams (so total $s=2$ sequences) are incorporated to better model the game result. That is: we consider the recent trend, or flow, of both of the two competing teams. \newpage
{ "attr-fineweb-edu": 2.484375, "attr-cc_en_topic": 0, "domain": "arxiv" }
BkiUdYw5qg5A57oeAi4t
\section{The 150m Showdown} The 150m race will be run as a 50m curve + 100m straight. While the exact configuration has yet to be decided, the best choice would be to have a curve of fairly large radius (larger than for regular indoor tracks). From floor plans for the event, it seems as if the curve will be close in size to those of outdoor tracks. I'll assume that the race would be run in the equivalents of lane 3 and 4. While the model does not account for wind assistance (Bailey's +0.7 m/s, and Johnson's +0.4 m/s), it's reasonable to assume that each athlete is stronger than last year and capable of moving slightly faster. Using our model, we can take a stab at how Bailey might be able to handle this race. The value of $p$ we should use is still up in the air, but it seems likely that it won't be small. After all, Bailey is a 100m specialist, and wouldn't handle turns very well. So, we'll take $p$ to be either 0.6, 0.7, or at the worst, 0.8 (these are the squares of the percentage of centrifugal forces felt, which would mean we're roughly considering between 75\% and 90\% of the force). Michael Johnson's 50m split in his Olympic 200m final was about 6.4s, so assuming similar conditions, Bailey clearly leads off the curve. After this point, it's hard to determine how Johnson would handle the straight. His split of 10.12s was run on a complete turn, so we could guess that he could shave off about 0.05s. This would put his Skydome split at about 10.07s (slightly quicker than his 1994 PB). Continuing this logic, let's guess that he'd be able to hold a greater speed over the straight, and clock in between 0.05--0.10s faster than his Atlanta 150m split of 14.83s. Since Johnson is probably a more consistent curve runner, we could assume that his time doesn't vary more than 0.01s from lane to lane. If this is the case, then Johnson could optimistically clock between 14.73s -- 14.78s on June 1st. The results of the model runs for Bailey are listed in Tables~\ref{3db150},\ref{4db150}, and are broken up into 50m splits. Since the model calculates ``raw'' race times, a reaction time must be added on (I've assumed a +0.170s reaction, similar to Atlanta). The ``final'' race times are in the last column of the tables. For the best Bailey guess ($p=0.60$), he takes Johnson regardless of lane choice (assuming 14.73s is an overestimate for Johnson). However, for larger $p$, which may be more realistic, Bailey's lane choice starts to become crucial. That is, for $p=0.70$, he can only win if he's assigned the outside lane. In the worst case, Bailey gets edged out, drained from fighting the curve. \section{How about 200m?} This model could also be used to predict possible 200m times which Bailey might be able to run. His PBs are recorded as 20.76s, and 20.39s (wind--assisted), both in 1994. Over the span of 3 years, it's most likely the case that his overall endurance has increased, and that he would be capable of running in the range of 20.30s (again, bearing in mind that his training is as a 100m specialist). Tables~\ref{200outdoors} and ~\ref{200indoors} show predictions for outdoor and indoor races, respectively. Assuming the target range described above, then outdoor $p$ values of 0.50 -- 0.70 provide realistic estimates (20.28s -- 20.58s). Meanwhile, for indoor venues (Table~\ref{200indoors}), lower values of $p$ give believable times. These are slower than the outdoor predictions, as one might expect, yet still within the grasp of the 100m champion (around 20.47s--20.86s). The higher values of $p$ for the indoor track give what are certainly inaccurate times, and hardly world class. Recall, though, that the centrifugal force depends on $1/R$; a sprinter traveling at the same speed over a radius half as big feels twice the force. Since $p$ is the square of the percentage of force felt, then a ratio of 4:1 for outdoor:indoor values of $p$ seems reasonable. This is why tracks are banked, so the sprinter doesn't have to expend too much energy to compensate for the curve (unless the race is run in Sherbrooke, where the sprinter must fight not to fall into the center!). \section{And the winner is...} So, what is the end results of all this? Will Bailey win the 150m showdown, or will Johnson? It all depends on how each handles the turn, their lane assignments, and their reaction times (which, in essence, are the factors that determine the winner in any race!). Realistically, people aren't machines that abide by equations, so the model doesn't pretend to say how Bailey will {\it definitely} run. What it does show, though, is that the race is not a clear--cut victory by either party: it literally could come right down to the wire. If each performs at their Atlanta prime (or better), then I, for one, will be on the edge of my seat at Skydome for those 14.?? seconds! What will the victory signify? Should Johnson prevail, would he usurp Bailey's title of World's Fastest Man? In my opinion, no. Bailey won the traditional event to claim the title, set a world record, and achieved a higher speed than Johnson. American sour--grapes aside, this spectacle will only serve to show who would win over 150m. And truthfully, the winners will be Bailey {\it and} Johnson, who will walk from Skydome a combined \$1.5M richer than they were the day before. Canadian Track and Field will win, because the event will hopefully regenerate significant interest in the sport. Finally, the audience will win, because they will be treated to a magnificent race between two of history's greatest sprinters. \pagebreak \begin{table} \begin{center} {\begin{tabular}{|l||c c c c c c c c c c|}\hline Split&10m&20&30&40&50&60&70&80&90&100m \\ \hline\hline Speed&9.32&10.95&11.67&11.99&12.10&12.10&11.99&11.85&11.67&11.47 \\ \hline Raw&1.89&2.90&3.79&4.64&5.47&6.29&7.12&7.96&8.81&9.67 \\ \hline $+$reaction& 2.06&3.07&3.96&4.81&5.64&6.46&7.29&8.13&8.98&9.84 \\ \hline Official&1.9&3.1&4.1&4.9&5.6&6.5&7.2&8.1&9.0&9.84 \\ \hline \end{tabular}} \end{center} \caption{Predicted splits (s) and speed (m/s) compared with official for Bailey's 100m final in Atlanta. Reaction time is rounded to $+$0.17s.} \label{100splits} \end{table} \begin{table} \begin{center} {\begin{tabular}{|c|c|c|c|c|}\hline $p$&$t_{50}$&$t_{100}$&$t_{150}$&$t_{150}+0.170$\\ \hline 0.60& 5.62 & 9.95 & 14.57 & 14.74 \\ \hline 0.70 & 5.64 & 9.99 & 14.61 & 14.78 \\ \hline 0.80 & 5.67 & 10.03 & 14.66 & 14.83 \\ \hline \end{tabular}} \end{center} \caption{Bailey's predicted Skydome 150m times for various values of $p$, assuming race is run in lane 3.} \label{3db150} \end{table} \begin{table} \begin{center} {\begin{tabular}{|c|c|c|c|c|}\hline $p$&$t_{50}$&$t_{100}$&$t_{150}$&$t_{150}+0.170$\\ \hline 0.60& 5.61 & 9.93 & 14.54 & 14.71 \\ \hline 0.70 & 5.63 & 9.97 & 14.59 & 14.76 \\ \hline 0.80 & 5.65 & 10.00 & 14.63 & 14.80 \\ \hline \end{tabular}} \end{center} \caption{Bailey's predicted Skydome 150m times for various values of $p$, assuming race is run in lane 4}. \label{4db150} \end{table} \begin{table} \begin{center} {\begin{tabular}{|c|c c|c c| c| c| c|}\hline $p$&$t_{50}$&$v_{50}$&$t_{100}$&$v_{100}$&$t_{150}$&$t_{200}$&$t_{200}+0 .15$\\ \hline 0.25 & 5.53 & 11.74 & 9.89 & 11.03 & 14.56 & 19.81 & 19.96 \\ \hline 0.36 & 5.55 & 11.60 & 9.98 & 10.85 & 14.69 & 19.96 & 20.11 \\ \hline 0.50 & 5.59 & 11.43 & 10.09 & 10.65 & 14.84 & 20.13 & 20.28 \\ \hline 0.60 & 5.61 & 11.31 & 10.16&10.51&14.93&20.24&20.39 \\ \hline 0.70 & 5.63 & 11.20 & 10.24 & 10.39 & 15.09 & 20.43 & 20.58 \\ \hline \end{tabular}} \end{center} \caption{Bailey's predicted outdoor 200m times, as run in lane 4.} \label{200outdoors} \end{table} \begin{table} \begin{center} {\begin{tabular}{|c|c|c|c|c|c|}\hline $p$&$t_{50}$&$t_{100}$&$t_{150}$&$t_{200}$&$t_{200}+0.15$\\ \hline 0.20& 5.62 & 9.91 & 14.88 & 20.32 & 20.47 \\ \hline 0.30& 5.68 & 10.01 & 15.17 & 20.71 & 20.86 \\ \hline 0.40& 5.75 & 10.13 & 15.43 & 21.05 & 21.20 \\ \hline 0.50& 5.81 & 10.22 & 15.67 & 21.37 & 21.52 \\ \hline 0.60& 5.88 & 10.32 & 15.91 & 21.68 & 21.83 \\ \hline 0.70& 5.94 & 10.42 & 16.13 & 21.97 & 22.12 \\ \hline 0.80& 5.99 & 10.50 & 16.33 & 22.23 & 22.38 \\ \hline \end{tabular}} \end{center} \caption{Bailey's predicted indoor 200m times, as run in lane 4.} \label{200indoors} \end{table} \end{document}
{ "attr-fineweb-edu": 2.181641, "attr-cc_en_topic": 0, "domain": "arxiv" }
BkiUdag5qhDBPRyD_Gv6
\section{Introduction} \label{sec:intro} Soccer is undoubtedly the {\em king of sports}, with approximately 4 billion global following \cite{worldatlas}. However, despite this huge global interest it still lags behind with respect to advanced quantitative analysis and metrics capturing teams' and players' performance as compared to other sports with much smaller fan base (e.g., baseball, basketball). Traditionally sports metrics quantify on-ball events. However, soccer epitomizes the notion of team sports through a game of space and off-ball movement. In soccer every player has possession of the ball an average of only 3 minutes \cite{fernandez2018wide}, and hence, metrics that quantify on-ball events will fail to capture a player's influence on the game. Expected goals ({{\tt xG}}) \cite{lucey2015quality,fairchildspatial} is probably the most prominent, advanced metric used in soccer today. {{\tt xG}} takes into account the context of a shot (e.g., location, number of defenders in the vicinity etc.) and provides us with the probability of the shot leading to a goal. {{\tt xG}} allows us to statistically evaluate players. For example, if a player is over-performing his expected goals, it suggests that he is either lucky or an above-average finisher. If this over-performance persists year-after-year then the latter will be a very plausible hypothesis. Nevertheless, while expected goals represent a straightforward concept and has been already used by mainstream soccer broadcast media, its application on evaluating players is still limited to a specific aspect of the game (i.e., shot taking) and only to players that actually take shots (and potentially goalkeepers). A more inclusive version of {{\tt xG}}, is the Expected Goal Chains ({{\tt xGC}}) \cite{xGC}. {{\tt xGC}} considers all passing sequences that lead to a shot and credits each player involved with the expected goal value for the shot. Of course, not all passes are created equally \cite{Power:2017:PCE:3097983.3098051} and hence, {{\tt xGC}} can over/under estimate the contribution of a pass to the final shot. The last few years player tracking technology has started penetrating the soccer industry. During the last world cup in Russia, teams obtained player tracking data in real time \cite{economist-worldcup}! The availability of fine-grained spatio-temporal data have allowed researchers to start looking into more detailed ways to evaluate soccer players through their movement in space. For example, Hoang {\em et al.} \cite{le2017coordinated,le2017data} developed a deep imitation learning framework for identifying the {\em optimal} locations - i.e., the ones that minimize the probability of conceding a goal - of the defenders in any given situation based on the locations of the attackers (and the other defensive players). Fernandez and Bornn \cite{fernandez2018wide} also analyzed player tracking data and developed a metric quantifying the contribution of players in space creation as well as, this space's value, while a nice overview of the current status of advanced spatio-temporal soccer analytics is provided by Bornn {\em et al.} \cite{doi:10.1111/j.1740-9713.2018.01146.x}. Player tracking data will undoubtedly provide managers, coaches and players with information that previously was considered to be {\em intangible}, and revolutionize soccer analytics. However, to date all of the efforts are focused on specific aspects of the game. While in the future we anticipate that a manager will be able to holistically evaluate the contribution of a player during a game over a number of dimensions (e.g., space generation, space coverage, expected goals etc.), currently this is not the case. Not to mention that player tracking technology is still slow in widespread adoption. Therefore, it has been hard to develop soccer metrics similar to Win Shares and/or Wins Above Replacement Player that exist for other sports (e.g., baseball, basketball etc.) \cite{james2002win,vorp}. These - all-inclusive - metrics translate on field performance to what managers, coaches, players and casual fans can understand, relate to and care about, i.e., wins. Our study aims at filling exactly this gap in the existing literature discussed above. The first step towards this is quantifying the positional values in soccer. For instance, how much more important are the midfielders compared to the goalkeeper when it comes to winning a game? In order to achieve this we use data from games from 11 European leagues as well as FIFA ratings for the players that played in these games. These ratings have been shown to be able to drive real-world soccer analytics studies \cite{cotta2016using}, they account for a variety of factors (e.g., player aging) and they are easy to obtain\footnote{Data and code are available at: \url{https://github.com/kpelechrinis/eLPAR-soccer}.}. Using these ratings we model the final goal differential of a game through a Skellam regression that allows us to estimate the impact of 1 unit of increase of the FIFA rating for a specific position on the probability of winning the game. As we will elaborate on later, to avoid any data sparsity problems (e.g., very few team play with a sweeper today), we group positions in the four team lines (attack, midfield, defense and goalkeeping) and use as our model's independent variables the difference on the average rating of the corresponding lines. Using this model we can then estimate the {\bf expected} league points added above replacement ({{\tt eLPAR}}) for every player. The emphasis is put on the fact that this is the expected points added from a player, since it is based on a fairly static, usually pre-season\footnote{FIFA ratings change a few times over the course of a season based on the overall player's performance.}, player rating, and hence, does not capture the exact performance of a player in the games he played. However, when we describe our model in detail it should become evident that if these data (i.e., game-level player ratings) are available the exact same framework can be used to evaluate the actual league points added above replacement from every player. The contribution of our work is twofold: \begin{enumerate} \item We develop a pre-game win probability model for soccer that is accurate and well-calibrated. More importantly it is based on the starting lineups of the two teams and hence, it can account for personnel changes between games. \item We develop the expected league points added above replacement ({{\tt eLPAR}}) metric that can be used to identify positional values in soccer and facilitate quantitative (monetary) player valuation in a holistic way. \end{enumerate} The rest of the paper is organized as follows. Section \ref{sec:method} describes the data we used as well as the Skellam regression model we developed for the score differential and its evaluation. Section \ref{sec:moneyball} further details the development of our expected league points added above replacement using the Skellam regression model. In this section we also discuss the implications for the players' transfer market. Finally, Section \ref{sec:discussion} concludes our work, while also discussing future directions for further improvements of our framework. \section{Data and Methods} \label{sec:method} In this section we will present the data that we used for our analysis, existing modeling approaches for the goal differential in a soccer game, as well as, the Skellam regression model we used. Table \ref{tab:notations} summarizes some of the notations that we are going to use throughout the paper. \begin{table}[htbp] \begin{center \begin{tabular}{r c p{6.5cm} } \toprule $X$ & $\triangleq$ & Goals scored by the home team\\ $Y$ & $\triangleq$ & Goals scored by the visiting team\\ $Z$ & $\triangleq$ & $X-Y$\\ $p$ & $\triangleq$ & Individual player \\ $\pi$ & $\triangleq$ & On field position \\ $\Pi$ & $\triangleq$ & Set of all on field positions \\ $r_{p}$ & $\triangleq$ & FIFA rating for player $p$\\ $\phi$ & $\triangleq$ & On field team formation \\ $v_{p}$ & $\triangleq$ & Market value for player $p$ \\ $c_{p}$ & $\triangleq$ & Cost per 1 league point paid for player $p$\\ $w_{p}$ & $\triangleq$ & (Monthly) Wage for player $p$ \\ \bottomrule \end{tabular} \vspace{0.1in} \caption{Notations used throughout the study} \label{tab:notations} \end{center} \end{table} \subsection{Soccer Dataset} \label{sec:data} In our study we make use of the Kaggle European Soccer Database \cite{kaggle-data}. This dataset includes all the games (21,374 in total) from 11 European leagues\footnote{English Premier League, Bundesliga, Serie A, Scotish Premier League, La Liga, Swiss Super League, Jupiler League, Ligue 1, Eredivisie, Liga Zon Sagres, Ekstraklasa.} between the seasons 2008-09 and 2015-16. For every game, information about the final result as well as the starting lineups are provided. There is also temporal information on the corresponding players' ratings for the period covered by the data. A player's $p$ rating takes values between 0 and 100 and includes an overall rating $r_{p}$, as well as {\em sub-ratings} for different skills (e.g., tackling, dribbling etc.). There are 11,060 players in totals and an average of 2 rating readings per season for every player. One of the information that we need for our analysis and is not present in the original dataset, is the players' position and his market value. We collected this information through FIFA's rating website (\url{www.sofifa.com}) for all the players in our dataset. The goals scored in a soccer game have traditionally been described through a Poisson distribution \cite{lee1997modeling,karlis2000modelling}, while a negative binomial distribution has also been proposed to account for possible over-dispersion in the data \cite{pollard198569,greenhough2002football}. However, the over-dispersion, whenever observed is fairly small and from a practical perspective does not justify the use of the negative binomial for modeling purposes considering the trade-off between complexity of estimating the models and improvement in accuracy \cite{karlis2000modelling}. In our data, we examined the presence of over-dispersion through the Pearson chi-squared dispersion test. We performed the test separately for the goals scored from home and away teams and in both cases the dispersion statistic is very close to 1 (1.01 and 1.1 respectively), which allows us to conclude that a Poisson model fits better for our data. Another important modeling question is the dependency between the two Poisson processes that capture the scoring for the two competing teams. In general, the empirical data exhibit a small correlation (usually with an absolute value for the correlation coefficient less than 0.05) between the goals scored by the two competing teams and the use of Bivariate Poisson models has been proposed to deal with this correlation \cite{karlis2003analysis}. Simple put, $(X,Y)\sim BP(\lambda_1, \lambda_2, \lambda_3)$, where: \begin{equation} P(X=x, Y=y) = e^{-(\lambda_1+\lambda_2+\lambda_3)}\dfrac{\lambda_1^x}{x!}\dfrac{\lambda_2^y}{y!} \sum_{k=0}^{\min (x,y)} \binom{x}{k} \binom{y}{k} k! \bigg(\dfrac{\lambda_3}{\lambda_1 \lambda_2}\bigg)^k \label{eq:bpois} \end{equation} The parameter $\lambda_3$ captures the covariance between the two marginal Poisson distributions for $X$ and $Y$, i.e., $\lambda_3 = Cov(X,Y)$. In our data, the correlation between the number of goals scored from the home and away team is also small and equal to -0.06. While this correlation is small, Karlis and Ntzoufras \cite{karlis2003analysis} showed that it can impact the estimation of the probability of a draw. However, a major drawback of the Bivariate Poisson model is that it can only model data with positive correlations \cite{karlis2005bivariate}. Given that in our dataset the correlation is negative, and hence, a Bivariate Poisson model cannot be used, an alternative approach is to directly model the difference between the two Poisson processes that describe the goals scored for the two competing teams. With $Z$, $X$ and $Y$ being the random variables describing the final score differential, the goals scored from the home team and the goals scored from the away team respectively, we clearly have $Z=X-Y$. With $(X,Y)\sim BP(\lambda_1,\lambda_2,\lambda_3)$, $Z$ has the following probability mass function \cite{skellam1946frequency}: \begin{equation} P(z) = e^{\lambda_1 + \lambda_2}\cdot \bigg(\dfrac{\lambda_1}{\lambda_2}\bigg)^{z/2}\cdot I_z(2~ \sqrt[]{\lambda_1\lambda_2}) \label{eq:skellam} \end{equation} where $I_r(x)$ is the modified Bessel function. Equation (\ref{eq:skellam}) describes a Skellam distribution and clearly shows that the distribution of $Z$ does not depend on the correlation between the two Poisson distributions $X$ and $Y$. In fact, Equation (\ref{eq:skellam}) is exactly the same as the distribution of the difference of two independent Poisson variables \cite{skellam1946frequency}. Therefore, we can directly model the goal differential without having to explicitly model the covariance. Of course, the drawback of this approach is that the derived model is not able to provide estimates on the actual game score, but rather only on the score differential. Nevertheless, in our study we are not interested in the actual score but rather in the win/lose/draw probability. Hence, this does not pose any limitations for our work. \subsection{Skellam Regression Model} \label{sec:skellam_reg} Our objective is to quantify the value of different positions in soccer. This problem translates to identifying how an one-unit increase in the rating of a player's position impacts the probability of his team winning. For instance, if we substitute our current striker who has a FIFA rating of 79, with a new striker with a FIFA rating of 80, how do our chances of winning alter? Once we have this information we can obtain for every player an expected league points added per game over a reference, i.e., replacement, player (Section \ref{sec:elpar}). This can then be used to obtain a more objective market value for players based on their position and rating (Section \ref{sec:mv}). \begin{figure}[t]% \centering \includegraphics[width=7cm]{plots/soccer-positions} % \caption{We grouped player positions to four distinct groups, namely, goalkeeping, attack, midfielders and defense.}% \label{fig:positions}% \vspace{-0.1in} \end{figure} In order to achieve our goal we model the goal differential $Z$ of a game using as our independent variables the player/position ratings of the two teams that compete. Hence, our model's dependent variable is the goal differential (home - away) of game $i$, $z_i$, while our independent variables are the positional rating differences of the two teams, $x_{i,\pi}=r_{p(h,\pi,i)}-r_{p(a,\pi,i)},~\forall \pi \in \Pi$, where $r_{p(h,\pi,i)}$ ($r_{p(a,\pi,i)}$) is the rating of the home (away) team player that covers position $\pi$ during game $i$ and $\Pi$ is the set of all soccer positions. One of the challenges with this setting is the fact that different teams will use different formations and hence, it can be very often the case that while one team might have 2 center backs and 2 wing backs, the other team might have 3 center backs only in its defensive line. This will lead to a situation where the independent variables $x_{i,\pi}$ might not be well-defined. While this could potentially be solved by knowing the exact formation of a team (we will elaborate on this later), this is unfortunately a piece of information missing from our data. Nevertheless, even this could create data sparsity problems (e.g., formation/player combinations that do not appear often). Hence, we merge positions to four groups, namely, attacking line, midfielders, defensive line and goalkeeping. Figure \ref{fig:positions} depicts the grouping of the positions we used to the four lines $L = \{l_{D},l_{M},l_{A},l_{GK}\}$. Note that this grouping in the four lines has been used in the past when analyzing soccer players as well \cite{he2015football}. The independent variables of our model are then the differences in the average rating of the corresponding lines. The interpretation of the model slightly changes now, since the independent variable captures the rating of the whole line as compared to a single position/player. Under this setting we fit a Skellam regression for $Z$ through maximum likelihood estimation. In particular: \begin{mydefinition}{Final Goal Differential}{mod:skellam} We model the goal differential $Z_i$ of game $i$ using the following four co-variates: \begin{itemize} \item The difference between the average player rating of the defensive line of the two teams $x_{D}$ \item The difference between the average player rating of the midfielders of the two teams $x_{M}$ \item The difference between the average player rating of the attacking line of the two teams $x_{A}$ \item The difference between the goalkeeper's rating of the two teams $x_{GK}$ The random variable $Z$ follows a Skellam distribution, where its parameters depend on the model's covariates $\mathbf{x} = (x_{D},x_{M},x_{A},x_{GK})$: \begin{eqnarray} Z \sim Skellam(\lambda_1,\lambda_2)\\ \log(\lambda_1) = \mathbf{b}_1^T \cdot \mathbf{x} \\ \log(\lambda_2) = \mathbf{b}_2^T \cdot \mathbf{x} \end{eqnarray} \end{itemize} \end{mydefinition} Table \ref{tab:skellam_reg} shows the regression coefficients. It is interesting to note that the coefficients for the two parameters are fairly symmetric. $\lambda_1$ and $\lambda_2$ can be thought of as the mean of the Poisson distributions describing the home and visiting team respectively and hence, a positive relationship between an independent variable and the average goals scored for one team corresponds - to an equally strong - negative relationship between the same variable and the average goals scored for the opposing team. An additional thing to note is that an increase on the average rating of any line of a team contributes positively to the team's chances of winning (as one might have expected). Finally, having the distribution for the random variable $Z$, we can estimate the win, loss home probability, as well as, the draw probability as: $\Pr[Home Wins] = \Pr[Z>0]$, $\Pr[Home Wins] = \Pr[Z<0]$ and $\Pr[Home Wins] = \Pr[Z=0]$ respectively. \begin{table}[ht]\centering \begin{tabular}{c c c } \toprule \textbf{Variable} & \textbf{$\log(\lambda_1)$} & \textbf{$\log(\lambda_2)$} \\ \midrule Intercept & 0.37*** & 0.07*** \\ & (0.012) & (0.015) \\ $x_{D}$ & 0.02*** & -0.03*** \\ & (0.01) & (0.002) \\ $x_{M}$ & 0.02*** & -0.015*** \\ & (0.01) & (0.002) \\ $x_{A}$ & 0.01***& -0.01*** \\ & (0.001) & (0.001) \\ $x_{GK}$ & 0.001& -0.004** \\ & (0.001) & (0.002) \\ \midrule N & 21,374 & 21,374 \\ \bottomrule \addlinespace[1ex] \multicolumn{3}{l}{\textsuperscript{***}$p<0.01$, \textsuperscript{**}$p<0.05$, \textsuperscript{*}$p<0.1$} \end{tabular} \caption{Skellam regression coefficients} \label{tab:skellam_reg} \end{table} \iffalse \begin{table}[ht]\centering \begin{tabular}{c c c } \toprule \textbf{Variable} & \textbf{$\log(\lambda_1)$} & \textbf{$\log(\lambda_2)$} \\ \midrule Intercept & 0.41*** & 0.13*** \\ & (0.006) & (0.006) \\ $x_{D}$ & 0.02*** & -0.02*** \\ & (0.01) & (0.002) \\ $x_{M}$ & 0.02*** & -0.02*** \\ & (0.01) & (0.001) \\ $x_{A}$ & 0.01***& -0.01*** \\ & (0.001) & (0.001) \\ $x_{GK}$ & 0.001& -0.002** \\ & (0.001) & (0.001) \\ \midrule N & 21,374 & 21,374 \\ \bottomrule \addlinespace[1ex] \multicolumn{3}{l}{\textsuperscript{***}$p<0.01$, \textsuperscript{**}$p<0.05$, \textsuperscript{*}$p<0.1$} \end{tabular} \caption{Skellam regression coefficients} \label{tab:skellam_reg} \end{table} \fi Before using the model for estimating the expected league points added above replacement for each player, we examine how good the model is in terms of actually predicting the score differential and the win/draw/lose probabilities. We use an 80-20 split for training and testing of the model. We begin our evaluation by calculating the difference between the goal differential predicted by our model and the actual goal differential of the game \cite{10.2307/2684286}. Figure \ref{fig:model_eval} (top) presents the distribution of this difference and as we can see it is centered around 0, while the standard deviation is equal to 1.6 goals. Furthermore, a chi-squared test cannot reject the hypothesis that the distribution is normal with mean equal to 0 and a standard deviation of 1.6. However, we would like to emphasize here that the most important aspect of the model is the probability output and rather the accuracy of predicting the game outcome. Inherently game outcomes include uncertainty and we want our model's probability output to capture this. For instance, let us consider two models, $M_1$ and $M_2$ that both predict the home team to win (i.e., a home team win is the most probable among the three possible outcomes). $M_1$ assigns a home win probability of 0.4, while $M_2$ assigns a home win probability of 0.7. Assuming that the home team wins both have the same accuracy, however it should be clear that they cannot be both accurate in terms of the assign probability. For developing a metric that captures the contribution of a player to his team's win chances, we need a model that provides us with accurate win/loss/draw probabilities. As we will see in Section \ref{sec:elpar} we will use the changes in these probabilities to calculate an expected league points added for every player based on their position and rating. Hence, we need to evaluate how accurate and well-calibrated these probabilities are. This can be evaluated through the probability calibration curves \cite{weisheimer2014reliability}. A calibration curve presents on the horizontal axis the predicted probability and on the vertical axis the observed probability. More specifically, in order to build the probability calibration curve of a binary classifier we group the test data based on the predicted probability $\pi_{pred}$ of belonging to class ``1''. Then for each of these groups we calculate the fraction of the test data points that were indeed of class ``1'', which is the observed probability $\pi_{obs}$. Ideally we should have $\pi_{pred}=\pi_{obs}$. Figure \ref{fig:model_eval} (bottom) presents the probability calibration curves for our Skellam regression model. Given that we have 3 possible results (i.e., win, loss and draw), we present three curves from the perspective of the home team, that is, a home team win, loss or draw. The $x$-axis presents the predicted probability for each event, while the $y$-axis is the observed probability. In particular we quantize the data in bins of 0.05 probability range, and for all the games within each bin we calculate the fraction of games for which the home team won/lost/draw, and this is the observed probability. To reiterate, we would like to have these two numbers being equal. Indeed, as we can see for all 3 events the probability output of our model is very accurate, that is, all lines are practically on top of the $y=x$ line. It is interesting to note, that our model does not provide a draw probability higher than 30\% for any of the games in the test set, possibly due to the fact that the base rate for draws in the whole dataset is about 25\%. \begin{figure}% \centering \includegraphics[width=6.5cm]{plots/prediction-error} % \includegraphics[width=6.5cm]{plots/calibration} % \caption{Our model is accurate in predicting the score differential as well as the win/loss/draw probabilities of a soccer game.}% \label{fig:model_eval}% \vspace{-0.1in} \end{figure} \section{eLPAR and Market Value} \label{sec:moneyball} We begin by defining the notion of a replacement player and developing {{\tt eLPAR}}. We also show how we can use {{\tt eLPAR}} to obtain {\em objective} player and transfer fee (monetary) valuations. \subsection{Replacement Player and Expected League Points Added} \label{sec:elpar} The notion of replacement player was popularized by Keith Woolner \cite{woolner2002understanding} who developed the Value Over Replacement Player (VORP) metric for baseball. The high level idea is that player talent comes at different levels. For instance, there are superstar players, average players and subpar player talent. These different levels come in different proportions within the pool of players, with superstars being a scarcity, while subpar players (what Woolner termed replacement players) being a commodity. This essentially means that a team needs to spend a lot of money if it wants to acquire a superstar, while technically a replacement player comes for free. Since a replacement player can be thought of as a {\em free} player, a good way to evaluate (and consequently estimate a market value for) a player is to estimate the (expected) contribution in wins, points etc. that he/she offers above a replacement player. One of the main contributions of Woolner's work is to show that average players have value \cite{vorp}! Hence, if we were to use the average player as our reference for evaluating talent, we would fail to recognize the value of average playing time. Nevertheless, replacement level, even though it is important for assigning economic value to a player, it is a less concrete mathematical concept. There are several ways that have been used to estimate the replacement level. For example, one can sort players (of a specific position) in decreasing order of their contract value and obtain as replacement level the talent at the bottom 20th percentile \cite{winston2012mathletics}. What we use for our study is a {\em rule-of-thumb} suggested from Woolner \cite{vorp2}. In particular, the replacement level is set at the 80\% of the positional average rating. While the different approaches might provide slightly different values for a replacement player, they will not affect the relative importance of the various positions identified by the model. In our case the replacement levels for all lines are very close to each other and around a rating of 56. So the question now becomes how are we going to estimate the expected league points added above replacement (${\tt eLPAR}$) given the model from Section \ref{sec:skellam_reg} and the replacements levels of each line. First let us define ${\tt eLPAR}$ more concretely: \begin{mydefinition2}{{\tt eLPAR}}{def:elpar} Consider a game between teams with only replacement players. Player $p$ substitutes a replacement player in the lineup. ${\tt eLPAR}_{p}$ describes how many league points (win=3 points, draw = 1 point, loss = 0 points) $p$ is expected to add for his team. \end{mydefinition2} \iffalse \begin{figure*}% \centering \includegraphics[width=4cm]{plots/defense} % \includegraphics[width=4cm]{plots/middlefield} % \includegraphics[width=4cm]{plots/attack} % \includegraphics[width=4cm]{plots/gk} % \caption{The replacement level rating (green vertical line) for each one of the positional lines in soccer is around 56.}% \label{fig:ratings}% \end{figure*} \fi Based on the above definition, ${\tt eLPAR}_{p}$ can be calculated by estimating the change in the win/draw/loss probability after substituting a replacement player with $p$. However, the win probability model aforementioned does not consider individual players but rather lines. Therefore, in order to estimate the expected points to be added by inserting player $p$ in the lineup we have to consider the formation used by the team. For example, a defender substituting a replacement player in a 5-3-2 formation will add a different value of expected points as compared to a formation with only 3 center-backs in the defensive line. Therefore, in order to estimate ${\tt eLPAR}_{p}$ we need to specify the formation we are referring to. Had the formation been available in our dataset we could have built a multilevel model, where each combination of position and formation would have had their own coefficients\footnote{And in this case we would also be able to analyze better the impact of positions within a line (e.g., value of RB/LB compared to CB).}. Nevertheless, since this is not available our model captures the formation-average value of each line. In particular, ${\tt eLPAR}_{p}$ for player $p$ with rating $r_{p}$ can be calculated as following: \begin{enumerate} \item Calculate the increase in the average rating of the line $l \in L$ when $p$ substituted the replacement player based on $r_{p}$, formation $\phi$ and the replacement player rating for the line $r_{replacement,\phi,l}$ \item Calculate, using the win probability model above, the change in the win, loss and draw probability ($\delta P_w$, $\delta P_d$ and $\delta P_l$ respectively) \item Calculate ${\tt eLPAR}_{p}(\phi)$ as: \begin{equation} {\tt eLPAR}_{p}(\phi) = 3\cdot \delta P_w + 1\cdot \delta P_d \label{eq:elpar} \end{equation} \end{enumerate} It should be evident that by definition a replacement player has ${\tt eLPAR} = 0$ - regardless of the formation - while if a player has rating better than a replacement, his ${\tt eLPAR}$ will be positive. However, the actual value and how it compares to players playing in different positions will depend on the formation. In Figure \ref{fig:elpar_formations} we present the expected league points added per game for players with different ratings (ranging from 50 to 99) and for different formations. While there are several different formations that a team can use, we chose 4 of the most often used ones. \begin{figure}% \centering \includegraphics[width=4.2cm]{plots/4-4-2} % \includegraphics[width=4.2cm]{plots/4-5-1} % \includegraphics[width=4.2cm]{plots/3-5-2} % \includegraphics[width=4.2cm]{plots/4-3-3} % \includegraphics[width=8cm]{plots/allformations} % \caption{Expected league points added above replacement for different formations, player ratings and positions.}% \label{fig:elpar_formations}% \end{figure} One common pattern in all of the formations presented is the fact that for a given player rating goal keepers provide the smallest expected league points above replacement - which is in line with other studies/reports for the value of goal keepers in today's soccer \cite{economist-gk}. It is also evident that depending on the formation the different positions offer different {\em value}. For example, a 4-5-1 system benefits more from an attacker with a rating of 90 as compared to a defender with the same rating, while in a 3-5-2 formation the opposite is true. To reiterate this is an expected value added, i.e., it is not based on the actual performance of a player but rather on static ratings for a player. Given that teams play different formations over different games (or even during the same game after in-game adjustments), a more detailed calculation of ${\tt eLPAR}$ would include the fraction of total playing time spent by each player on a specific formation. With $T$ being the total number of minutes played by $p$, and $t_{\phi}$ the total minutes he played in formation $\phi$, we have: \begin{equation} {\tt eLPAR}_{p} = \dfrac{1}{T}\sum_{\phi} t_{\phi} \cdot{\tt eLPAR}_{p}(\phi) \label{eq:elpar_formation} \end{equation} The last row in Figure \ref{fig:elpar_formations} presents the average ${\tt eLPAR}$ for each position and player rating across all the four possessions (assuming equal playing time for all formations). As we can see for the same player rating, a defender adds more expected league points above replacement, followed by an attacker with the same rating. A middlefielder with the same rating adds only slightly less expected league points compared to an attacker of the same rating, while a goal keeper (with the same rating) adds the least amount of expected league points. A team manager can use this information to identify more appropriate targets given the team's style play (formations used) and budget. In the following section we will explore the relation between the market value of a player and his ${\tt eLPAR}$. \begin{figure*}[h]% \centering \includegraphics[width=5cm]{plots/mv-positions} \includegraphics[width=6cm]{plots/cost-point} % \includegraphics[width=6cm]{plots/cost-rating} \caption{Even though goalkeepers are among the lowest paid players in soccer, they still are overpaid in terms of expected league points contributions. Defenders are undervalued when it comes to contributions in winning.}% \label{fig:mv}% \end{figure*} \subsection{Positional Value and Player Market Value} \label{sec:mv} In this section we will explore how we can utilize {{\tt eLPAR}} to identify possible {\em inefficiencies} in the player's transfer market. In particular, we are interested in examining whether the transfer market overvalues specific positions based on the {{\tt eLPAR}} value they provide. Splitting the players into the four lines Figure \ref{fig:mv} (left) presents the average difference of the players' market value $v$ - i.e., the transfer fee paid from a team to acquire a player under contract - between different lines. Each cell represents the difference between the corresponding row and column position, while crossed out pairs correspond to non-statistically significant differences (at the 5\% significance level). As we can see, on average, defenders (first row) are the lowest paid players, bespite the fact that as aforementioned (Figure \ref{fig:elpar_formations}) for a given player rating, a defensive player provides the maximum {{\tt eLPAR}} value. Nevertheless, what we are really interested in is the monetary value that a team pays for 1 expected league point above replacement per player. Granted there is a different supply of players in different positions. For example, only 8.5\% of the players are goal keepers, as compared to approximately 35\% of defenders\footnote{There is another approximately 35\% of midfielders and 21\% of attackers.}, and hence, one might expect goalkeepers to be paid more than defenders. However, there is also smaller demand for these positions and hence, we expect these two to cancel out to a fairly great extend, at least to an extend that should not over-inflate the market values. By dividing the market value $v_{p}$ of a player with his ${\tt eLPAR}_{p}$ value, we obtain an estimate for the monetary cost $c_{p}$ that teams are willing to pay for obtaining 1 league point above replacement from this player (i.e., $c_{p}=\dfrac{v_{p}}{{\tt eLPAR}_{p}}$). Given that 1 league point is worth the same in terms of league standings regardless of where it comes from (e.g., a striker or a goalkeeper), we should expect that $c_{p_1} = c_{p_2}, \forall p_1, p_2$ (or at least approximately equal). Figure \ref{fig:mv} (middle) presents the cost (in Euros) per 1 expected league point (above replacement) for different positions as a function of the ${\tt eLPAR}$ they provide. An {\em efficient} market, as alluded to above, would have four straight horizontal lines, one on top of the other, since the cost of 1 expected league point for a team should be the same regardless of where this point is expected from. However, what we observe is that the market over-values significantly goal keepers (even though on average they are only the 3rd highest paid line), and this is mainly a result of their low ${\tt eLPAR}$ (the best goalkeeper in our dataset provides an ${\tt eLPAR}$ of just over 0.1 per 90 minutes). Furthermore, teams appear to be willing to pay a premium for expected league points generated by the offense as compared to points generated by the defense, and this premium increases with ${\tt eLPAR}$. This becomes even more clear from the right plot in Figure \ref{fig:mv}, where we have plot the same cost per 1 league point from player $p$, $c_{p}$, but as a function of a player's FIFA rating $r_{p}$. As we can see teams are willing to pay multiples in premium for 1 expected league point coming from a goalkeeper with 85 FIFA rating as compared to 1 expected league point (i.e., the same performance) coming from a defender with the same rating (vertical dashed line). Player wages exhibit similar behavior (the ranking correlation between transfer/market value and a player's wage is 0.94). Given that there is no salary cap in European soccer, teams can potentially overpay in general in order to bring in the players they want. Hence, across teams comparisons are not appropriate, since different teams have different budget and ability to pursue players. However, within team comparison of contracts among its players is one way to explore whether teams are being rational in terms of their payroll. In particular, we can examine the distribution of a team's total budget among their players, and investigate whether this is in line with their positional values. This analysis will provide us with some relative insight on whether teams spend their budget proportional to the positional and personal on-field value (i.e., FIFA rating) of each player. Let us consider two specific teams, that is, FC Barcelona and Manchester United. We will use the wages $w$ of the starting 11 players of the two teams (from the 2017-18 season) and considering the total budget $\mathcal{B}$ constant we will redistribute it based on the ${\tt eLPAR}$ of each player. We do not consider substitutions, since an accurate comparison would require the expected (or actual) time of play. Each point on the left two plots in Figure \ref{fig:money} corresponds to one of the starting 11 players for Barcelona and Manchester United respectively. The size of each point corresponds to the FIFA rating of the player, while the color corresponds to the position the player covers. The x-axis corresponds to the actual (monthly) wage of the player, while the y-axis corresponds to their {{\tt eLPAR}}-based wage. We present two series of {{\tt eLPAR}}-based wages, one that corresponds to the default formation of each team (4-4-2 for Barcelona and 4-3-3 for Manchester United), and one that corresponds to the average of the formations presented in Figure \ref{fig:elpar_formations}. Points that fall under the $y=x$ line corresponds to players whose actual wage is higher than their ${\tt eLPAR}$-based wage, while players that correspond to points that fall over the $y=x$ are {\em underpaid}. The way we calculate the re-distribution is as following: \begin{enumerate} \item Calculate the fraction $f_p=\dfrac{{\tt eLPAR}_p}{{\tt eLPAR}_{total}}$ of total ${\tt eLPAR}$ that player $p$ contributes to his team (${\tt eLPAR}_{total} = \sum_{p=1}^{11} {\tt eLPAR}_p$) \item Calculate the ${\tt eLPAR}$-based wage for player $p$ as $f_p \cdot \mathcal{B}$ \end{enumerate} As we can see there are differences in the wages projected when using ${\tt eLPAR}$. Both teams for example appear to overpay their goalkeepers based on their expected league points above replacement per 90 minutes. Of course, some players are under-valued, and as we can see these players are mainly in the defensive line. These results open up interesting questions for soccer clubs when it comes to budget decisions. Budget is spent for two reasons; (a) to win, as well as, (b) to maximize the monetary return (after all, sports franchises are businesses). The premium that clubs are willing to pay an attacker over a defender for the same amount of league points can be seen as an investment. These players bring fans in the stadium, increase gate revenue (e.g., through increased ticket prices), bring sponsors, sell club merchandise, etc. For example, even though attackers are approximately only 20\% of the players' pool, 60\% of the top-selling jerseys in England during 2018 belonged to attackers \cite{jerseys}. Therefore, when we discuss the money spent from a team for a transfer (or a wage), winning is only one part of the equation. While teams with large budget (like Manchester United and Barcelona) might be able to pay premiums as an investment, other teams in the middle-of-the-pack can achieve significant savings, without compromising their chances of winning. In fact, clubs with limited budget can maximize their winning chances, which is an investment as well (winning can bring in revenues that can then be used to acquire better/more popular players leading to a positive feedback loop). A club with a fixed budget $\mathcal{B}$ can distribute it in such a way that maximizes the expected league points {\em bought} (even under positional constraints). For instance, with $\mathcal{B} = 6$ millions and with the need for a center back and a goalkeeper, if we use the average market values for the two positions we should allocate 55\% of the budget (i.e., 3.3 millions) for the goalkeeper and 45\% of the budget for the defender. Using the average market value of a player for a given position and rating from our data, this will eventually get us a goalkeeper with a 74 FIFA rating and a defender with a 73 FIFA rating. These two players will contribute about 0.028 expected league points above replacement per 90 minutes. However, if we allocate 500K for the goalkeeper and 5.5 millions for the defender this will get us around 0.033 expected league points (a goalkeeper with 68 FIFA rating and a defender with 78 FIFA rating), or simply put the team will have bought 1 expected league point at a 18\% discount as compared to the rest of the market (i.e., with the same amount of money, the team will have obtained 18\% more expected points above replacement per game). \iffalse \begin{table} \begin{tcolorbox}[tab2,tabularx={X||Y|Y|Y|Y|Y|Y},title=FC Barcelona,boxrule=0.5pt] Players & FIFA Rating & Wage ($\euro$) & ${\tt eLPAR}$ & ${\tt eLPAR}$ Wage ($\euro$) & ${\tt eLPAR}$ (4-4-2) & ${\tt eLPAR}$ Wage (4-4-2) ($\euro$) \\\hline\hline M. Stegen & 87 & 185K & 0.092& 79K & 0.093 & 83K \\\hline\hline S. Roberto & 82 & 150K & 0.32 & 271.5K& 0.30 & 266K \\\hline Pique & 87 & 240K & 0.38 & 324K & 0.36& 317K \\\hline S. Umtiti & 84 & 175K & 0.35 & 292.5K& 0.32 & 286.5K\\\hline Jordi Alba & 87 & 185K & 0.38 & 324K & 0.35 & 317.5K \\ \hline\hline O. Dembele & 83 & 150K & 0.28 & 239K & 0.29& 258K \\ \hline I. Rakitic & 86 & 275K & 0.31 & 266K & 0.32 & 287K \\\hline s. Busquets & 87 & 250K & 0.32 & 275K & 0.33 & 298K \\\hline Coutinho & 87 & 275K & 0.32 & 275K & 0.33& 298K \\\hline\hline L. Messi & 94 & 565K & 0.41 & 349K & 0.35 & 317K \\\hline L. Suarez & 92 & 510K & 0.39 & 330K & 0.33 & 300K \\\hline\hline \end{tcolorbox} \label{tab:barcelonafc} \caption{FC Barcelona wages and ${\tt eLPAR}$-based projected wages. } \vspace{-0.1in} \end{table} \begin{table} \begin{tcolorbox}[tab3,tabularx={X||Y|Y|Y|Y|Y|Y},title=Manchester United,boxrule=0.5pt] Players & FIFA Rating & Wage ($\euro$) & ${\tt eLPAR}$ & ${\tt eLPAR}$ Wage ($\euro$) & ${\tt eLPAR}$ (4-3-3) & ${\tt eLPAR}$ Wage (4-3-3) ($\euro$) \\\hline\hline De Gea & 91 & 295K & 0.1& 65.5K & 0.11 & 69K \\\hline\hline A. Valencia & 83 & 130K & 0.33 & 208K& 0.31 & 203K \\\hline C. Smalling & 81 & 120K & 0.31 & 193K & 0.28& 188K \\\hline V. Lindelof & 78 & 86K & 0.27 & 169K& 0.25 & 165K \\\hline A. Young & 79 & 120K & 0.28 & 177K & 0.26 & 172.5K \\ \hline\hline N. Matic & 85 & 180K & 0.3 & 189.5K & 0.41& 273K \\ \hline A. Herrera & 83 & 145K & 0.28 & 176K & 0.38 & 254K \\\hline P. Pogba & 88 & 250K & 0.34 & 209.5K & 0.46 & 301.5K \\\hline\hline J. Lingard & 81 & 115K & 0.27 & 168K & 0.15 & 100K \\\hline R. Lukaku & 86 & 210K & 0.32 & 202.5K & 0.18 & 121K \\\hline A. Sanchez & 88 & 325K & 0.35 & 216K & 0.19 & 129K \\\hline\hline \end{tcolorbox} \label{tab:manunfc} \caption{Manchester United wages and ${\tt eLPAR}$-based projected wages. } \vspace{-0.1in} \end{table} \fi \begin{figure*}[ht]% \centering \includegraphics[width=5.5cm]{plots/barca.png} \includegraphics[width=5.5cm]{plots/manu.png} % \includegraphics[width=6cm]{plots/premier_league.png} \caption{The two left figures present the actual monthly wage and the {{\tt eLPAR}}-based wage for each player of FC Barcelona and Manchester United respectively. Each point corresponds to a starting player and points above the $y=x$ line corresponds to players that are {\em undervalued}. The right plot presents a linear relationship between total transfer budget and league points for Premier League. }% \label{fig:money}% \end{figure*} \iffalse \begin{figure}% \centering \includegraphics[width=8cm]{plots/barca.png} % \caption{Barcelona.}% \label{fig:barca}% \end{figure} \begin{figure}% \centering \includegraphics[width=8cm]{plots/manu.png} % \caption{Manu.}% \label{fig:manu}% \end{figure} \fi \subsection{Fair Transfer Fees} \label{sec:transfer} In the last example above, the transfer fees mentioned (i.e., 500K and 5.5M) are based on the current market transfer and most probably will still be an over-payment for the talent acquired. What basically one can achieve with an approach as the one described above is to optimize the team's transfers based on the current market values. However, we can use our model and analysis to also estimate a {\em fair} (i.e., considering only a team's winning chances) transfer fee for a player. For this we would need to know what 1M Euros is worth in terms of league points. To do so we will need the total transfer budget of teams and the total number of league points they obtained. For example, Figure \ref{fig:money} (right) presents the relationship between a team's transfer budget and the total points obtained for the 2017-18 Premier League. The slope of the linear fit is 0.44, with good explanatory power ($R^2 = 0.71$). This essentially means that 1M Euros in transfer budget is associated 0.44 Premier League points. Therefore for a player $p$ with ${\tt eLPAR}_p$, who is expected to play $N$ games, a fair transfer fee is $\dfrac{N \cdot {\tt eLPAR}_p}{0.44}$. For example, recently a transfer that was discussed a lot was that of goal keeper Danny Ward from Liverpool to Leicester. Based on Ward's current FIFA rating (70) and his potential upside (FIFA rating of 78), the transfer fee should be between 3.3 and 5.2 million pounds, assuming he plays all 38 Premier League games next season (he ended up not being the starting goalkeeper for Leicester). However, Leicester paid 10 million pounds for this transfer \cite{skysports-ward}. Again there might be other reasons that Leicester was willing to pay 10 million pounds for Ward, and similar transfers can only be accurately - if at all - evaluated only after the players leaves/transfers from his new team. For instance, if Ward ends up playing 10 full seasons with Leicester his transfer fee can even be considered a {\em steal}. The same will be true if Leicester sells Ward for double this price within a couple of years. In general, estimating transfer fees is a much more complex task, but ${\tt eLPAR}$ can facilitate these estimations by considering the on-pitch expected contributions of the player. We would like to emphasize that here we want to just showcase how {{\tt eLPAR}} can be used to facilitate transfer fee decisions. The relationship between transfer budget and league points is different for different leagues and needs to be built separately, while for robustness more seasons need to be considered (appropriately adjusted for inflation). \iffalse \begin{figure}% \centering \includegraphics[width=8cm]{plots/premier_league} % \caption{In Premier League 1M Euros in transfer budget is worth 0.44 league points.}% \label{fig:premier_league}% \end{figure} \fi \iffalse Our objective is to explore whether the market value of a player closely follows his on-field expected contribution. Given that in European soccer there is no salary cap similar to north American professional sports leagues (i.e., we cannot put a monetary value to 1 win), we will rely on relative comparisons. For this we will begin by building a model for a player's market value. There are various factors that can affect the market value of a player and hence, are included in our model as explanatory variables. In particular we include the player's age, his FIFA rating, the player's potential based on the upper limit on his rating provided by FIFA, the player's position, as well as the supply of players at the same position and with the same rating (in particular +/- 1). Table \ref{tab:mv_mod} presents our results. \begin{table}[ht]\centering \begin{tabular}{c c } \toprule \textbf{Variable} & \textbf{Player Market Value (in millions)} \\ \midrule Intercept & -14.92*** \\ & (0.73) \\ Age & -0.15*** \\ & (0.009) \\ Rating & 0.30*** \\ & (0.01) \\ Potential & 0.01*** \\ & (0.001) \\ Supply & -0.006*** \\ & (0.0003) \\ Position(GK) & -4.56*** \\ & (1.47)\\ Position(M) & -7.19***\\ & (0.94)\\ Position(O) & -12.27***\\ & (1.14)\\ {\bf Interaction terms} & \\ Position(GK)$\cdot$ Supply & -0.025*** \\ & (0.002)\\ Position(M)$\cdot$ Supply& -0.0023*** \\ & (0.0004)\\ Position(O)$\cdot$ Supply& -0.11*** \\ & (0.0007)\\ Position(GK)$\cdot$ Rating & 0.056** \\ & (0.027)\\ Position(M)$\cdot$ Rating& 0.11*** \\ & (0.017)\\ Position(O)$\cdot$ Rating& 0.15***\\ & (0.02)\\ Position(GK)$\cdot$ Potential & 0.035 \\ & (0.030)\\ Position(M)$\cdot$ Potential& 0.025 \\ & (0.018)\\ Position(O)$\cdot$ Potential& 0.067** \\ &(0.021) \\ \midrule N & 10,997 \\ \bottomrule \addlinespace[1ex] \multicolumn{2}{l}{\textsuperscript{***}$p<0.01$, \textsuperscript{**}$p<0.05$, \textsuperscript{*}$p<0.1$} \end{tabular} \caption{Market Value Regression Model Coefficients} \label{tab:mv_mod} \end{table} \fi \section{Conclusions and Discussion} \label{sec:discussion} In this work our objective is to build an appropriate model that will allow us to understand positional values in soccer and consequently develop a metric that can provide an estimate for the {\em expected} contribution of a player on the field translated in units that managers and fans associate with (i.e., league points). We start by developing a win probability model for soccer games based on the ratings of the four lines of the teams (attack, middlefield, defense and goalkeeper). We then translate this positional values to expected league points added above a replacement player ({{\tt eLPAR}}) considering a team's formations. We further show how this framework can be useful for financial decisions by analyzing transfer fees and players' wages and relating them back to each player's {{\tt eLPAR}}. Our results indicate that specific positions are over-valued when only considering their contribution to winning the game. However, our study is only the first step towards understanding the positional value in soccer. In particular, while our results show that goal keepers might provide the least amount of value, these results are tight to the data we used. Currently we have built a single model for all the leagues in our dataset. However, building a separate model for different leagues could reveal differences in the positional value among leagues that might have to do with style of play, strength and skillsets in each league etc \cite{Noslo18}. Furthermore, in top-level competition - for which we do not have data (e.g., Champions League) - goal keepers might provide much more value than in the leagues we analyzed, which include both top-tier and lower-tier national league. However, regardless of this, the analytical framework that we introduced can be replicated on different datasets. Furthermore, our modeling framework can be improved with additional (meta) data. In particular: ({\bf 1}) Our framework can integrate the actual formation that the teams used. This will allow us to build a multilevel regression model, which will allow us to include covariates for more fine-grained positions (e.g., center back, center middlefielder etc.) and obtain a more detailed view of positional value tied to the formation used. ({\bf 2}) We can also include information about substitutions during a game (another piece of information not available to us). This will allow us to (a) obtain a weighted average for the average rating of a line based on the substitutions, and (b) a much more accurate estimate for a player's total playing time. ({\bf 3}) Our current study is based on static player ratings obtained from FIFA. This only allows us to estimate the expected league points added over a replacement player. While these ratings capture the overall performance of a player during past season(s) and hence, it is still appropriate for estimating his monetary value, actual game ratings for players will allow us to estimate the actual league points added over replacement by a player over the course of a season. These game ratings for example can be composed through appropriate analysis of player tracking data, which at the least will provide us with information about how much time a combo-player (e.g., a left middlefielder who can also play left wing/forward) played at each line. ({\bf 4}) We can add interaction terms between the different covariates in the regression model, in order to see how for a example the defensive line interacts with the opposing attack line etc. Furthermore, we can use as our dependent variable the difference in the expected goals, rather than the actual goals scored. Expected goals (xGs) have been shown to be a better predictor of the quality of a team and better predictor of future performance \cite{StatsBomb-xG}. However, this would also require the availability of player tracking data to estimate the xGs in a game. Finally, one of the most important contributions of our study is its potential to be applied to other sports that exhibit similar characteristics with soccer that do not allow well-established methods like plus/minus to be applied. For example, American Football is a good example where colinearities will be severe for a plus/minus approach. Using player ratings from NFL Madden (in a similar way we use player ratings from FIFA), or even player grades from games (e.g., grades from Pro Football Focus) we can evaluate the contribution of 1 unit increase in the Madden rating/grade of a player to the expected points added from a team's play. The latter could be modeled through an expected points model. This could be a significant step towards defining a metric similar to Wins Above replacement for NFL, and finally understanding the contribution of each position in winning. \iffalse We believe that this study will trigger further research on the positional value in soccer. An immediate improvement over our current model is to consider the actual formation that the teams used (a piece of information missing in our current dataset). This will allow us to build a multilevel regression model where we will include covariates for more fine grained positions (e.g., center back, right back, center middlefielder etc.). We can also include information about substitutions during a game (another piece of information not available to us). This will allow us to (a) obtain a weighted average for the average rating of a line based on the substitutions, and (b) a much more accurate estimate for a player's total playing time. Furthermore, our current study is based on static player ratings obtained from FIFA. This only allows us to estimate the {\bf expected} league points added over a replacement player. While these ratings capture the overall performance of a player during past season(s) and hence, it is still appropriate for estimating his monetary value, actual game ratings for players will allow us to estimate the {\em {\bf actual}} league points added over replacement by a player over the course of a season. These game ratings for example can be composed through appropriate analysis of player tracking data, which at the least will provide us with information about how much time a combo-player (e.g., a left middlefielder who can also play left wing/forward) played at each line. We will explore these direction as part of our future research, while we will also explore the applicability of a similar approach towards quantifying positional value for American Football (NFL). In particular, using player ratings from NFL Madden (in a similar way we use player ratings from FIFA), we can evaluate the contribution of 1 unit increase in the Madden rating of a player to the expected points added from a team's play. This could be a significant step towards defining a metric similar to Wins Above replacement for NFL, and finally understanding the contribution of each position in winning. \fi
{ "attr-fineweb-edu": 2.246094, "attr-cc_en_topic": 0, "domain": "arxiv" }
BkiUdOc4eIZjh60lkua2
\section{Introduction} Nowadays, with the rapid development of analyzing player performance by collecting the past matches they have played, researchers have cooperated with sports teams and players to boost the advancement of sports analytics. However, it is difficult for novel algorithms to be verified in real-time matches due to the cost and the performance concerns of players. To mitigate the problem, \citet{DBLP:conf/aaai/KurachRSZBERVMB20} proposed a reinforcement learning football environment, which benefits researchers by reproducing and testing algorithms quickly offline. Nonetheless, there is no existing environment to develop new ideas in turn-based sports, e.g., badminton, tennis. Directly using existing environments is not feasible due to the varying nature of different sports. Therefore, we focus on one of the turn-based sports, badminton, to demonstrate our proposed reinforcement learning environment. However, there are at least two challenges to describe various factors in a rally. First, \textbf{3-D Trajectories}: The trajectory of a shuttlecock consists of not only 2-D coordinates but also the height. The actual height of the shuttlecock cannot be detected precisely due to the regulations in the real-world high-ranking matches and the cost of deploying such advanced techniques (e.g., hawk-eye systems). Moreover, there are no existing records for the shuttlecock's height, and it is also difficult for domain experts to label the 3-D trajectory, especially with the height. Second, \textbf{Multi-Agent Turn-Based Environment}: As described in \cite{DBLP:conf/aaai/WangSCP22}, a rally is composed of two players playing alternatively, which is different from the conventional sequence with the same target. Therefore, it is challenging to design proper states, actions, and rewards for both agents since each agent do the complicated action like returning the shuttlecock and player positioning by taking various observation like the shuttlecock's position and the opponent's into consideration. To address these issues, we propose a reinforcement learning badminton environment that is equipped with multiple view angles to review a given match (either a simulation or a real match). In addition, we design the environment based on the multi-agent particle environment (MAPE) \cite{NIPS2017_68a97503} to describe the process of two agents in a rally. In this manner, our badminton environment is able to not only support coaches and players to review and investigate tactics of players in a more flexible way, but also provides researchers with an interface place to quickly demonstrate new algorithms. For a more detailed illustration, please refer to our demonstration here\footnote{https://youtu.be/WRPcbalb6yc.}. \section{Approach} \subsection{Dataset Collection} We use the dataset collected by previous research \cite{DBLP:conf/aaai/WangSCP22}, which includes 75 high-ranking matches from 2018 to 2021 by 31 players from men's singles and women's singles games labeled by domain experts with the BLSR format \cite{DBLP:conf/icdm/WangCYWFP21,10.1145/3551391}. The dataset includes the positions of the players and the shuttlecock as 2-D coordinates, the timestamp of each ball round, the type of ball, the scores, and the motions of players. We aimed to learn the tactics from different players by these datasets. \subsubsection{Mimicking Actual Ball Height} We lack information about the shuttlecock's height in the collected dataset (we only know a label describing whether the hit point is above the net or not). To simulate the actual height, we set the height of each shot type with the average height below the net and standard deviation, and then use normal distribution as the corresponding distribution. \subsection{Reinforcement Learning Badminton Environment} As the tactics vary according to individual player, we have to design a process in a rally that is able to mimic players while considering different factors. Specifically, our environment is based on MAPE, which supports multi-agent training. \subsubsection{Environment Design} The environment is designed following a regular real-world badminton court, which includes two players from each side, a shuttlecock, the net and the boundary. To have a better visualization experience and adapt different application scenarios, we proposed multi-view observation options, which enables the user to monitor the playing process (training process when training agents) through the side view or the top view. To cope with this limitation, we designed a size shrinking method to illustrate the height of the shuttlecock. Specifically, the rendering object is bigger if the shuttlecock is closer to the player, and smaller otherwise. \subsubsection{Turn-Based Procedure} As badminton is a fast-paced sport, it is difficult for the agents to move instantaneously. Therefore, our goal is to make the agent focus on learning the tactics of the badminton player instead of playing badminton. We therefore simplify the real-time game into a turn-based environment. The procedure in a rally is as follows: 1) Assume that the shuttlecock is served by player A. In this sub-step, player A, as an agent, will decide the landing position of the shuttlecock, the ball type to hit, and the defense position to go to after returning the ball. On the other hand, player B, as an opponent agent, will decide the target position to go to in order to return the shuttlecock. 2) The environment will simulate the player's move and the trajectory of the shuttlecock until the shuttlecock reaches the defense region of the opponent. 3) At the moment the shuttlecock enters the opponent's defense region, the simulation will stop. On the other hand, the player will also decide the target position to go to in order to return the shuttlecock. 4) After receiving the players' decision, the environment will keep simulating until the shuttlecock falls into the proper region, that is close enough to the opponent and the height of the shuttlecock is reasonable for the type of shot the opponent is returning. 5) The step is finished, so the roles of the players swap. The environment executes the returning action and goes back to Step 2 until the rally is finished. \subsubsection{Simulation} To produce a realistic environment and enhance the reference value of the environment in reality, we apply the meta-parameter based on the match dataset. The meta-parameter we tuned based on the dataset includes the player speed, the defense range of the players, the returning region distribution of different ball types, and other physical parameters of the shuttlecock. Furthermore, we follow \cite{Chen2009ASO} to simulate the shuttlecock trajectory. \section{Preliminary Results} \begin{figure} \centering \includegraphics[height=!, width=\linewidth, keepaspectratio]{images/multiview.png} \caption{The schematic of the reinforcement learning badminton environment with two supporting views.} \label{fig1: multi-view} \end{figure} \noindent\textbf{Multiple Angles of View. } Figure \ref{fig1: multi-view} illustrates our proposed badminton environment equipped with different views, which enables researchers and domain experts to observe the playing procedure. \noindent\textbf{Multi Agent. } In general reinforcement learning (RL) environments, there is just one agent to interact with an environment in a match. However, badminton games are usually for two or four players to play, so we built our environment based on MAPE to achieve this function. Our environment is able to train not just one, but two or three or four agents in the same match, and can deal situations like a training agent versus with an expert player or two agents controlling two players on the same side respectively in doubles games. \noindent\textbf{Recording Match Data. } One of the characteristics in our environment is that it records the match data through the matches. This technique benefits researchers with not only data augmentation but also debugging for improving training policies.
{ "attr-fineweb-edu": 2.015625, "attr-cc_en_topic": 0, "domain": "arxiv" }
BkiUbvC6NNjgB0Ss0uga
\section{Introduction} \label{sec:intro} Two of the most popular tournaments in the world are the men's and women's National Collegiate Athletic Association (NCAA) Division 1 basketball tournaments. In college basketball, teams are grouped into conferences. Over the course of the regular season, teams compete against opponents within their own conference as well as teams outside their conference. Following the regular season, better performing teams within each conference compete in a conference tournament, with the winner of said tournament earning an invitation to play in the Division 1 tournament. The invitation for winning a conference tournament is called an ``automatic bid". Historically, sixty-four teams are selected for the women's tournament. Thirty-two of the sixty-four teams are automatic bids, corresponding to the thirty-two conference tournament winners. The other thirty-two teams are ``at-large bids", made up of teams failing to win their respective conference tournament. At-large bids are decided by a selection committee, which has guidelines that govern how to choose not only the teams invited to the tournament, but also how to set the tournament bracket, which defines who and where each team will play initially and could play eventually. Teams that earn an automatic bid or an at-large bid are said to have ``made the tournament". In previous iterations of March Madness, the men's tournament has differed slightly from the women's tournament, with the former including a set of games called the First Four. In the First Four, eight teams compete for four spots in the round of 64, also called the First Round. Specifically, the four lowest ranked automatic bids compete for two spots in the First Round, while the four lowest ranking at-large bids compete against each other for the two remaining spots. Thus, the men's tournament includes thirty-two automatic bids and thirty-six at-large bids. In 2022, the women's tournament included a First Four for the first time in tournament history, bringing the total number of teams in the tournament up to sixty-eight \citep{nytfirstfour}. Another difference between the men's and women's tournaments is that the NCAA has historically referred to the men's (but not the women's) tournament as ``March Madness" \citep{ncaabrand}. In this paper, we use the term to describe both the men's and women's tournament. As a result of the COVID-19 pandemic, the NCAA cancelled both the men's and women's 2020 NCAA tournaments. A majority of athletic conferences followed by cancelling their own conference tournaments, leaving many automatic bids for March Madness undecided. Due to these cancellations, natural questions arise with respect to which teams might have made the March Madness field and which teams might have won the tournament, if it had occurred. Using data from the 2019-2020 men's and women's collegiate seasons, we deliver probabalistic answers to these questions. Specifically, we contribute the following: 1) an overall ranking of Division 1 teams, as well as estimates of each team's strength, based on 2019-2020 regular season data, 2) closed-form calculations for probabilities of teams making the 2019-2020 March Madness field, calculated beginning from the point in time at which each conference tournament was cancelled, under a simplified tournament selection process, 3) closed-form calculations of probabilities of teams winning March Madness, given each of several potential brackets, and 4) a new pair of fully audited data sets with observed margins of victory for both men's and women's Division 1 basketball, spanning from the 2014-2015 season through the 2020-2021 season. The calculation of probabilities for teams making the 2019-2020 March Madness field consider each conference tournament's unfinished bracket as well as our estimates of Division 1 team strengths, which we fix following the culmination of the regular season. The closed-form nature of the probabilities also reduces the computational load and eliminates error inherent to simulation-based approaches. To our knowledge, this is the first closed-form approach to take into account partially completed conference tournaments when generating probabilities of making the March Madness field. Estimating March Madness win probabilities prior to the selection of the tournament field and the determination of the March Madness bracket is a difficult problem. If we define all the potential brackets as the set $\mathcal{B}$, we can decompose the probability of a team winning March Madness as \begin{equation} \label{eqn:b-wp} \mathbb{P}(W_u = 1) = \sum_{B \in \mathcal{B}} \mathbb{P}(W_u = 1|B)\mathbb{P}(B), \end{equation} \noindent where $\{W_u = 1\}$ represents team $u$ winning March Madness. However, calculations for all possible brackets are intractable. For a set of, say, 350 teams, there are $\binom{350}{64}$ ways to select a field of teams to compete in a 64-team tournament. Given a tournament field of $N = 2^J$ teams, where $J$ is the number of rounds in the tournament ($J = 6$ for a 64-team tournament), the number of unique brackets for a single-elimination tournament is \begin{equation} \prod_{i = 1}^{N/2} \binom{2i}{2}\bigg/2^{N/2-1}, \label{eqn:num-bracks} \end{equation} \noindent which grows rapidly as $N$ increases. An 8-team tournament results in 315 potential brackets, while a $16$-team tournament results in 638,512,875 potential brackets. In the case of March Madness, the size of the set $\mathcal{B}$ is enormous. Of course, some brackets are more likely than others due to the set of constraints used by the selection committee. However, even if the set of plausible brackets for March Madness was small relative to the complete set $\mathcal{B}$ when the tournaments were cancelled in 2020, estimating $\mathbb{P}(B)$ in \eqref{eqn:b-wp} for any given bracket $B$ depends on the complex and, ultimately, subjective decision making process used by the NCAA selection committee. Thus, we make no attempt to estimate $\mathbb{P}(B)$ for any bracket $B$. Instead, in this paper, we focus on the construction of the marginal probability of each team making the March Madness field. Additionally, using brackets suggested by experts, along with brackets we construct, we compare March Madness win probabilities, $\mathbb{P}(W_u = 1|B)$ for all teams $u$, across different brackets $B$. We find that the win probabilities for teams most likely to win are relatively stable across brackets. Baylor, South Carolina, and Oregon each had more than a 20\% win probability for most of the brackets we considered for the women's tournament. On the men's side, Kansas was the most likely to win the tournament regardless of the bracket. Another contribution of the paper is the novel application of conformal predictive distributions \citep{vovk2019nonparametric} for the estimation of win probabilities. Conformal predictive distributions allow for the construction of win probability estimates under very mild distributional assumptions, reducing dependence on normality assumptions for our results. We find that conformal predictive distributions provide win probability estimates that are superior to other methods relying on stronger assumptions when compared using seven years of men's and women's post-season NCAA basketball data. Section \ref{sec:dfpi-pred-sports} provides background on constructing overall win probabilities for single-elimination tournaments and introduces the closed-form calculation of probabilities related to March Madness. Section \ref{sec:wp-methods} describes three methods for generating win probabilities of individual games, including the construction of win probability estimates through conformal predictive distributions. Section \ref{sec:results} describes the overall results, to include a ranking of the top teams, conference tournament and March Madness win probabilities associated with the 2019-2020 NCAA Division 1 basketball season and a comparison of three win probability generation methods. Section \ref{sec:conclusion} concludes the paper. All of the R code and data sets used in this research are available at \begin{center} \url{https://github.com/chancejohnstone/marchmadnessconformal}. \end{center} \section{Probabilities for March Madness} \label{sec:dfpi-pred-sports} In the following section, we describe win probability as it relates to single-elimination tournaments like March Madness. We also introduce the probability of a team making the March Madness field, given a collection of conference tournament brackets, team rankings and game-by-game win probabilities. We limit our discussion scope in this section primarily to the women's tournament, but the general construction reflects the men's tournament as well. Throughout this paper, we use the common verbiage that a team is ranked ``higher" than another team if the former team is believed to be better than the latter team. Likewise, a ``lower" ranking implies a weaker team. We follow the common convention that a team of rank $r$ has a higher rank than a team of rank $r+s$ for $s>0$. Teams ranked 1 to 32 are collectively identified as ``high-ranked". Teams ranked below 64 are identified as ``low-ranked". While the colloquial use of the term ``bubble teams" is usually reserved to describe a subset of teams near the boundary separating teams in and out of the March Madness field, we use the term to explicitly describe the teams ranked 33 to 64. In Section \ref{sec:sports-app}, we discuss an approach to rank teams based on observed game outcomes. \subsection{Win Probability for Single-Elimination Tournaments} Suppose were are given a game between team $u$ and team $v$ with the win probability for team $u$ defined as $p_{uv}$. While the true value of $p_{uv}$ is not known in practice, we describe methods for estimating probabilities for any match-up in Section \ref{sec:wp-methods}. Given these probabilities, one method for providing estimates of overall tournament win probability is through simulation. We can simulate the outcome of a game between team $u$ and team $v$ by randomly sampling from a standard uniform distribution. A value less than $p_{uv}$ corresponds to a victory for team $u$, while a value greater than $p_{uv}$ represents a victory for team $v$. Every game in a tournament can be simulated until we have an overall winner. We can then repeat the entire simulation process multiple times to get a Monte Carlo estimate of each team's probability of winning said tournament. While a simulation-based approach is effective at providing estimates of the true tournament win probability for each team, simulation requires excessive computational effort, with each estimate having inherent Monte Carlo error. To eliminate Monte Carlo error, we can generate overall tournament win probabilities through closed-form calculation. Suppose we have an eight team single-elimination tournament with the bracket shown in Figure \ref{fig:8bracket}. The highest ranking team, team 1, plays the lowest ranking team, team 8, in the first round. Assuming team 1 was victorious in round one, their second round opponent could be team 4 or 5. In the third round, team 1 could play team 3, 6, 2 or 7. After the first round of the tournament, team 8 has the same potential opponents as team 1. Using the knowledge of a team's potential opponents in future games, we can calculate win probabilities for any upcoming round and, thus, the entire tournament. Formalized in \cite{edwards1991combinatorial}, the tournament win probability for team $u$ given a fixed, single-elimination tournament bracket with $J$ rounds is \begin{equation} \label{eqn:cf-wp} q_{uJ} = q_{uJ-1}\Bigg[ \sum_{s \in \mathcal{O}_{uJ}} p_{us} q_{sJ-1} \Bigg], \end{equation} \noindent where $q_{uj}$ is the probability that team $u$ wins in round $j = 1,\hdots,J$, and $\mathcal{O}_{uj}$ is the set of potential opponents team $u$ could play in round $j$. We explicitly set $q_{u1} = p_{u\mathcal{O}_{u1}}$, where $\mathcal{O}_{u1}$ is team $u$'s opponent in round one. We can extend \eqref{eqn:cf-wp} to single-elimination tournaments of any size or construction as long we are able to determine the set $\mathcal{O}_{uj}$ for any team $u$ in any round $j$. \subsection{Probability for Making the NCAA Tournament} \label{sec:making} With \eqref{eqn:cf-wp} we can generate an overall tournament win probability for each team in a tournament exactly, given a fixed tournament bracket and game-by-game win probabilities. However, following the regular season, but prior to the culmination of all conference tournaments, the field for March Madness is not fully known. Thus, we cannot utilize \eqref{eqn:cf-wp} directly for estimating team win probabilities for the 2020 March Madness tournament. We first turn our attention to estimating each women's team's probability of making the 2020 March Madness field, made up of thirty-two automatic bids and thirty-two at-large bids. Although the closed-form calculations reflect probabilities related to the 2019-2020 women's March Madness tournament, which did not include a First Four, only slight changes are required to reflect the inclusion of a First Four for the men's and future women's tournaments. We define $F_u$ as the indicator variable for whether or not the $u$-th ranked team makes the NCAA tournament field. Knowing that the NCAA tournament is made up of automatic and at-large bids, we define two relevant indicator variables $C_u$ and $L_u$ associated with a team receiving one of these bids, respectively. $C_u$ is one if team $u$ wins its conference tournament and zero otherwise. We define $L_u$ as the number of conference tournaments won by teams ranked below team $u$. Then, under the assumption that higher-ranked at-large bids make the March Madness field before lower-ranked at-large bids, for any team $u$, the probability of making the NCAA tournament is \begin{equation} \mathbb{P}(F_u = 1) = \mathbb{P}(\{C_u = 1\} \cup \{L_u \le t_u\}) = \mathbb{P}(C_u = 1) + \mathbb{P}(L_u \le t_u) - \mathbb{P}(C_u = 1, L_u \le t_u), \label{eqn:out-64} \end{equation} \noindent where $t_u = 64 - u$ is the maximum number of teams ranked below team $u$ that can receive an automatic bid without preventing team $u$ from making the NCAA tournament as an at-large bid. Because there are only 32 conference tournaments, $L_u$ is less than or equal to 32 with probability one. Thus, with the current construction, teams ranked 32 or higher always make the NCAA tournament. For low-ranked teams, \eqref{eqn:out-64} reduces to $\mathbb{P}(C_u = 1)$, aligning with the fact that weaker teams must win their conference tournament to get an invite to March Madness. We can decompose the intersection probability of \eqref{eqn:out-64} into \begin{equation} \label{eqn:cond-T} \mathbb{P}(C_u = 1, L_u \le t_u) = \mathbb{P}(L_u \le t_u|C_u = 1)\mathbb{P}(C_u = 1). \end{equation} \noindent To explicitly describe the probabilities in \eqref{eqn:cond-T}, we split the teams in each conference into two sets, $\mathcal{H}^u_k$ and $\mathcal{L}^u_k$, defining $\mathcal{H}^u_k$ as the set of teams in conference $k = 1, \hdots, K$ ranked higher than or equal to team $u$ and $\mathcal{L}^u_k$ as the set of teams in conference $k$ ranked lower than team $u$. We reference lower or higher-ranked teams in the same conference as team $u$ using $k(u)$ instead of $k$. It is important to emphasize that team $u$ is included in $\mathcal{H}^u_{k(u)}$. Let $C_{\mathcal{H}_k^u}$ be one if a team in $\mathcal{H}_k^u$ wins conference tournament $k$ and zero otherwise. $C_{\mathcal{L}_k^u}$ is defined in a similar manner. We assume that the outcome of any conference tournament is independent of the outcome of any other conference tournament. Thus, we can describe $L_u$ as a sum of independent, but not identically distributed, Bernoulli random variables, \begin{equation} \label{eqn:out-64-sum-k} L_u = \sum_{k = 1}^K C_{\mathcal{L}^u_k}. \end{equation} \noindent If $C_{\mathcal{L}^u_k}$ were identically distributed for all conferences, then $L_u$ would be a binomial random variable. Because this not the case, $L_u$ is instead a Poisson-binomial random variable with cumulative distribution function \begin{equation} \label{eqn:poisson-binom} \mathbb{P}(L_u \le l) = \sum_{m = 0}^{l} \Bigg\{ \sum_{A \in \mathcal{F}_m} \prod_{s \in A} p_s \prod_{s \in A^C} (1 - p_s)\Bigg\}, \end{equation} \noindent where $p_k$ is the probability of a team in $\mathcal{L}^u_k$ winning conference tournament $k$, and $\mathcal{F}_m$ is the set of all unique $m$-tuples of $\{1,\hdots,32\}$. With \eqref{eqn:poisson-binom} known, the conditional portion of \eqref{eqn:cond-T} is a new Poisson-binomial random variable where $p_{k(u)} = 0$ because we condition on team $u$ winning their conference tournament. Thus, the probability of team $u$ making the tournament is \begin{equation} \label{eqn:t1} \mathbb{P}(F_u = 1) = q_{uJ_{k(u)}} + \mathbb{P}(L_u \le t_u) - \Bigg(\sum_{m = 0}^{t_u} \Bigg\{ \sum_{A \in \mathcal{F}_m} \prod_{s \in A} p'_s \prod_{s \in A^C} (1 - p'_s)\Bigg\} \Bigg) \times q_{uJ_{k(u)}}, \end{equation} \noindent where $p'_{k}$ is equal to $p_k$ when $k$ is not equal to $k(u)$ and zero otherwise, and $J_{k(u)}$ is the number of rounds in the conference tournament for conference $k(u)$. While the above derivation provides a closed-form calculation for probabilities of making the March Madness field, it does not describe any team's probability of winning March Madness. To do this, we must also derive closed-form probability calculations for specific tournament brackets. However, as discussed in Section \ref{sec:intro}, it is difficult to explicitly construct calculations for this task due to the inherent subjectivity associated with the seeding of teams. For this reason, we include the derivation of the closed-form marginal probability calculation for a team's March Madness rank under an adjusted tournament selection process utilizing the S-curve method \citep{ncaa2021} in Supplementary Materials. \section{Win Probabilities for Individual Games} \label{sec:wp-methods} Determining win probability in sports primarily began with baseball \citep{lindsey1961progress}. Since then, win probability has permeated many sports and become a staple for discussion among sports analysts and enthusiasts. Example applications of win probability have been seen in sports such as basketball \citep{stern1994brownian, loeffelholz2009nba}, hockey \citep{gramacy9estimating}, soccer \citep{hill1974association, karlis2008bayesian, robberechts2019will}, football \citep{stern1991probability, lock2014nflwp}, cycling \citep{moffatt2014lead}, darts \citep{liebscher2017predicting}, rugby \citep{lee1999applications}, cricket \citep{asif2016play}, table tennis \citep{liu2016new} and even video games \citep{semenov2016performance}. A majority of these methodologies use some form of parametric regression to capture individual and/or team strengths, offensive and/or defensive capabilities or other related effects. We continue the parametric focus by using a linear model framework to estimate team strengths, but our proposed approach makes only minimal closed-form distributional assumptions. Initially, suppose that \begin{equation} y_i = x_i'\beta + \epsilon_i, \label{eqn:lin-model} \end{equation} \noindent where $y_i$ represents the response of interest for observation $i$, $x_i$ is a length $p$ vector of covariates for observation $i$, $\beta$ is the vector of true parameter values and $\epsilon_i$ is a mean-zero error term. We define $y = (y_1, \hdots, y_n)'$ and $X = (x_1, \hdots, x_n)'$, where the vector $y$ and matrix $X$ make up our $n$ observations $D_n = \{(x_i, y_i)\}_{i=1}^n$. We are interested in both predicting $y_{n+1}$ and quantifying uncertainty about $y_{n+1}$, given $x_{n+1}$, for some new observation $(x_{n+1}, y_{n+1})$. In subsequent sections, the response values in $y$ will be margins of victory, and the elements of $\beta$ will include team strength parameters. However, at this stage a slightly more general treatment is useful. In the following section, we discuss event probability estimation using three different methods: conformal predictive distributions based on model \eqref{eqn:lin-model}, linear regression with model \eqref{eqn:lin-model} and an added assumption of mean-zero, normally distributed, independent errors, and logistic regression. We then provide specific application to the sports context, extending the aforementioned methods in order to estimate win probabilities in sports. \subsection{Event Probability with Conformal Predictive Distributions} \label{sec:conf-event} Predictive distributions, e.g., those introduced in \cite{lawless2005frequentist}, provide a method for estimating the conditional distribution of a future observation given observed data. Conformal predictive distributions (CPDs) \citep{vovk2019nonparametric} provide similar results but through the use of a distribution-free approach based on conformal inference \citep{gammerman1998learning}. In the following section we provide a general treatment of conformal inference, followed by an introduction to conformal predictive distributions. \subsubsection{Conformal Inference} The aim of conformal inference is to quantify uncertainty in classification and/or regression tasks under weak distributional assumptions. In a regression context, conformal inference produces conservative prediction intervals for some unobserved response $y_{n+1}$ through the repeated inversion of some hypothesis test, say \begin{equation} H_0: y_{n+1} = y_c \; \textrm{ vs. } \; H_a: y_{n+1} \ne y_c, \label{eqn:conf-permute-2} \end{equation} \noindent where $y_{n+1}$ is the response value associated with an incoming covariate vector $x_{n+1}$, and $y_c$ is a candidate response value \citep{lei2018distribution}. The only assumption required to achieve valid prediction intervals is that the data $D_n$ combined with the new observation $(x_{n+1}, y_{n+1})$ comprise an exchangeable set of observations. The inversion of \eqref{eqn:conf-permute-2} is achieved through refitting the model of interest with an augmented data set that includes the data pair $(x_{n+1}, y_c)$. For each candidate value, a set of \textit{conformity scores} is generated, one for each observation in the augmented data set. A conformity score measures how well a particular data point conforms to the rest of the data set and traditionally utilizes the data pair $(x_i,y_i)$ and the prediction for $y_i$, denoted $\hat{y}_i(y_c)$, as arguments. While the prediction $\hat{y}_i(y_c)$ is dependent on both $(x_{n+1}, y_c)$ and $D_n$, we omit dependence on $x_{n+1}$ and $D_n$ in our notation. We define \begin{equation} \pi(y_c, \tau) = \frac{1}{n+1} \sum_{i = 1}^{n+1} \Big[\mathbb{I}\{R_i(y_c) < R_{n+1}(y_c)\} + \tau\mathbb{I}\{R_i(y_c) = R_{n+1}(y_c)\}\Big], \label{eqn:conf-p-values-2} \end{equation} \noindent where, for $i = 1, \hdots, n$, $R_i(y_c)$ is the conformity score for the data pair $(x_i, y_i)$ as a function of $(x_{n+1}, y_c)$, $R_{n+1}(y_c)$ is the conformity score associated with $(x_{n+1},y_c)$, and $\tau$ is a $U(0,1)$ random variable. In hypothesis testing we generate a probability associated with an observed test statistic, specifically the probability of a \textit{more extreme} value than the observed test statistic under the assumption of a specified null hypothesis, also known as a $p$-value. With the construction of $\pi(y_c, \tau)$, we generate an estimate of the probability of an observation \textit{less extreme} than the candidate value $y_c$. Thus, $1 - \pi(y_c, \tau) $ provides a $p$-value associated with \eqref{eqn:conf-permute-2} \citep{shafer2008tutorial, lei2018distribution}. The inclusion of the random variable $\tau$ generates a smoothed conformal predictor \citep{vovk2005algorithmic}. For a fixed $\tau$, we can construct a conformal prediction region for the response associated with $x_{n+1}$, \begin{equation} C_{1-\alpha, \tau}(x_{n+1}) = \{y_c \in \mathbb{R} \; : \; (n+1) \pi(y_c, \tau) \le \lceil(1 - \alpha)(n+1) \rceil \}, \label{eqn:conf-pi-2} \end{equation} \noindent where $1-\alpha$ is the nominal coverage level. When $\tau$ is one, $\pi(y_c, 1)$ is the proportion of observations in the augmented data set whose conformity score is less than or equal to the conformity score associated with candidate value $y_c$. Regardless of the conformity score, a conformal prediction region with nominal coverage level $1-\alpha$ is conservative \citep{vovk2005algorithmic}. Thus, for some new observation $(x_{n+1},y_{n+1})$, \begin{equation} \label{eqn:conf-prob} \mathbb{P}\big(y_{n+1} \in C_{1-\alpha, \tau}(x_{n+1})\big) \ge 1 - \alpha. \end{equation} \noindent \subsubsection{Conformal Predictive Distributions} \label{sec:cpds} In the previous section we explained conformal inference in general terms. However, we can construct $\pi(y_c, \tau)$ with certain conformity scores to achieve inference for different events associated with $y_{n+1}$. One commonly used conformity score in a regression setting is the absolute residual, $|y_i - \hat{y}_i(y_c)|$, which leads to symmetric prediction intervals for $y_{n+1}$ around a value $\tilde{y}$ satisfying $\tilde{y} = \hat{y}_{n+1}(\tilde{y})$. The traditional residual associated with a prediction, $y_i - \hat{y}_i(y_c)$, results in a one-sided prediction interval for $y_{n+1}$ of the form $\big(-\infty, u(D_n, x_{n+1}) \big)$. Additionally, the selection of the traditional residual as our conformity score turns $\pi(y_c, \tau)$ into a conformal predictive distribution \citep{vovk2019nonparametric}, which provides more information with respect to the behavior of random variables than, say, prediction intervals. For example, with a CPD, we can provide an estimate of the probability of the event $y_{n+1} \le y^*$. For the the remainder of this paper we construct $\pi(\cdot, \tau)$ using the conformity score $R_i(y_c) = y_i - \hat{y}_i(y_c)$. As previously stated, $1-\pi(y_c, \tau)$ provides a $p$-value associated with \eqref{eqn:conf-permute-2}. Thus, $1-\pi(y_c,1/2)$ is analogous to the mid $p$-value, which acts a continuity correction for tests involving discrete test statistics. We point the interested reader to \cite{lancaster1949combination, lancaster1961significance}, \cite{ barnard1989alleged} and \cite{routledge1992resolving} for additional details on the mid $p$-value. We set $\tau = 1/2$ for the computation of our conformal predictive distributions throughout the remainder of this paper. While we have generalized conformal predictive probabilities for the event $y_{n+1} \le y^*$, we focus on the case where $y^*$ is equal to zero in later sections and instead describe probabilities associated with the event $y_{n+1} > 0$, which represent win probabilities when $y_{n+1}$ is a margin of victory. \subsection{Other Event Probability Methods} \label{sec:other-wp-methods} We specifically outline two competing methods to conformal predictive distributions: event probability through linear regression and event probability through logistic regression. \subsubsection{Event Probability Through Linear Regression} \label{sec:norm-wp} We can estimate the expected value of some new observation $y_{n+1}$ using \eqref{eqn:lin-model}, but additional assumptions are required to provide event probabilities. In linear regression, the error term $\epsilon_{i}$ is traditionally assumed to be a mean-zero, normally distributed random variable with variance $\sigma^2 < \infty$. Together, these assumptions with independence among error terms make up a Gauss-Markov model with normal errors (GMMNE). A least-squares estimate for the expectation of $y_{n+1}$, $\hat{y}_{n+1}$, is $x'_{n+1}\hat{\beta}$ where $\hat{\beta} = (X'X)^{-1}X'y$ when $X$ is a full rank $n \times p$ matrix of covariates. Given the assumption of a GMMNE, $\hat{y}_{n+1}$ is normally distributed with mean $x'_{n+1}\beta$ and variance $\sigma^2(x'_{n+1}(X'X)^{-1}x_{n+1})$. The prediction error for observation $n+1$, $r_{n+1} = y_{n+1} - \hat{y}_{n+1}$, is also normally distributed with mean zero and variance $\sigma^2(1 + x'_{n+1}(X'X)^{-1}x_{n+1})$. Dividing $r_{n+1}$ by its estimated standard error then yields a $t$-distributed random variable. Thus, we can describe probabilities for events of the form $y_{n+1} > s$ using the standard predictive distribution \begin{equation} \mathbb{P}(y_{n+1} > s) = 1 - F_{t,n-p}\Bigg(\frac{s-\hat{y}_{n+1}}{\hat{\sigma}\sqrt{1+ x'_{n+1}(X'X)^{-}x_{n+1}}}\Bigg), \label{eqn:pivot} \end{equation} \noindent where $\hat{\sigma}^2 = y'(I - X'(X'X)^{-1}X'y/(n-p)$ is the usual unbiased estimator of the error variance $\sigma^2$, and $F_{t,n-p}$ is the cumulative distribution function for a $t$-distributed random variable with $n - p$ degrees of freedom \citep{wang2012fiducial, vovk2019nonparametric}. \subsubsection{Event Probability Through Logistic Regression} While linear regression allows for an estimate of $\mathbb{P}(y_{n+1} > 0)$ based on assumptions related to the random error distribution, we can also generate probability estimates explicitly through logistic regression. Suppose we still have observations $D_n$. We define a new random variable $z_i$ such that $z_i = \mathbb{I}\{y_i > 0\}$. Instead of assumptions related to the distribution of the random error term $\epsilon_i$, we assume a relationship between the expectation of $z_i$, defined as $p_i$, and the covariates $x_i$ such that $\log \big( \frac{p_i}{1-p_i} \big) = x_i'\beta$. Then, we can then derive an estimate for $p_i$, \begin{equation} \label{eqn:bradley-probs} \hat{p}_{i} = \frac{e^{x_{i}'\hat{\beta}}}{1 + e^{x_{i}'\hat{\beta}}}, \end{equation} \noindent where $\hat{\beta}$ is the maximum-likelihood estimate for $\beta$ under the assumption that $z_1, \hdots, z_n$ are independent Bernoulli random variables. \subsection{Application to Win Probability in Sports} \label{sec:sports-app} We now extend the methods outlined in Section \ref{sec:conf-event} and Section \ref{sec:other-wp-methods} to a sports setting for the purpose of generating win probabilities. Specifically, we wish to identify win probabilities for some future game between a home team $u$ and away team $v$. The method of generating win probabilities in our case are made possible through the estimation of team strengths. One of the earliest methods for estimating relative team strength comes from \cite{harville1977ranks, harville1980predictions}, which uses the \textit{margin of victory} (MOV) for each game played. We focus on the initial linear model \begin{equation} y_{uv} = \mu + \theta_u - \theta_v + \epsilon_{uv}, \label{eqn:lin-model-sports1} \end{equation} \noindent where $y_{uv}$ represents the observed MOV in a game between team $u$ and $v$ ($u \ne v$), with the the first team at home and the second away, $\theta_u$ represents the relative strength of team $u$ across a season, $\mu$ can be interpreted as a ``home court" advantage parameter, and $\epsilon_{uv}$ is a mean-zero error term. We can align \eqref{eqn:lin-model-sports1} with \eqref{eqn:lin-model} and identify games across different periods, e.g., games happening in a given week, by assuming \begin{equation} y_{uvw} = x_{uvw}'\beta + \epsilon_{uvw}, \label{eqn:lin-model-sports} \end{equation} \noindent where $y_{uvw}$ represents the observed MOV in a game between team $u$ and $v$ ($u \ne v$) in period $w$, $\beta$ is the parameter vector $(\mu, \theta_1,\hdots,\theta_{p-1})'$, $\epsilon_{uvw}$ is a mean-zero error term, and $x_{uvw}$ is defined as follows. For $i = 1, \hdots, p$, let $e_t$ be the $t$-th column of the $p \times p$ identity matrix, and let $e_{p+1}$ be the $p$-dimensional zero vector. Then, $x_{uvw}= e_1 + e_{u+1} - e_{v+1}$ for a game played on team $u$'s home court or $x_{uvw}= e_{u+1} - e_{v+1}$ for a game played at a neutral site. Without loss of generality, we estimate team strengths under model \eqref{eqn:lin-model-sports} relative to an arbitrarily chosen baseline team. Let $\hat{\theta}_u$ be element $u$ + 1 of the least squares estimate for $\beta$ under model \eqref{eqn:lin-model-sports}, and define $\hat{\theta}_p = 0$. Then $\hat{\theta}_u - \hat{\theta}_v$ is the estimated margin of victory for team $u$ in a neutral-site game against team $v$, and $\hat{\theta}_1, \hdots, \hat{\theta}_p$ serve as estimated strengths of teams $1, \hdots, p$, respectively. The rank order of these estimated team strengths provides a ranking of the $p$ teams. By the definition of $y_{uvw}$, the probability that $y_{uvw}$ is greater than zero is the probability of a positive MOV, representing a win for the home team. Thus, with the assumption of \eqref{eqn:lin-model-sports}, we can now describe the event probability methods outlined in Section \ref{sec:conf-event} and Section \ref{sec:other-wp-methods} as they relate to win (and loss) probabilities in sports. The different model assumptions do not change the inherent construction of event probability estimates with CPDs. We can align CPDs with model \eqref{eqn:lin-model-sports} by defining \begin{equation} \pi_w(y_c, \tau) = \frac{1}{n_w+1} \sum_{(u,v,w)} \Big[\mathbb{I}\{R_{uvw}(y_c) < R_{n_w+1}(y_c)\} + \tau\mathbb{I}\{R_{uvw}(y_c) = R_{n_w+1}(y_c)\}\Big], \label{eqn:cpd-week} \end{equation} \noindent where $n_w$ is the number of observations up to and including period $w$, $x_{n_w + 1}$ is the covariate vector associated with our game of interest, $R_{uvw}(y_c)$ is constructed using the using the prediction $\hat{y}_{uvw}(y_c)$ and $R_{n_w + 1}(y_c)$ is the conformity score associated with $(x_{n_w + 1}, y_c)$. We call the construction of win probability through CPDs \textit{conformal win probability}. As discussed in Section \ref{sec:cpds}, we use a mid $p$-value approach, selecting $\tau = 1/2$ for our work. To provide further intuition for the the use of conformal win probability, consider a women's basketball game between home team Baylor and away team Oregon State, two highly ranked teams during the 2019-2020 season (see Section \ref{sec:results} for more results related to the top women's teams). We wish to estimate probabilities associated with margins of victory for this particular game. For a specific margin of victory, e.g., a margin of victory of five, $\pi_w(5,\tau)$ is a probability estimate of the event $y_{n+1} \le 5$, which represents a margin of victory of less than or equal to five. Additionally, an estimate for the probability that Baylor wins, i.e., the margin of victory is greater than zero, is $1-\pi_w(0,\tau)$. Figure \ref{fig:cpd-baylor} shows the conformal predictive distribution for margin of victory in the case of Baylor vs. Oregon State for the 2019-2020 season. Note that the distribution in Figure \ref{fig:cpd-baylor} has jumps that are too small to be visible. Thus, the distribution is nearly continuous. It is straightforward to reassign probability so that the support of the conformal predictive distribution lies entirely on non-zero integers to match the margin of victory distribution. However, our reassignment does not effect our win probability estimate, so we omit the details here. With the additional assumptions of mean-zero, independent, normally distributed error terms under \eqref{eqn:lin-model-sports}, the probability construction shown in \eqref{eqn:pivot} becomes \begin{equation} \label{eqn:t-wp-sports} 1 - F_{t,n_w-p}\Bigg(\frac{-\hat{y}_{uvw}}{\hat{\sigma}\sqrt{1+ x'_{uvw}(X_{w-1}'X_{w-1})^{-1}x_{uvw}}}\Bigg), \end{equation} \noindent where $X_w$ is the matrix of covariates up to and including period $w$. For logistic regression, we could instead assume \begin{equation} \label{eqn:bradley-param2} \log \bigg( \frac{p_{uvw}}{1-p_{uvw}} \bigg) = x_{uvw}'\beta, \end{equation} \noindent where $p_{uvw}$ is the probability that $y_{uvw}$ is greater than to zero. Then, $p_{uvw}$ is the probability that home team $u$ wins against away team $v$ in period $w$. Similar approaches to \eqref{eqn:bradley-param2} are seen in \cite{bradley1952rank} and \cite{lopez2015building}. The interpretation for $\theta_u - \theta_v$ under model \eqref{eqn:bradley-param2} is no longer the strength difference between teams $u$ and $v$ in terms of MOV, but rather the $\log$-odds of a home team victory when home team $u$ plays away team $v$ at a neutral site. As in linear regression, the rank order of the estimates of the $\theta$ parameters obtained by logistic regression provides a ranking of the teams. \section{Application to March Madness} \label{sec:results} The following section relays the results of the application of conformal win probabilities to the 2019-2020 NCAA Division 1 basketball season. We include estimates of team strengths, probabilities of making the March Madness field, tournament win probabilities, and a comparison of the win probability methods outlined in Section \ref{sec:wp-methods}. \subsection{Overall Team Strengths for 2019-2020 Season} \label{sec:ranks} The regular season ranks and estimated team strengths for the top ten women's and men's teams are shown in Table \ref{tab:end-of-season-ranks-top-women} and Table \ref{tab:end-of-season-ranks-top-men}, respectively. We provide additional 2019-2020 rankings from different sources for comparison, including Associated Press (AP), NCAA Evaluation Tool (NET), KenPom (KP), Ratings Percentage Index (RPI), and College Sports Madness (CSM). \begin{table}[h] \centering \caption{Top 10 NCAA women's teams for 2019-2020 season} \begin{tabular}{c|c|c|c|c|c} \hline Team & Estimated Strength & Rank & AP & RPI & CSM \\ \hline Baylor & 40.68 & 1 & 3 & 4 & 4 \\ South Carolina & 40.30 & 2 & 1 & 1 & 1 \\ Oregon & 39.32 & 3 & 2 & 2 & 2 \\ Maryland & 37.90 & 4 & 4 & 3 & 6 \\ Connecticut & 36.17 & 5 & 5 & 4 & 3 \\ Mississippi St. & 29.07 & 6 & 9 & 10 & 12 \\ Indiana & 27.91 & 7 & 20 & 14 & 19 \\ Stanford & 27.82 & 8 & 7 & 6 & 7 \\ Louisville & 26.36 & 9 & 6 & 7 & 6 \\ Oregon State & 25.80 & 10 & 14 & 20 & 17 \\ \hline \end{tabular} \label{tab:end-of-season-ranks-top-women} \end{table} \begin{table}[h] \centering \caption{Top 10 NCAA men's teams for 2019-2020 season} \begin{tabular}{c|c|c|c|c|c} \hline Team & Estimated Strength & Rank & AP & NET & KP \\ \hline Kansas & 25.26 & 1 & 1 & 2 & 1 \\ Gonzaga & 22.79 & 2 & 2 & 1 & 2 \\ Duke & 22.31 & 3 & 11 & 6 & 5 \\ Michigan State & 20.54 & 4 & 9 & 7 & 7 \\ Baylor & 20.44 & 5 & 5 & 5 & 3 \\ Arizona & 19.39 & 6 & - & 14 & 19 \\ San Diego State & 18.65 & 7 & 6 & 4 & 6 \\ West Virginia & 18.43 & 8 & 24 & 17 & 10 \\ Ohio State & 18.22 & 9 & 19 & 16 & 8 \\ Dayton & 18.07 & 10 & 3 & 3 & 4 \\ \hline \end{tabular} \label{tab:end-of-season-ranks-top-men} \end{table} \noindent The large difference between strengths for the top men's and women's team is due to the difference in team parity between the two leagues, i.e., the gap in strength between the stronger and weaker women's teams is much larger than the gap between the stronger and weaker men's teams. \subsection{Probabilities of Making March Madness Field for 2019-2020 Season} \label{sec:adjustment} The cancellation of the 2020 NCAA basketball post-season prevented the completion of a majority of conference tournaments, as well as the release of final March Madness brackets to the public. At the time of cancellation, there were 20 men's and 18 women's automatic bids still undecided. Knowing the results of the (partially) completed conference tournaments allows for estimation of the probabilities of making the March Madness field as outlined in Section \ref{sec:making}. We use regular season data as well as conference tournament progress to update every team's chances of making the tournament at the time of cancellation. Table \ref{tab:women-winners} shows the tournament winners of completed conference tournaments for NCAA women's basketball. These teams have probability 1 of making the March Madness field. \begin{table}[h] \caption{Conference champions for 2019-2020 women's basketball season} \label{tab:women-winners} \centering \begin{tabular}{c|c} \hline Conference & Winner \\ \hline Atlantic-10 & Dayton \\ ACC & North Carolina St. \\ American & Connecticut \\ Big East & DePaul \\ Big Ten & Maryland \\ Horizon & IUPUI \\ Ivy League & Princeton$^*$ \\ Mountain West & Boise St. \\ Ohio Valley & Southeast Missouri St. \\ Pac-12 & Oregon \\ SEC & South Carolina \\ Southern & Samford \\ Summit & South Dakota \\ WCC & Portland \\ \hline \end{tabular} \end{table} \noindent While the Ivy League conference tournament was cancelled, Princeton was awarded an automatic bid to the 2019-2020 March Madness tournament based on their regular season performance. With the additional information provided by the outcomes of the completed conference tournaments, there are five different situations for teams as it relates to making the March Madness tournament: {\singlespacing \begin{enumerate} \item A team has already made the tournament. \item A team must win their conference tournament or relies on a small number of teams ranked below them winning their respective conference tournament to make the tournament. \item A team has already been eliminated from their conference tournament and relies on a small number of teams ranked below them winning their respective conference tournament to make the tournament. \item A team must win their conference tournament to make the tournament. \item A team cannot make the tournament. \end{enumerate} } \noindent Table \ref{tab:women-situations} shows the situations for women's teams ranked from thirty-three to sixty-four. \begin{table}[h] \centering \caption{Situations for women's bubble teams} \resizebox{\textwidth}{!}{ \begin{tabular}{c|c} \hline Situation & Teams \\ \hline 1 & Iowa St., Texas, Drake, James Madison, Missouri St., Alabama, TCU, Arizona St., Oklahoma St. \\ 2 & Kansas St. \\ 3 & Marquette, LSU, North Carolina \\ 4 & West Virginia, Oklahoma \\ 5 & all other bubble teams \\ \hline \end{tabular} } \label{tab:women-situations} \end{table} \noindent When using the rankings constructed with regular season data and model \eqref{eqn:lin-model-sports}, the Big 12 conference tournament was the only undecided tournament involving bubble teams, resulting in Kansas State\@ being the sole team in Situation 2 and West Virginia and Oklahoma as the only two teams in Situation 4. Table \ref{tab:make-tourn-probs} shows the March Madness tournament field probabilities for teams in Situations 2, 3 and 4, constructed with \eqref{eqn:cf-wp} and conformal win probability. Probabilities of making the tournament for the men's teams in Situations 2, 3, and 4 are shown in Supplementary Materials. While not listed in Table \ref{tab:make-tourn-probs}, there is a large number of teams ranked below sixty-four that also fall into Situation 4. \begin{table}[h] \caption{Probabilities of making NCAA tournament field for women's bubble teams for 2019-2020 season.} \label{tab:make-tourn-probs} \centering \begin{tabular}{c|c|c|c} \hline Team & Situation & Overall Rank & Probability \\ \hline Marquette & 3 & 41 & 0.999 \\ LSU & 3 & 42 & 0.990 \\ North Carolina & 3 & 43 & 0.874 \\ Kansas St. & 2 & 44 & 0.471 \\ West Virginia & 4 & 50 & 0.005 \\ Oklahoma & 4 & 62 & 0.005 \\ \hline \end{tabular} \end{table} \subsection{March Madness Win Probabilities} Even with the results of the completed conference tournaments, the number of potential tournament brackets remains extremely large. Thus, we forgo the enumeration of all potential brackets and instead focus on three exemplar brackets and three expert brackets to generate March Madness win probabilities. The first two brackets represent two extremes. Bracket 1 maximizes tournament parity, selecting the strongest remaining team from each conference tournament bracket, while Bracket 2 selects the weakest remaining team. Bracket 3 is constructed randomly, selecting teams based on their conference tournament win probabilities. We compare these brackets, and the March Madness win probabilities for the top teams included in these brackets, to those generated by subject matter experts. For the women, we include brackets from basketball expert Michelle Smith \citep{smithbracket2020}, \cite{csmbracket2020} and \cite{rtrpibracket2020}. Table \ref{tab:women-bracket-results} shows the different bracket win probabilities for the top ten women's teams, ranked using the ranking method outlined in Section \ref{sec:sports-app}. Exemplar bracket results for the men's 2019-2020 season are shown in Supplementary Materials, with brackets generated by NCAA basketball experts Andy Katz \citep{katzbracket2020}, Joe Lunardi \citep{lunardiracket2020} and Jerry Palm \citep{palmbracket2020}. Figure \ref{fig:bracket-wp-range} shows the ranges of win probabilities across all exemplar brackets for the top 25 teams. Figure \ref{fig:bracket-expert-wp-w} shows a comparison of win probabilities across the expert generated brackets. Figure \ref{fig:women-cdf} compares cumulative NCAA tournament win probabilities across brackets for the top 25 women's teams. The cumulative NCAA tournament win probabilities for the top 25 men's teams are included in Supplementary Materials. \begin{table}[h] \centering \caption{March Madness win probabilities given exemplar brackets for top ranked women's teams.} \resizebox{.8\textwidth}{!}{ \begin{tabular}{c|c|c|c|c|c|c} \hline Team & Bracket 1 & Bracket 2 & Bracket 3 & Smith & CSM & RTRPI \\ \hline Baylor & 0.289 & 0.289 & 0.289 & 0.277 & 0.303 & 0.221 \\ South Carolina & 0.278 & 0.277 & 0.278 & 0.267 & 0.276 & 0.304 \\ Oregon & 0.212 & 0.212 & 0.212 & 0.220 & 0.195 & 0.208 \\ Maryland & 0.124 & 0.125 & 0.124 & 0.143 & 0.125 & 0.171 \\ Connecticut & 0.069 & 0.069 & 0.069 & 0.069 & 0.073 & 0.071 \\ Mississippi St. & 0.008 & 0.008 & 0.008 & 0.006 & 0.007 & 0.007 \\ Indiana & 0.005 & 0.005 & 0.005 & 0.003 & 0.004 & 0.002 \\ Stanford & 0.005 & 0.005 & 0.005 & 0.005 & 0.007 & 0.006 \\ Louisville & 0.002 & 0.002 & 0.002 & 0.003 & 0.002 & 0.002 \\ Oregon St. & 0.002 & 0.002 & 0.002 & 0.001 & 0.001 & 0.001 \\ \hline \end{tabular} } \label{tab:women-bracket-results} \end{table} In general, tournament probabilities do not change drastically across brackets. However, we do see larger probability ranges associated with the top women's teams. Specifically, the tournament win probability for Baylor, the highest ranked team with respect to our ranking, drops to 0.221 with the RTRPI expert bracket, as opposed to 0.288 and 0.303 for the Smith and CSM brackets, respectively. Additionally, the overall tournament win probability for South Carolina increases to 0.304 with the RTRPI bracket. Figure \ref{fig:rbr-wp-w} shows round-by-round win probabilities for Baylor and South Carolina for each of the expert brackets. We see that Baylor's RTRPI round-by-round win probability becomes lower than South Carlolina's after the second round, dropping to 0.856, compared to South Carlolina's 0.927. The largest decrease occurs during the Elite Eight, where Baylor's probability of moving on from the Elite Eight (under the RTRPI bracket) is 0.600, compared to South Carolina's 0.819. This is due to Connecticut's placement in the same region as Baylor, with each team seeded as the 1-seed and 2-seed, respectively. In the other expert brackets, Connecticut was placed in the same region as Maryland. The other brackets keep the round-by-round win probabilities for these two teams relatively stable. \subsection{Win Probability Calibration} \label{sec:dfpi-effectiveness} In order to assess the win probability estimates generated using the methods outlined in Section \ref{sec:wp-methods}, we compare estimates for previous NCAA basketball seasons, including the shortened 2019-2020 season. We use the regular season games to estimate the team strengths and then construct win probabilities for each game of post-season play. Ideally, the estimated probability for an event occurring should be \textit{calibrated}. A perfectly calibrated model is one such that \begin{equation} \label{eqn:calibration} \textrm{E}_{\hat{p}} \Big[\bigg|P \Big( \hat{z} = z|\hat{p} = p \Big) - p \bigg| \Big] = 0, \end{equation} \noindent where $z$ is an observed outcome, $\hat{z}$ is the predicted outcome, $\hat{p}$ is a probability estimate for the predicted outcome, and $p$ is the true outcome probability \citep{guo2017calibration}. In the NCAA basketball case \eqref{eqn:calibration} implies that if we inspect, say, each game with an estimated probability of 40\% for home team victory, we should expect a home team victory in 40\% of the observed responses. We can assess calibration in practice by grouping similarly valued probability estimates into a single bin and then calculating the relative frequency of home team victories for observations within each bin. For visual comparison of calibration, Figure \ref{fig:calibration-all-byhome} shows a reliability plot for the win probability estimates generated using the methods outlined in Section \ref{sec:wp-methods} with bin intervals of width 0.025. From Figure \ref{fig:calibration-all-byhome} we can see that while the methods are comparable for higher win probability estimates, the conformal win probability approach is much better calibrated for lower win probability estimates. A majority of observed relative frequencies for conformal win probabilities fall closer to the dotted line, signifying better calibration than the other two methods. To provide a numerical interpretation of calibration, we compare the three probability estimation approaches mentioned in Section \ref{sec:wp-methods} using $\log$-loss \begin{equation} \log L(\hat{p}, z) = z\log(\hat{p}) + (1-z)\log(1-\hat{p}), \end{equation} \noindent which generates loss for each individual win probability estimate rather than a group of binned estimates. $\log$-loss has been shown to have strong empirical and theoretical properties as a loss function \citep{painsky2018universality, vovk2015fundamental}. Figure \ref{fig:log-loss} shows the \textit{relative} $\log$-loss, i.e., the ratio of the $\log$-loss for one method to the minimum $\log$-loss across all methods, broken up by season and league. We see that for all year-league combinations except for the women's 2015-2016 season and men's 2020-2021 season, conformal win probabilities performed better than the other two methods. Additionally, even when conformal win probabilities are not the best performing approach, they still result in a $\log$-loss within one percent of the best performing approach. Table \ref{tab:log-loss-all} shows the results for the entire collection of probability estimates for each league. \begin{table}[h] \centering \caption{Relative $\log$-loss for NCAA men's and women's basketball win probability estimates by league.} \label{tab:log-loss-all} \begin{tabular}{c|c|c|c} \hline & \multicolumn{3}{c}{Method} \\ \hline League & Conformal & Linear & Logistic \\ \hline Women & 1.00 & 1.01 & 1.02 \\ Men & 1.00 & 1.02 & 1.03 \\ \hline \end{tabular} \end{table} \section{Conclusion} \label{sec:conclusion} The cancellation of March Madness in 2020 resulted in disappointment for many across the country, fans and athletes alike. We explored win probabilities as they relate to the NCAA tournament, delivering a closed-form calculation for probabilities of making the tournament, given a set of team strengths estimated from game outcomes. We introduced conformal win probabilities and compared to win probabilities derived from logistic regression and linear regression assuming normally distributed, independent, mean-zero errors. Conformal win probabilities were superior to those obtained from the other methods. For the application in this paper, we limited our discussion to model \eqref{eqn:lin-model-sports1}. Each of the win probability methods described in Section \ref{sec:wp-methods} can be applied to more complex models, so future work could focus on comparing these methods in a more complex setting. One example of a model we could assume is \begin{equation} y_{uvw} = \mu + \theta_{uw} - \theta_{vw} + \epsilon_{uvw}, \label{eqn:lin-model-sports-fused} \end{equation} \noindent where $\theta_{uw}$ is the strength of team $u$ during week $w$. \eqref{eqn:lin-model-sports-fused} is rank deficient, so we could consider a fused lasso approach \citep{tibshirani2005sparsity}, where the objective function is penalized by $\lambda \sum_u \sum_{w = 1}^{W-1} |\theta_{uw} - \theta_{uw+1}|$ to encourage the difference in parameter values from one period to the next to be small for each team. This approach allows for relative team strengths to change across a season, rather than estimating one average strength for each team over the course of the entire season. Additionally, we could incorporate team ``match-up" statistics, e.g., the difference between the teams' offensive or defensive efficiencies, rather than solely estimating a win probability based on the teams playing. The focus on event probabilities can also be extended to a betting scenario. In this paper, the event probability of interest was a win (or loss) for a specific team. This event corresponds to a ``moneyline" bet in sports betting, i.e., betting on a specific team to win a game. Another type of bet is the ``spread" bet, which accounts for differences in the strengths of two teams, either through the adjustment of a point spread or the odds associated with a particular team. The spread is chosen by bookmakers so that the total amount of money bet on the spread of the favorite is near that bet against favorite (as opposed to being representative of, say, the expected margin of victory). For example, suppose we have an upcoming contest between two teams, a favorite and an underdog, with a spread of negative three. A bettor taking the spread on the favorite would win the bet if the favorite wins by more than three points, while a bettor taking the spread against the favorite would win the bet if the underdog wins or loses by less than three points. In order to determine whether to bet on the favorite or the underdog in a spread bet, we can utilize conformal win probabilities. Specifically, calculating $\pi(-s,1/2)$, where $s$ is the spread for a game of interest, generates an estimate of the probability that the margin of victory (favorite score - underdog score) will be less than or equal to $-s$. One other major simplification we utilize in this paper is that estimated team strength does not change following the regular season. Thus, we eliminate the potential for teams to receive a higher (or lower) overall rank based on their conference tournament performance. While this simplifies the analysis, allowing for teams to move up or down in rank might more closely match the March Madness selection committee's actual process. \bibliographystyle{apalike}
{ "attr-fineweb-edu": 2.556641, "attr-cc_en_topic": 0, "domain": "arxiv" }
BkiUdTY4dbgg2ozrH-Jp
\section{Introduction} Using advanced sports analytics statistics and Machine Learning (or well-crafted mathematical) models, we can predict match outcomes for a variety of sports -- achieving predictive accuracies that are better than chance, a home-field advantage rule-of-thumb, or majority gut feeling. While this is an interesting (and in our opinion worthwhile) academic exercise, however, the question whether such work is actually useful becomes difficult to avoid. Or, to paraphrase a practitioner of sports betting: ``You should compare yourself to betting agencies and see whether you can make money!''. In this work, we intend to do exactly this: using the example of three US sports attracting large betting volumes: \begin{itemize} \item the main post-season tournament of university (NCAA) basketball -- \$ 9.2 billion bet (\$ 262 million legally), \item both regular and post-season of the National Basketball Association (NBA), and \item regular and post-season of the National Football League (NFL) -- Super Bowl alone \$ 4.1 billion bet (\$ 132 million legally) \end{itemize} we show not only predictive accuracies but also accumulated sports betting outcomes had we used their predictions to consistently place bets this year. We find rather varying outcomes, and, in particular, that very similar accuracies can lead to strongly diverging monetary payoffs. To explore this phenomenon further, we relate this to the way sports betting is handicapped. In the following section, we discuss sports betting, and in particular how money-lines should be interpreted and are calibrated. We then discuss our experimental set-up before discussing hypothetical betting outcomes for the NCAAB post-season, the NBA season, and NFL season, respectively. \section{Sports betting} To understand the following discussions, it is necessary to understand money-lines offered by operators of sports betting services (\emph{sports books}), and to have some insight into how those money lines are derived. US sports books offer two ways of betting on match outcomes: \begin{enumerate} \item \emph{Over-under}, where bettors attempt to correctly foresee the difference between points score. \item \emph{Money-line} betting, where bettors attempt to correctly divine the eventual winner of a match. \end{enumerate} Given that we have had weak results with trying to predict match scores in the past, we ignore the first setting for now, and focus on the second one, which allows us to relate binary predictions to monetary values. A money-line offered by a sports book for a particular match typically takes the form shown in the first row of Table \ref{tab:money-lines}. \begin{table} \begin{centering} \begin{scriptsize} \begin{tabular}{ccccc} Match-up& Favorite (FAV)& Underdog (DOG)& FAV-Line& DOG-Line\\\hline Detroit Pistons at Atlanta Hawks & ATL & DET &300 &240\\ Utah Jazz at Detroit Pistons&DET&UTH&110&-110\\ \end{tabular} \end{scriptsize} \caption{NBA money-line examples\label{tab:money-lines}} \end{centering} \end{table} For each match, a probable winner (the Favorite) is identified, making the other team the probable loser (Underdog). The associated lines indicate the possible pay-out: \begin{itemize} \item The FAV-Line indicates how much money one would \textbf{have to bet} to \emph{win} \$ 100. \item The DOG-Line indicates how much money one would \emph{win} if one \textbf{were to bet} \$ 100. \end{itemize} To make those two settings comparable, we can reformulate the FAV-Line since betting \$ 100 would net the bettor \$ $10000/$FAV-Line. For the first example given in Table \ref{tab:money-lines}, this means that Atlanta was considered the favorite and betting \$ 100 on them and winning would have paid out \$ 33.33. Detroit was expected to lose but if one had bet on them and they had defied predictions, one would have won \$ 240. Sports books do their best to calibrate those lines, trying to balance two attractions for bettors: \begin{enumerate} \item Betting on the favorite is less risky and therefore has a higher chance to pay out. \item Betting on the underdog and winning will lead to a higher absolute pay-out. \end{enumerate} Ideally, a match's handicap attracts bettors in such way that the wins that the sports book needs to pay out are offset by the losses of those who bet on the other team (minus some profit for the sports book itself). This can be most clearly seen in the second example in Table \ref{tab:money-lines}, a so-called \emph{Pick 'em}. This is a match where the sports book operators do not have enough information to reliably predict one team as winning so betting on either one gives the same pay-off: \$ $10000/110 = 90.90$. Given a large enough number of bettors, one would expect that roughly half bets on either team and since the sports book pays out \$ 91 for every \$ 100 bet, it would stand to make a profit of 9\%. \section{Experimental set-up} Since we are going to use the same general set-up in the succeeding sections, we describe it here. For each predictive setting, we have collected the money lines for all matches from \url{http://www.vegasinsider.com/}. The site lists the money-lines offered by the major sports books operating out of Las Vegas, Nevada, which occasionally differ slightly from each other. Additionally, money-lines vary with time, either due to the influx of new information (injuries, player arrests, coaches' announcements), or in reaction to bettors' behavior: too much interest in one team will lead to adjustment in favor of the other one. To avoid undue optimism when evaluating our predictors, we selected the most conservative line for each match. If a match is, for instance, listed once with FAV-Line=175, DOG-Line=155 and once with FAV-Line=165, DOG-Line=145, we will choose the latter since it would pay out less, no matter which prediction we make. We use our models' predictions to select on which team to place the bet, and assume that we bet \$ 100 on every match in the time period. Correctly predicting a win by the favorite increases the model's winnings by \$ $10000/$FAV-Line, correctly predicting an underdog's win by \$ DOG-Line, and correctly predicting the winner of a Pick 'em by \$ 90.90. Incorrectly predicting a match outcome decreases winnings by \$ 100. For the sake of convenience, we predict matches, and tally up winnings, per day. The preceding paragraph illustrates an important dynamic -- incorrectly predicting is always bad but not all correct predictions are equal: \begin{itemize} \item Correctly predicting underdog wins is the most attractive option and depending on the money-line can balance out several incorrect predictions. \item Correctly predicting Pick 'ems still gives a relatively high pay-out. \item Correctly predicting favorite wins, on the other hand, needs to happen at a high rate to make up for incorrect predictions. \end{itemize} \section{NCAAB predictions (and bets)} In our first setting, we consider the NCAAB post-season tournament, also referred to as ``March Madness'', for the interest and amount of sports betting it generates. This is the smallest of the settings we discuss since the tournament involved only 67 matches. We use the \emph{Adjusted Efficiencies} pioneered by Ken Pomeroy \cite{kenpom}, combined into a weighted average over the season, to encode teams, as well as season-level statistics such as the win percentage, margin of victory, point differential etc. For the full description of statistics, see \cite{zimmermann16sam}. We evaluate three classifiers: \emph{Na{\"i}ve Bayes} (NB), \emph{Multi-layer Perceptron} (ANN), and a simplified version of Ken Pomeroy's predictor based on the Pythagorean Expectation (KP). We referred to this classifier as ``simplified'' since we did not estimate the involved coefficients ourselves but based them on the discussions found on his blog. For the details of this classifier, see as well \cite{zimmermann16sam}. NB and ANN are used in their Weka \cite{weka} implementations, with default parameters, except that for NB \emph{Kernel estimator} is set to \emph{true}. Before we discuss the performance of our classifiers, we need to establish the baseline. This means basing ourselves on the money-lines offered by sports books by assuming that we always follow the lead of the money-line. Concretely, if the team designated as favorite wins, we count this as a correct predictions for ``Vegas'', if the underdog wins, an incorrect one, with winnings accrued as described above. The main problem for this evaluation is posed by Pick 'ems: since the money lines give no indication but we would have to make a prediction, this amounts to flipping a coin for each Pick 'em. In the best case, we get each of those coin flips right, in the worst case, every single one wrong. Since the difference between getting a Pick 'em right and wrong amounts to \$ 190.90 per match (the lost gain + the \$ 100 bet), this leads to a large difference over the course of a season. Typically, we would assume to get half the coin flips right, which we report as expected accuracy and pay-out in Table \ref{tab:ncaab-vegas}. \begin{table} \centering \begin{tabular}{c|c|c|c|c|c|c|c} \multicolumn{2}{c}{w/o Pick 'ems} & \multicolumn{6}{|c}{w/ Pick 'ems (5)}\\ Accuracy & Pay-out & Best Acc. & Pay-out & Exp. Acc & Pay-out & Worst Acc. & Pay-out\\\hline 0.7419 & 30.26 & 0.7611 & 484.76 & 0.7313& 7.51 & 0.6865& -469.73 \end{tabular} \caption{Predictive accuracies and betting pay-outs for ``Vegas'' for the NCAAB post-season\label{tab:ncaab-vegas}} \end{table} We can see that always picking favorites would have gotten about 3/4 of the matches right, and paid out approximately \$ 30. Flipping coins on the Pick 'ems can lead to winnings of almost \$ 500 but also to losses of the same magnitude. Especially for so few (five) Pick 'ems, this is a very real risk. \begin{table} \centering \begin{tabular}{ccc} \begin{tabular}{l||c|c|c} Classifier & NB & ANN & KP\\\hline Accuracy & 0.6865 & 0.6417 & 0.7014\\ Pay-out & 293.52 & -605.92 & -231.34\\ \end{tabular} &\ \ \ \ & \begin{tabular}{l|c|c|c} Classifier & Favs & Dogs & Pick 'ems (of 5) \\\hline NB & 39 & 5 & 2 (0.4)\\ ANN & 38 & 2 & 3 (0.6)\\ KP & 43 & 0 & 4 (0.8)\\ \end{tabular} \end{tabular} \caption{Predictive accuracies and betting pay-outs for three predictive models(left), Correct predictions by money-line characterization (right) for the NCAAB.\label{tab:ncaab-results}} \end{table} The results for the predictive models are shown on the left-hand side of Table \ref{tab:ncaab-results}. Two things are immediately noticeable: 1) the relative high predictive accuracies -- the KP model performs almost as well as the expected ``Vegas'' result, and 2) that this high accuracy does not translate into a high pay-out. Indeed, while Na{\"i}ve Bayes performs 1.5 percentage points worse than KP, it shows solid gains (better than using moneylines to bet only on favorites), while KP loses money. We find some explanation for this phenomenon by looking at the right-hand side of Table \ref{tab:ncaab-results}. NB gets five upsets right, and even though KP is stronger in correctly predicting favorite wins and close Pick 'ems, this makes all the financial difference. The winnings curve for the different classifiers can be found in the appendix and shows that the winning behavior is rather erratic. Especially the ANN, which at some point posts winnings similar to the final outcome for NB, drops off into steep loss. But even the NB \emph{could} have returned twice of the final pay-out, a peak that is flanked by losses, however. \section{NBA predictions (and bets)} Our second setting concerns the NBA. We predicted matches for the 2016 regular and post-season, using NB, ANN, \emph{Random Forest} (RF),\footnote{Which we omitted for the NCAAB, as its accuracy is too weak.} as well as the simplified Ken Pomeroy model (KP). Teams were represented by the same statistics as for the NCAAB predictions. We did not predict the first two days of play since at that time a predictor would not have statistics for all teams. As in the preceding section, we need to establish the baseline, shown in Table \ref{tab:nba-vegas}. \begin{table}[ht] \centering \begin{tabular}{c|c|c|c|c|c|c|c} \multicolumn{8}{c}{Regular + post-season}\\ \multicolumn{2}{c}{w/o Pick 'ems} & \multicolumn{6}{|c}{w/ Pick 'ems (115)}\\ Accuracy & Pay-out & Best Acc. & Pay-out & Exp. Acc & Pay-out & Worst Acc. & Pay-out\\\hline 0.7121 & -2374.16 & 0.7375 & 9125.84 & 0.6937 & -1857.3 & 0.6492 & -12828.81\\ \end{tabular} \caption{Predictive accuracies and betting pay-outs for ``Vegas'' for the NBA\label{tab:nba-vegas}} \end{table} Following the money line over the course of the entire season, while ignoring the Pick 'ems, would lead to a very respectable accuracy but also to a monetary loss. At {\raise.17ex\hbox{$\scriptstyle\sim$}}1200 matches, the loss per match is only about \$ 1 yet over the course of the season this accrues. Getting half the Pick 'ems right does of course not improve this, even though the accuracy would stay high. \begin{figure}[ht] \centering \includegraphics[width=\linewidth]{nba-curves.png} \caption{Classifier winnings over the course of the season, NBA\label{nba-winnings}} \end{figure} Figure \ref{nba-winnings} plots the development of the different classifiers' winnings over the course of the season, the legend is annotated with predictive accuracies. None of them show a net positive payout, and with the exception of the KP model, they all drop rather low. Notably, they all recover to a certain degree, however, meaning that one could win money if one could determine \emph{when to start betting}. Plots showing the difference between the trough and the best result, and its magnitude, can be found in the appendix (Figures \ref{nba-winnings-nb}--\ref{nba-winnings-kp}). For the ANN and KP, the best result is in the post-season, for NB and the RF in the regular season, even though NB and KP have the same regular season accuracy. Table \ref{tab:nba-picks} shows why: while KP strongly outperforms NB in getting favorites right (as for the NCAAB), it underperforms when it comes to Pick 'ems. Pick 'ems are clearly the most difficult matches to predict, and with KP combining three estimated influences -- adjusted efficiencies, the coefficient in the Pythagorean Expectation, and the home-court adjustment -- small errors can spiral. The trough-peak difference aligns with the amount of underdog/pick 'em predictions. \begin{table} \begin{centering} \begin{tabular}{l|c|c|c||c|c|c} & \multicolumn{3}{c|}{Regular season}&\multicolumn{3}{|c}{Post-season}\\ Classifier & Favs & Dogs & Pick 'ems (109)&Favs & Dogs & Pick 'ems (6)\\\hline NB & 691 & 57 & 48 (0.44)&49 &5&1 (0.16)\\ ANN & 707 & 60 & 22 (0.20)&57&6&0\\ RF & 685 & 61 & 28 (0.26) &47&4&0\\ KP & 725 & 59 & 12 (0.11) &58&5&0\\ \end{tabular} \caption{Correct predictions by money-line characterization for the NBA\label{tab:nba-picks}} \end{centering} \end{table} \section{NFL predictions (and bets)} This season marked our first attempt at NFL predictions. As for basketball, the main question to answer concerns team representations. In basketball matches, individual events are \emph{possessions} that lead to either points, or a number of possibly possession-changing events. In American Football, on the other hand, individual events are \emph{Downs} and their outcome is mainly measured in \emph{yards gained} (or lost). While the more or less discrete results in basketball can be read off the final box score, the fluctuation of yards in football is less well captured. To address this, Football Outsiders have proposed \emph{Defense-adjusted Value Over Average} and \emph{Defense-adjusted Yards Above Replacement} \cite{foDVOA}, both of which consider the outcome of each down in relation to the league-wide average against a particular \emph{defensive alignment}. Since this requires access to and work with play-by-play statistics, we forwent this approach and instead evaluated several other statistics over past seasons: \begin{itemize} \item Basic Averages -- all the statistics available from a typical box score at \cite{footballReference} under "team stats", normalized for 65 possessions, and averaged in a weighted manner (recent games have more weight), both offensively (scored/gained/ committed) and defensively (allowed/caused). This follows similar reasoning as possession-based normalizing and averaging in basketball. \item Opp. Averages -- same as above but for the opponents that have been played so far. This is supposed to help gauge the competition. \item Adjusted Averages -- certain offensive and defensive statistics adjusted by mirror statistics of the respective opponents. That is basically the same idea as Ken Pomeroys adjusted efficiencies \cite{kenpom}. \item SRS -- the "simple rating system" information (SRS, SoS) as described at \cite{SRS}, with the difference that the averaging is weighted, so not divided by number of matches. \end{itemize} Page limitations prevent us from showing the full results of the evaluation here. We intend to write this down formally in the future but for the time being, the details can be found at \cite{sdmaz15NFL-representations}. After additional evaluation during the season, we settled on using Basic+Opponents' Averages for the NB, and Adjusted Averages for ANN and RF. We also evaluated the SRS. We did not predict the first week's matches since for those matches, we do not have statistics for the teams at that time. Again, we need to establish the baseline (see Table \ref{tab:nfl-vegas}). \begin{table}[ht] \centering \begin{tabular}{c|c|c|c|c|c|c|c} \multicolumn{8}{c}{Regular + post-season}\\ \multicolumn{2}{c}{w/o Pick 'ems} & \multicolumn{6}{|c}{w/ Pick 'ems (29)}\\ Accuracy & Pay-out & Best Acc. & Pay-out & Exp. Acc & Pay-out & Worst Acc. & Pay-out\\\hline 0.6441 & -1215.69 & 0.6852 & 1420.68 & 0.6294 & -1251.92 & 0.5697 & -4115.42 \end{tabular} \caption{Predictive accuracies and betting pay-outs for ``Vegas'' for the NFL\label{tab:nfl-vegas}} \end{table} The baseline again shows consistent behavior: the accuracy is relatively high but if one follows money-line predictions one loses -- not much per individual game but quite a bit in the aggregate. \begin{figure}[h] \centering \includegraphics[width=\linewidth]{nfl-curves.png} \caption{Classifier winnings over the course of the season, NFL\label{nfl-winnings}} \end{figure} The results of the predictors, shown in Figure \ref{nfl-winnings}, are very interesting. The first thing to notice is that NB, using rather straight-forward statistics, achieves comparative accuracy to ``Vegas'' and a much better pay-out. In fact, its pay-out is better than that for the best-case ``Vegas''-scenario in the regular season. Even the ANN, with much lower accuracy, achieves a good pay-out. Additionally, we again see the influence of which picks to predict correctly at play: even though ANN and SRS have very similar accuracies, betting according to SRS would be a clear loss, and the difference can be explained by the fact that the ANN trades off accuracy on favorites against accuracy on underdogs (Table \ref{tab:nfl-picks}). \begin{table} \begin{centering} \begin{tabular}{l|c|c|c||c|c|c} & \multicolumn{3}{c|}{Regular season}&\multicolumn{3}{|c}{Post-season}\\ Classifier & Favs & Dogs & Pick 'ems (28)&Favs & Dogs & Pick 'ems (1)\\\hline NB & 115 & 25 & 14 (0.5)&4 &1&0\\ ANN & 98 & 29 & 15 (0.54) &5&0&1 (1.0)\\ RF & 107 & 16 & 16 (0.57) & 4&1&0\\ SRS & 111 & 18 & 14 (0.5) &4&0&1 (1.0)\\ \end{tabular} \caption{Correct predictions by money-line characterization for the NFL\label{tab:nfl-picks}} \end{centering} \end{table} A final monetary realization is that each predictor reaches a high point that comes before the end of the season. In fact, following NB all to the end of the regular season would mean forfeiting more than \$ 600, with losses for all models in the post-season. While for NBA predictions it seems to be important to know when to get \emph{in}, for the NFL is important to know when to get \emph{out} -- a decision that might be slightly easier to make. \section{Conclusion and outlook} The answer to the question posed in the title of the paper is a definitive ``Maybe!''. Once a model has been established, it can be used to place bets in a straight-forward manner. However, the NCAAB post-season contains few matches, leading to rather volatile pay-out. In the NBA, one can win but only after figuring out when to start betting. In the NFL results, finally, straight-forward use could indeed lead to a decent pay-off (admittedly, not attractive to professional gamblers), especially if one stops early enough. In all cases, the safest model seems to be a Na{\"i}ve Bayes predictor. We have tried to show one of the aspects that make a predictive model more or less well-suited for sports betting, by considering what kind of matches models predict well. In particular, a model that is not very strong in correctly predicting favorites but gets a large amount of Pick 'ems correct, or even better matches won by underdogs, would be a particular attractive tool, even if its straight-up accuracy is not impressive. We intend to explore this question further by relating models' performance to evaluations based on lift-charts and ROC-like discussions. We have the data needed for this exploration already available (and plotted) but page constraints prevent us from discussing it in this work. The final goal would of course be to shift the training of predictive models: away from maximizing predictive accuracy and towards maximizing pay-outs, which means getting border-line cases right instead of easy ones. A different direction consists of proposing which matches (not) to bet on and/or how much to bet, as has been done in \cite{DBLP:conf/scai/Langseth13,snyder2013actually} for soccer. Possible approaches include leveraging game theoretic approaches or reinforcement learning.\footnote{We thank the reviewers for this suggestion.}
{ "attr-fineweb-edu": 2.060547, "attr-cc_en_topic": 0, "domain": "arxiv" }
BkiUbaE5qsNCPep75XTH
\section{An Case Study on Rugby Dataset} \label{sec-rugby-case} We conduct a case study on Rugby Dataset~\cite{simonetto2017drawing} to demonstrate the effectiveness of our technique. It contains over 3000 tweets between the 12 Rugby teams in the Guinness Pro12 competition. Each tweet includes the involved teams and the accurate time stamp with a precision of one second. Fig.~\ref{fig-rugby-case-study} shows the timeslicing results generated by uniform timeslicing and the proposed nonuniform timeslicing approach. Both techniques divide the whole dynamic graph into the same number of intervals (i.e., 12) for a fair comparison. Uniform timeslicing divides the whole time range (Sept. 1st, 2014 to Oct. 23rd, 2015, 418 days in total) into 12 intervals of around 35 days each, as shown in Fig.~\ref{fig-rugby-case-study}a. The visual complexity across different intervals varies significantly. For example, Intervals 1, 2, 3 and 9 have sparse edges and Interval 9 even contains two disconnected graph components, revealing the infrequent interactions between the rugby teams in these time period. But some intervals like Intervals 11 and 12 of Fig.~\ref{fig-rugby-case-study}a have dense interactions between the rugby teams. There are several bursts in Intervals 11 and 12 (indicated by the top left line charts) and it is difficult to tell their order and structure, since uniform timeslicing does not take features of the data into consideration and often generates graphs with highly-aggregated edges for intervals of dense edges. On the contrary, the proposed nonuniform timeslicing approach generates a sequence of graph snapshots with a more balanced visual complexity in terms of the number of edges in each interval, as shown in Fig.~\ref{fig-rugby-case-study}b. It is still easy to recognize the overall trend of interactions among rugby teams with the help of the time range bars in the top left corner. For example, the long time range bars in Intervals 1, 2, 3 and 8 of Fig.~\ref{fig-rugby-case-study}b indicates that the interactions among the rugby teams in those time periods are infrequent. More specifically, Interval 8 (late May to late August) of Fig.~\ref{fig-rugby-case-study}b has the longest time range bar, which corresponds to the summer break where there are no fixtures. However, at the beginning of this interval, there is a burst (teal colored edges) which corresponds to the date of the Grand Final between Munster (\textit{mu}) and Glasgow (\textit{gl}) at the end of the season in 2015. The final is not easily visible in uniform timeslicing because uniform timeslicing does not accentuate it. \yongwang{More interesting findings can be revealed by the nonuniform timeslicing approach, when there are a series of bursting edge events. For example,} as the season begins, a number of bursts occur indicated by the short time range bars of Intervals 9-12 of Fig.~\ref{fig-rugby-case-study}b. The nonuniform timeslicing approach is able to better accentuate certain details. For example, ``scarlets\_rugby'' (Node \textit{sc}) communicated the most with the team ``dragonsrugby'' (Node \textit{dr}) in late August, then interacted the most with the team ``glasgowwarriors'' (Node \textit{gl}) in early September, and further switched to mainly contact the team ``ulsterrugby'' (Node \textit{ul}) in late September, as demonstrated by the thickest edges linked to Node \textit{sc} in Intervals 9-11 of Fig.~\ref{fig-rugby-case-study}b. August (Interval 9) corresponds to just before the beginning of the season. Posting activity around preseason fixtures involving Scarlets (\textit{sc}) and Dragons (\textit{dr}) as well as Edinburgh (\textit{ed}) and Ulster (\textit{ul}) are the two most prominent edges in this interval. Scarlets-Glasgow (Interval 10) and Scarlets-Ulster (Interval 11) correspond to the first two fixtures for Scarlets in the 2015-16 season and therefore are the first two bursts of activity. The order of these bursts is apparent because they are given separate timeslices in nonuniform timeslicing, which, however, are compacted within a single interval (Interval 11 of Fig.~\ref{fig-rugby-case-study}a) in nonuniform timeslicing. \section{Discussion} \section{Conclusion and Future work} In this paper, we present a nonuniform timeslicing approach for dynamic graph visualization, which can balance the visual complexity across different time intervals by assigning more intervals to the periods with bursting edges and less intervals to the periods with fewer edges. A case study on a real dynamic graph (i.e., the Rugby Dataset) shows that it can achieve similar visual complexity across different time intervals for a dynamic graph and better visualize \yongwangblue{the time ranges with edge bursts}. However, several aspects of the proposed nonuniform timeslicing approach still need further work. First, the number of intervals is empirically selected. Prior studies (e.g., \cite{sulo2010meaningful}) have explored empirical methods to determine the suitable number of intervals for graph mining tasks, but has not yet investigated from the perspective of graph visualization. \yongwangblue{Also, we define the visual complexity as the number of edges/events per timeslice. Other definitions of visual complexity for dynamic graph visualization can be further explored. Furthermore, our case study shows that our non-uniform timeslicing approach can better visualize time periods with bursting edges. However, it remains unclear which detailed graph analysis tasks can benefit from the non-uniform and uniform timeslicing approach, which is left as future research. } \section{Introduction}\label{sec:introduction} \firstsection{Introduction} \label{sec:introduction} \maketitle Graphs are widely used to represent the relations between different objects. \yongwang{ Many of these graphs dynamically change over time and are ubiquitous across various applications and disciplines, such as social networks, (tele-)communication networks, biological networks, international trade networks and others.} Therefore, the visualization of such dynamic graphs is of great importance in revealing their temporal evolution process and many dynamic graph visualization techniques have been proposed. According to the survey by Beck et al.~\cite{beck2017taxonomy}, \textit{small multiples}, i.e., showing a sequence of static graphs, is one of the most important and basic methods for dynamic graph visualization. Prior studies~\cite{archambault2011animation,farrugia2011effective} further demonstrated that small multiples are more effective than \textit{animation}, the other basic method of visualizing dynamic graphs. Sometimes dynamic graphs are \textit {event-based}. In an event based dynamic graph, edges and nodes appear as individual events across time at the given temporal resolution of the data. An important problem is the effective selection of timeslices from the data. In the graph drawing community, uniform timeslicing is often chosen due to its simplicity. When selecting uniform $s$ timeslices from dynamic graphs of $T$ time units, time is divided into intervals of $T/s$ and all events are projected down onto one plane for visualization. Uniform timeslicing has the advantage that each timeslice spans exactly the same interval of time. However, it does not take into account the underlying structure of the data. In graph mining, studies have demonstrated that the length of time intervals selected for each timeslice strongly influences the structures that can be automatically measured from the dynamic graph~\cite{krings2012effects,uddin2017optimal,karsai2014time,ribeiro2013quantifying} and affects the performance of graph mining algorithms~\cite{fish2015handling}. Prior work in the graph mining community has explored methods for timeslicing dynamic graphs effectively. Researchers have tried to identify appropriate window sizes for uniform timeslicing~\cite{uddin2017optimal,sulo2010meaningful} or have conducted nonuniform timeslicing~\cite{sun2007graphscope,soundarajan2016generating}. There appears to be no single timeslicing method that is optimal for all graph mining tasks~\cite{uddin2017optimal,devineni2017one,caceres2013temporal}. Some studies have shown that different time window sizes are necessary for different analysis tasks~\cite{fish2017supervised,fish2017task} and different periods of the whole dynamic graphs~\cite{devineni2017one}. In dynamic graph visualization, timeslice selection received little attention beyond dividing the data into uniform timeslices. \yongwang{More specifically, how to select timeslices that are data dependent for effective visualization of dynamic graphs still remains an open problem.} Uniform timeslicing implicitly assumes that all events will be uniformly distributed across time. However, events in a dynamic graph are rarely distributed in this way. For example, social media streams can have a burst of edges when a topic becomes important while other time periods have very few edges. \yongwangblue{Given the limited screen space, small multiples cannot afford a large number of timeslices and we need to carefully use the limited number of timeslices. However,} a uniform timeslicing of such data sets will make the bursting periods suffer from visual clutter and the sparse periods remain relatively empty. \yongwangblue{In this work, we propose a nonuniform timeslicing approach for dynamic graph visualization, which balances the visual complexity (number of edges/events per timeslice) across different intervals (Fig.~\ref{fig:teaser}). } The timeslicing is computed based on the events present in the dynamic graph. Given a temporal resolution of the data set (e.g., second, minute, hour), we use a form of histogram equalization to make those histogram bins uniformly distributed across time. To make viewers aware of the actual length of each interval, a horizontal bar is also shown beside the graph of each snapshot. The major contributions of this paper can be summarized as follows: \begin{compactitem} \item We propose a novel nonuniform timeslicing approach for visualizing dynamic graphs based on balanced visual complexity. \item We investigate the effectiveness of the proposed nonuniform timeslicing approach through a case studies, where the common uniform timeslicing approach is also compared. \end{compactitem} \section{The Proposed Method} In this work, we develop timeslicing methods to optimize the visualization of dynamic graphs. Specifically, we aim to have timeslices of uniform visual complexity and two methods will be introduced. \subsection {Nonuniform Timeslicing Methods} \label{sec-nonuniform-timeslicing} Our definition of visual complexity in this paper is based on the number of events (in our case edges) projected into one static graph $G_i$ of the timesliced dynamic graph. If the variance of the number of events between timeslices is small, all static representations of the graph have a similar number of events and are equally complex. Otherwise, a large variance will indicate that some timeslices are more visually cluttered, making it difficult to read the graph during bursts in the event stream. Thus, the goal is to find a nonuniform partition of $[0,T]$ whereby each graph $G_i$ has approximately the same number of events. We accomplish this via selecting nonuniform intervals of time $[t_{l-1}, t_l)$ for which the events projected onto the graph $G_l$ are equally distributed. The problem of computing a uniform distribution of events within each timeslice has a strong relationship with problems in image processing~\cite{histEqu}. All our methods to select a nonuniform timeslicing of a dynamic graph are inspired by image processing approaches originally designed to either enhance contrast or reduce errors in a digital image. In essence, \emph{bursts of edges in the event stream will be given more emphasis through additional timeslices, while areas of the dynamic graph where there are few events will have few timeslices to represent them}. In both visualization and graph mining community, a question that is often posed is what is the optimal number of timeslices that should be selected for a particular dataset. In a visualization context, we are frequently limited by the screen space available. Our approach is to select timeslices according to our definition of visual complexity given the budget of timeslices. To ensure that the number of timeslices does not have an effect on the layout and that our techniques are comparable, we use the DynNoSlice algorithm~\cite{simonetto2017drawing} to draw the graph once in the space time cube. All of our techniques are applied to this same drawing in 2D + time, making them comparable. \subsubsection{Equal Event Partitioning} The most basic way to ensure a uniform distribution of events is to place the events in order and count them until a specified number of events is reached. More specifically, given $|E|$ events in the dynamic graph and a budget of $k$ timeslices, we can simply create a new timeslice every $|E|/k$ events. Fig.~\ref{equEventPartFig} provides an overview of equal event partitioning. If this method is applied directly, the error can accumulate as fractional events cannot be assigned. Inspired by dithering~\cite{Floyd1976}, we propagate the negative or positive error closest to zero based on if we withhold from or assign an event to the next timeslice. The strength of this method for nonuniform timeslicing is that it is very simple to implement and does ensure a uniform distribution of events across all timeslices. But its main disadvantage is that it does not use any information about the temporal resolution of the dataset and only considers events in sequence. It may also combine edges that are distant in time into one timeslicing. Thus, a histogram equalization based approach is further proposed. \subsubsection{Histogram Equalization on Events} In image processing, histogram equalization can be used to enhance the contrast of images~\cite {histEqu}. Histogram equalization considers a histogram of all intensity values of a greyscale image (for example $[0,255]$) and transforms the histogram by rebinning it, so that the difference between the number of elements in each bin is reduced. Intuitively, the algorithm reduces or removes bins where the histogram values are low and devote more bins to areas where the histogram values are high, resulting in an image of higher contrast. We adapt histogram equalization to process streams of edges in dynamic graphs as shown in Fig.~\ref{histEquFig}. The algorithm starts by considering a histogram where the bins are set to a temporal resolution of the dataset greater or equal to the finest temporal resolution. The histogram represents the number of events occurring at given times across $[0,T]$. Given this histogram with $B+1$ bins, where $E_i$ is defined to be the number of events that occur in bin $i$, we can define the probability distribution as $p(i)$, where: \begin {equation} p(i) = |E_i|/|E| \end {equation} The cumulative distribution function $P(i)$ can then be defined as \begin{equation} P(i) = \sum_{j = 0}^i p(j) \end{equation} We can now apply a form of histogram equalization to transform the histogram of events into a new histogram of events $s_0, s_1, s_2, \dots s_B$: \begin {equation} s_i = \lfloor (T - 1) \sum_{j=0}^i p(j)\rfloor =\lfloor (T - 1) P(i)\rfloor \end {equation} This transformed version of $[0,T]$ accentuates bursts in the event stream and diminishes areas of low activity. If one were to watch the graph as a video, areas of bursty activity in the graph would be played in slow motion while areas of inactivity would be played in fast forward. In our approach, we uniformly sample this transformed histogram into $k$ intervals, devoting more timeslices to areas of high activity, as shown in Fig.~\ref{fig:teaser}. According to our experiments, if a fine-grained temporal resolution is used, the timeslicing results of histogram equalization of events and equal event partitioning are quite similar. However, if the data is recorded at coarser resolutions (e.g., month or year), histogram equalization better preserves the data granularity. Therefore, only histogram equalization of events is used in this paper. \subsection{Visualization} The graph drawing of dynamic graphs is not the focus of this paper, so we directly use DynNoSlice~\cite{simonetto2017drawing}, which allows us to use the same space-time cube to draw and compare the graph visualization results by uniform and nonuniform timeslicings. As the intervals of events for nonuniform timeslicing are not of equal duration by definition, we add a small glyph, consisting of a bar and line chart, to explicitly show the time range and edge event frequency of each interval, as shown in Fig.~\ref{fig-rugby-case-study}. To further reveal the detailed time information of each edge, a color mapping from teal to brown is used to encode the time order using a color time flatting approach~\cite{17Bach} in each interval. For edges representing multiple edge events, their color is mapped to the median time of the events and the edge width indicates the number of events. \section{Problem Definition and Notations} We formally define a dynamic graph and nonuniform timeslicing of dynamic graphs according to prior work~\cite{simonetto2017drawing}. For a dynamic graph defined on a node set $V$ and edge set $E \subseteq V \times V$, an edge in this dynamic graph $e_{ij}$ that appears at a time $t$ for a duration $d$ is a \textit{temporal edge}, denoted as $(e_{ij}, t, d)$, where $e_{ij} \in E$, $i,j \in V$, $t \in [0, T]$. In this paper, we choose a fixed small duration $d$ for each edge, which can also be referred as $(e_{ij}, t)$. Therefore, a \textit{dynamic graph} is a set of time-stamped edges that are ordered by their time stamp $t$, as defined as follows: \begin{equation} \langle V,E,T \rangle = \{(e_{ij},t) | E \subseteq V \times V, t \in [0,T] \}. \end{equation} Our definition does not consider timeslices as a basic unit of the dynamic graph. The \textit{temporal resolution} of a dynamic graph is the minimum, positive distance in time between two events based on the accuracy of the time measurements. For example, edges could have an accuracy down to a day or down to a second. This temporal resolution is an important factor in our approach. A \textit{timesliced dynamic graph} $\Gamma = (G_1, G_2, ..., G_k)$ is a sequence of static graphs computed on $\langle V,E,T \rangle$ by dividing $[0,T]$ into intervals and projecting all temporal edges in a given interval down onto a 2D plane. Therefore, a \textit{timeslicing} $S$ on the time $[0,T]$ is: \begin{equation} S = [0, t_1), [t_1, t_2), [t_2, t_3), ..., [t_{k-1}, T], \end{equation} and \begin{equation} G_l = (V_l, E_l), \end{equation} \begin{equation} V_l \subseteq V, E_l = \{(e_{ij},t) | t_{l-1} \leq t < t_{l} \}, l = 1, 2, ..., k. \end{equation} where $t_0 = 0$, $t_k = T$, $E_{l}$ represents all the edge instances within the $l$-th time interval and $k$ is the total number of time intervals we want to use for showing a dynamic graph. If all time intervals $[t_{l-1}, t_l)$ have uniform duration, it is a \textit{uniform timeslicing}. Otherwise, it is a \textit {nonuniform timeslicing}. \section{Related Work} This work is related to prior research on appropriate timeslicing of dynamic graphs and dynamic graph visualization. {\bf Timeslicing of Dynamic Graphs:} Prior studies in the graph mining field have been run to find ideal timeslicing methods for dynamic graphs to improve the performance of algorithms for detecting structure in dynamic graphs. These methods can generally be classified into three categories: \textit{change point detection, minimizing the variance of a graph metric and task-oriented approaches}. The methods based on change point detection evaluate the similarity between graphs of consecutive time units and detect \textit{change points} along time to divide the whole time range~\cite{sun2007graphscope}. The variance-based approaches~\cite{soundarajan2016generating,uddin2017optimal,sulo2010meaningful} mainly determine the suitable timeslicing through minimizing the variance of certain graph metrics such as node degree, node positional dynamicity, etc. Other approaches~\cite{fish2017supervised,fish2017task} determine optimal timeslicing by using the accuracy of different graph mining algorithms (e.g., anomaly detection and link prediction). In the visualization community, a fixed interval (e.g., one day, one month and one year) is often used to divide the graph into slices~\cite{van2016reducing,bach2014graphdiaries,shi2011dynamic,rufiange2013diffani}. However, we are not aware of methods that perform a nonuniform timeslicing for dynamic graph visualization based on graph structures across different intervals. \textbf{Dynamic Graph Visualization:} Dynamic graph visualization aims has been extensively explored in the past decades~\cite{beck2017taxonomy,17Bach,kerracher2014design,kerracher2015task,Temporal_Multivatiate_visualisation}. Animation is the most natural way to visualize dynamic graphs, as it directly maps the evolution of the graph to an animation result~\cite{beck2017taxonomy}. Prior work of this type mainly attempted to preserve the mental map (i.e. the stability of the drawing) in dynamic graph visualizations~\cite{12ArchambaultGD,archambault2016can}, which are conducted through spring algorithms on the aggregated graph~\cite{huang1998line,01Diehl,02Diehl} or linking strategies across time~\cite{erten2003graphael,forrester2004graphael,baur2008dynamic,11Mader,simonetto2017drawing}. However, animation is often less effective for long dynamic graphs~\cite{tversky2002animation}, as viewers need to memorize the dynamic evolution of a graph and check back and forth to compare different graph snapshots~\cite{bach2014graphdiaries}. The small multiples visualization is the other major way to visualize the temporal evolution of dynamic graphs, which shows a sequence of static representations of the graph at different time intervals~\cite{kerracher2014design,kerracher2015task,beck2017taxonomy}. Prior work has shown that the small multiples visualization is more effective than animation~\cite{archambault2011animation,farrugia2011effective} in terms of a quick exploration of the temporal evolution of dynamic graphs. Its major limitation is the visual scalability due to the limited space~\cite{beck2017taxonomy}. Our approach, belonging to the small multiples visualization, assigns nonuniform time ranges for each snapshot based on the visual complexity, which partially mitigates the visual scalability issue of small multiples. \begin{figure}[t] \centering \includegraphics[width=0.95\linewidth]{equalEventPartition.pdf} \vspace{-1em} \caption {Equal Event Partitioning. The edges are sorted in order of earliest to latest. Given a number of timeslices, in this case $3$, a target number of events is selected per timeslice ($\frac{17}{3}$ or $5.66$). The algorithm counts off events, independent of timeslices, in order to fill the timeslices. Equal event partitioning distributes the error evenly through time instead of giving all the error to the last timeslice.} \label{equEventPartFig} \vspace{-0.5em} \end{figure} \begin{figure} \centering \includegraphics[width=0.95\linewidth]{unequalised.pdf} \includegraphics[width=0.95\linewidth]{equalised.pdf} \includegraphics[width=0.95\linewidth]{histSliced.pdf} \vspace{-1em} \caption{Histogram Equalization of Events. Histogram Equalization is adapted to event streams for dynamic graphs. The event distribution is transformed by histogram equalization to accentuate bursts. Regular timeslices are taken in the transformed space. In the untransformed space, this results in a nonuniform timeslicing that accentuates bursts in the data set and skips over areas of low activity.} \label{histEquFig} \vspace{-1.2em} \end{figure} \section{Introduction} \input{intro} \input{related_work} \input{prob_def} \input{method} \input{case_study} \input{conclusion} \acknowledgments{ \yongwangblue{This work is partially supported by grant RGC GRF 16241916.} } \bibliographystyle{abbrv-doi}
{ "attr-fineweb-edu": 2.199219, "attr-cc_en_topic": 0, "domain": "arxiv" }
BkiUbAA4eIZijWO-llCS
\section{Introduction\label{sub:intro_one}} \subsection{Modelling and predicting competitive sports\label{sub:intro_one}} Competitive sports refers to any sport that involves two teams or individuals competing against each other to achieve higher scores. Competitive team sports includes some of the most popular and most watched games such as football, basketball and rugby. Such sports are played both in domestic professional leagues such as the National Basketball Association, and international competitions such as the FIFA World Cup. For football alone, there are over one hundred fully professional leagues in 71 countries globally. It is estimated that the Premier League, the top football league in the United Kingdom, attracted a (cumulative) television audience of 4.7 billion viewers in the last season~\citep{PremierLeagueAudience}. The outcome of a match is determined by a large number of factors. Just to name a few, they might involve the competitive strength of each individual player in both teams, the smoothness of collaboration between players, and the team's strategy of playing. Moreover, the composition of any team changes over the years, for example because players leave or join the team. The team composition may also change within the tournament season or even during a match because of injuries or penalties. Understanding these factors is, by the prediction-validation nature of the scientific method, closely linked to predicting the outcome of a pairing. By Occam's razor, the factors which empirically help in prediction are exactly those that one may hypothesize to be relevant for the outcome. Since keeping track of all relevant factors is unrealistic, of course one cannot expect a certain prediction of a competitive sports outcome. Moreover, it is also unreasonable to believe that all factors can be measured or controlled, hence it is reasonable to assume that unpredictable, or non-deterministic statistical ``noise'' is involved in the process of generating the outcome (or subsume the unknowns as such noise). A good prediction will, hence, not exactly predict the outcome, but will anticipate the ``correct'' odds more precisely. The extent to which the outcomes are predictable may hence be considered as a surrogate quantifier of how much the outcome of a match is influenced by ``skill'' (as surrogated by determinism/prediction), or by ``chance''\footnote{We expressly avoid use of the word ``luck'' as in vernacular use it often means ``chance'', jointly with the belief that it may be influenced by esoterical, magical or otherwise metaphysical means. While in the suggested surrogate use, it may well be that the ``chance'' component of a model subsumes possible points of influence which simply are not measured or observed in the data, an extremely strong corpus of scientific evidence implies that these will not be metaphysical, only unknown - two qualifiers which are obviously not the same, despite strong human tendencies to believe the contrary.} (as surrogated by the noise/unknown factors). Phenomena which can not be specified deterministically are in fact very common in nature. Statistics and probability theory provide ways to make inference under randomness. Therefore, modelling and predicting the results of competitive team sports naturally falls into the area of statistics and machine learning. Moreover, any interpretable predictive model yields a possible explanation of what constitutes factors influencing the outcome. \subsection{History of competitive sports modelling} Research of modeling competitive sports has a long history. In its early days, research was often closely related to sports betting or player/team ranking~\citep{griffith1949odds,isaacs1953optimal}. The two most influential approaches are due to~\citet{bradley1952rank} and~\citet{elo1978rating}. The Bradley-Terry and \'{E}l\H{o} models allow estimation of player rating; the \'{E}l\H{o} system additionally contains algorithmic heuristics to easily update a player's rank, which have been in use for official chess rankings since the 1960s. The \'{E}l\H{o} system is also designed to predict the odds of a player winning or losing to the opponent. In contemporary practice, Bradley-Terry and \'{E}l\H{o} type models are broadly used in modelling of sports outcomes and ranking of players, and it has been noted that they are very close mathematically. In more recent days, relatively diverse modelling approaches originating from the Bayesian statistical framework ~\citep{maher1982modelling,dixon1997modelling,glickman1998state}, and also some inspired by machine learning principles~\citep{liu2010beating,hucaljuk2011predicting,odachowski2012using} have been applied for modelling competitive sports. These models are more expressive and remove some of the Bradley-Terry and \'{E}l\H{o} models' limitations, though usually at the price of interpretability, computational efficiency, or both. A more extensive literature overview on existing approaches will be given later in Section~\ref{sub:A-brief-summary}, as literature spans multiple communities and, in our opinion, a prior exposition of the technical setting and simultaneous straightening of thoughts benefits the understanding and allows us to give proper credit and context for the widely different ideas employed in competitive sports modelling. \subsection{Aim of competitive sports modelling} In literature, the study of competitive team sports may be seen to lie between two primary goals. The first goal is to design models that make good predictions for future match outcome. The second goal is to understand the key factors that influence the match outcome, mostly through retrospective analysis~\citep{pollard1986home,rue2000prediction}. As explained above, these two aspects are intrinsically connected, and in our view they are the two facets of a single problem: on one hand, proposed influential factors are only scientifically valid if confirmed by falsifiable experiments such as predictions on future matches. If the predictive performance does not increase when information about such factors enters the model, one should conclude by Occam's razor that these factors are actually irrelevant\footnote{... to distinguish/characterize the observations, which in some cases may plausibly pertain to restrictions in set of observations, rather than to causative relevance. Hypothetical example: age of football players may be identified as unimportant for the outcome - which may plausibly be due to the fact that the data contained no players of ages 5 or 80, say, as opposed to player age being unimportant in general. Rephrased, it is only unimportant for cases that are plausible to be found in the data set in the first place.}. On the other hand, it is plausible to assume that predictions are improved by making use of relevant factors (also known as ``features'') become available, for example because they are capable of explaining unmodelled random effects (noise). In light of this, the main problem considered in this work is the (validable and falsifiable) \textit{prediction} problem, which in machine learning terminology is also known as the supervised learning task. \subsection{Main questions and challenges in competitive sports outcomes prediction\label{sec:Questions}} Given the above discussion, the major challenges may be stated as follows:\\ On the {\bf methodological} side, what are suitable models for competitive sports outcomes? Current models are not at the same time interpretable, easily computable, allow to use feature information on the teams/players, and allow to predict scores or ternary outcomes. It is an open question how to achieve this in the best way, and this manuscript attempts to highlight a possible path. The main technical difficulty lies in the fact that off-shelf methods do not apply due to the structured nature of the data: unlike in individual sports such as running and swimming where the outcome depends only on the given team, and where the prediction task may be dealt with classical statistics and machine learning technology (see~\citep{blythe2015prediction} for a discussion of this in the context of running), in competitive team sports the outcome may be determined by potentially complex interactions between two opposing teams. In particular, the performance of any team is not measured directly using a simple metric, but only in relation to the opposing team's performance.\\ On the side of {\bf domain applications}, which in this manuscript is premier league football, it is of great interest to determine the relevant factors determining the outcome, the best way to predict, and which ranking systems are fair and appropriate. All these questions are related to predictive modelling, as well as the availability of suitable amounts of quality data. Unfortunately, the scarcity of features available in systematic presentation places a hurdle to academic research in competitive team sports, especially when it comes to assessing important factors such as team member characteristics, or strategic considerations during the match. Moreover, closely linked is also the question to which extent the outcomes are determined by ``chance'' as opposed to ``skill''. Since if on one hypothetical extreme, results would prove to be completely unpredictable, there would be no empirical evidence to distinguish the matches from a game of chance such as flipping a coin. On the other hand, importance of a measurement for predicting would strongly suggest its importance for winning (or losing), though without an experiment not necessarily a causative link. We attempt to address these questions in the case of premier league football within the confines of readily available data. \subsection{Main contributions} Our main contributions in this manuscript are the following: \begin{itemize} \item[(i)] We give what we believe to be the first comprehensive {\bf literature review} of state-of-art competitive sports modelling that comprises the multiple communities (Bradley-Terry models, \'{E}l\H{o} type models, Bayesian models, machine learning) in which research so far has been conducted mostly separately. \item[(ii)] We present a {\bf unified Bradley-Terry-\'{E}l\H{o} model} which combines the statistical rigour of the Bradley-Terry models with fitting and update strategies similar to that found in the \'{E}l\H{o} system. Mathematically only a small step, this joint view is essential in a predictive/supervised setting as it allows efficient training and application in an on-line learning situation. Practically, this step solves some problems of the \'{E}l\H{o} system (including ranking initialization and choice of K-factor), and establishes close relations to logistic regression, low-rank matrix completion, and neural networks. \item[(iii)] This unified view on Bradley-Terry-\'{E}l\H{o} allows us to introduce classes of joint extensions, {\bf the structured log-odds models}, which unites desirable properties of the extensions found in the disjoint communities: probabilistic prediction of scores and wins/draws/losses, batch/epoch and on-line learning, as well as the possibility to incorporate features in the prediction, without having to sacrifice structural parsimony of the Bradley-Terry models, or simplicity and computational efficiency of \'{E}l\H{o}'s original approach. \item[(iv)] We validate the practical usefulness of the structured log-odds models in synthetic experiments and in {\bf answering domain questions on English Premier League data}, most prominently on the importance of features, fairness of the ranking, as well as on the ``chance''-``skill'' divide. \end{itemize} \subsection{Manuscript structure} Section~\ref{sec:Background-and-Related} gives an overview of the mathematical setting in competitive sports prediction. Building on the technical context, Section~\ref{sub:A-brief-summary} presents a more extensive review of the literature related to the prediction problem of competitive sports, and introduces a joint view on Bradley-Terry and \'{E}l\H{o} type models. Section~\ref{sec:Methods} introduces the structured log-odds models, which are validated in empirical experiments in Section~\ref{sec:Experiments}. Our results and possible future directions for research are discussed in section~\ref{sec:Summary-and-Conclusion}. \subsection*{Authors' contributions} This manuscript is based on ZQ's MSc thesis, submitted September 2016 at University College London, written under supervision of FK. FK provided the ideas of re-interpretation and possible extensions of the \'{E}l\H{o} model. Literature overview is jointly due to ZQ an FQ, and in parts follows some very helpful pointers by I.~Kosmidis (see below). Novel technical ideas in Sections~\ref{sub:Extensions-of-structured} to~\ref{sub:Regularized-log-odds-matrix}, and experiments (set-up and implementation) are mostly due to ZQ. The present manuscript is a substantial re-working of the thesis manuscript, jointly done by FK and ZQ. \subsection*{Acknowledgements} We are thankful to Ioannis Kosmidis for comments on an earlier form of the manuscript, for pointing out some earlier occurrences of ideas presented in it but not given proper credit, as well as relevant literature in the ``Bradley-Terry'' branch. \newpage \section{The Mathematical-Statistical Setting\label{sec:Background-and-Related}} This section formulates the prediction task in competitive sports and fixes notation, considering as an instance of supervised learning with several non-standard structural aspects being of relevance. \subsection{Supervised prediction of competitive outcomes\label{sub:The-challenges-of}} We introduce the mathematical setting for outcome prediction in competitive team sports. As outlined in the introductory Section \ref{sub:intro_one}, three crucial features need to be taken into account in this setting: \begin{itemize} \item[(i)] The outcome of a pairing cannot be exactly predicted prior to the game, even with perfect knowledge of all determinates. Hence it is preferable to predict a \textit{probabilistic} estimate for all possible match outcomes (win/draw/loss) rather than \textit{deterministically} choosing one of them. \item[(ii)] In a pairing, two teams play against each other, one as a home team and the other as the away or guest team. Not all pairs may play against each other, while others may play multiple times. As a mathematically prototypical (though inaccurate) sub-case one may consider all pairs playing exactly once, which gives the observations an implicit \textit{matrix structure} (row = home team, column = away team). Outcome labels and features crucially depend on the teams constituting the pairing. \item[(iii)] Pairings take place over time, and the expected outcomes are plausibly expected to change with (possibly hidden) characteristics of the teams. Hence we will model the \textit{temporal dependence} explicitly to be able to take it into account when building and checking predictive strategies. \end{itemize} \subsubsection{The Generative Model.} Following the above discussion, we will fix a generative model as follows: as in the standard supervised learning setting, we will consider a generative joint random variable $(X,Y)$ taking values in $\mathcal{X}\times \mathcal{Y}$, where $\mathcal{X}$ is the set of features (or covariates, independent variables) for each \emph{pairing}, while $\mathcal{Y}$ is the set of labels (or outcome variables, dependent variables). In our setting, we will consider only the cases $\mathcal{Y} = \{\text{win},\,\text{lose}\}$ and $\mathcal{Y} = \{\text{win},\,\text{lose},\,\text{draw}\}$, in which case an observation from $\mathcal{Y}$ is a so-called \emph{match outcome}, as well as the case $\mathcal{Y} = \ensuremath{\mathbb{N}}^2$, in which case an observation is a so-called \emph{final score} (in which case, by convention, the first component of $\mathcal{Y}$ is of the home team), or the case of \emph{score differences} where $\mathcal{Y} = \ensuremath{\mathbb{N}}$ (in which case, by convention, a positive number is in favour of the home team). From the official rule set of a game (such as football), the match outcome is uniquely determined by a score or score difference. As all the above sets $\mathcal{Y}$ are discrete, predicting $\mathcal{Y}$ will amount to supervised \emph{classification} (the score difference problem may be phrased as a regression problem, but we will abstain from doing so for technical reasons that become apparent later). The random variable $X$ and its domain $\mathcal{X}$ shall include information on the teams playing, as well as on the time of the match. We will suppose there is a set $\mathcal{I}$ of teams, and for $i,j\in \mathcal{I}$ we will denote by $(X_{ij},Y_{ij})$ the random variable $(X,Y)$ conditioned on the knowledge that $i$ is the home team, and $j$ is the away team. Note that information in $X_{ij}$ can include any knowledge on either single team $i$ or $j$, but also information corresponding uniquely to the pairing $(i,j)$. We will assume that there are $Q:=\# \mathcal{I}$ teams, which means that the $X_{ij}$ and $Y_{ij}$ may be arranged in $(Q\times Q)$ matrices each. Further there will be a set $\mathcal{T}$ of time points at which matches are observed. For $t\in \mathcal{T}$ we will denote by $(X(t),Y(t))$ or $(X_{ij}(t),Y_{ij}(t))$ an additional conditioning that the outcome is observed at time point $t$. Note that the indexing $X_{ij}(t)$ and $Y_{ij}(t)$ formally amounts to a double conditioning and could be written as $X|I = i, J = j, T = t$ and $Y|I = i, J = j, T = t$, where $I,J,T$ are random variables denoting the home team, the away team, and the time of the pairing. Though we do believe that the index/bracket notation is easier to carry through and to follow (including an explicit mirroring of the the ``matrix structure'') than the conditional or ``graphical models'' type notation, which is our main reason for adopting the former and not the latter. \subsubsection{The Observation Model.} By construction, the generative random variable $(X,Y)$ contains all information on having any pairing playing at any time, However, observations in practice will concern two teams playing at a certain time, hence observations in practice will only include independent samples of $(X_{ij}(t),Y_{ij}(t))$ for some $i,j\in \mathcal{I}, t\in \mathcal{T}$, and never full observations of $(X,Y)$ which can be interpreted as a latent variable. Note that the observations can be, in-principle, correlated (or unconditionally dependent) if the pairing $(i,j)$ or the time $t$ is not made explicit (by conditioning which is implicit in the indices $i,j,t$). An important aspect of our observation model will be that whenever a value of $X_{ij}(t)$ or $Y_{ij}(t)$ is observed, it will always come together with the information of the playing teams $(i,j)\in\mathcal{I}^2$ and the time $t\in\mathcal{T}$ at which it was observed. This fact will be implicitly made use of in description of algorithms and validation methodology. (formally this could be achieved by explicitly exhibiting/adding $\mathcal{I}\times \mathcal{I} \times \mathcal{T}$ as a Cartesian factor of the sampling domains $\mathcal{X}$ or $\mathcal{Y}$ which we will not do for reasons of clarity and readability) Two independent batches of data will be observed in the exposition. We will consider: \begin{align*} \mbox{a training set}\;\mathcal{D} &:= \{(X^{(1)}_{i_1j_1}(t_1),Y^{(1)}_{i_1j_1}(t_1)),\dots,(X^{(N)}_{i_Nj_N}(t_N),Y^{(N)}_{i_Nj_N}(t_N))\}\\ \mbox{a test set}\;\mathcal{T} &:= \{(X^{(1*)}_{i^*_1j^*_1}(t^*_1),Y^{(1*)}_{i^*_1j^*_1}(t^*_1)),\dots,(X^{(M*)}_{i^*_Mj^*_M}(t^*_M),Y^{(M*)}_{i^*_Mj^*_M}(t^*_M))\}\\ \end{align*} where $(X^{(i)},Y^{(i)})$ and $(X^{(i*)},Y^{(i*)})$ are i.i.d.~samples from $(X,Y)$ Note that unfortunately (from a notational perspective), one cannot omit the superscripts $\kappa$ as in $X^{(\kappa)}$ when defining the samples, since the figurative ``dies'' should be cast anew for each pairing taking place. In particular, if all games would consist of a single pair of teams playing where the results are independent of time, they would all be the same (and not only identically distributed) without the super-index, i.e., without distinguishing different games as different samples from $(X,Y)$. \subsubsection{The Learning Task.} As set out in the beginning, the main task we will be concerned with is predicting future outcomes given past outcomes and features, observed from the process above. In this work, the features will be assumed to change over time slowly. It is \emph{not} our primary goal to identify the hidden features in $(X,Y)$, as they are never observed and hence not accessible as ground truth which can validate our models. However, these will be of secondary interest and considered empirically validated by a well-predicting model. More precisely, we will describe methodology for learning and validating predictive models of the type $$f: \mathcal{X}\times \mathcal{I} \times \mathcal{I} \times \mathcal{T} \rightarrow \operatorname{Distr} (\mathcal{Y}),$$ where $\operatorname{Distr} (\mathcal{Y})$ is the set of (discrete probability) distributions on $\mathcal{Y}$. That is, given a pairing $(i,j)$ and a time point $t$ at which the teams $i$ and $j$ play, and information of type $x=X_{ij}(t)$, make a probabilistic prediction $f(x,i,j,t)$ of the outcome. Most algorithms we discuss will \emph{not} use added information in $\mathcal{X}$, hence will be of type $f:\mathcal{I} \times \mathcal{I} \times \mathcal{T} \rightarrow \operatorname{Distr} (\mathcal{Y})$. Some will disregard the time in $\mathcal{T}$. Indeed, the latter algorithms are to be considered scientific baselines above which any algorithm using information in $\mathcal{X}$ and/or $\mathcal{T}$ has to improve. The models $f$ above will be learnt on a training set $\mathcal{D}$, and validated on an independent test set $\mathcal{T}$ as defined above. In this scenario, $f$ will be a random variable which may implicitly depend on $\mathcal{D}$ but will be independent of $\mathcal{T}$. The learning strategy - which is $f$ depending on $\mathcal{D}$ - may take any form and is considered in a full black-box sense. In the exposition, it will in fact take the form of various parametric and non-parametric prediction algorithms. The goodness of such an $f$ will be evaluated by a loss $L:\operatorname{Distr} (\mathcal{Y})\times \mathcal{Y}\rightarrow \ensuremath{\mathbb{R}}$ which compares a probabilistic prediction to the true observation. The best $f$ will have a small expected generalization loss $$\varepsilon (f|i,j,t) := \ensuremath{\mathbb{E}}_{(X,Y)}\left[L\left(f(X_{ij}(t),i,j,t),Y_{ij}(t)\right)\right]$$ at any future time point $t$ and for any pairing $i,j$. Under mild assumptions, we will argue below that this quantity is estimable from $\mathcal{T}$ and only mildly dependent on $t,i,j$. Though a good form for $L$ is not a-priori clear. Also, it is unclear under which assumptions $\varepsilon (f|t)$ is estimable, due do the conditioning on $(i,j,t)$ in the training set. These special aspects of the competitive sports prediction settings will be addressed in the subsequent sections. \subsection{Losses for probablistic classification} In order to evaluate different models, we need a criterion to measure the goodness of probabilistic predictions. The most common error metric used in supervised classification problems is the prediction accuracy. However, the accuracy is often insensitive to probabilistic predictions. For example, on a certain test case model A predicts a win probability of 60\%, while model B predicts a win probability of 95\%. If the actual outcome is not win, both models are wrong. In terms of prediction accuracy (or any other non-probabilistic metric), they are equally wrong because both of them made one mistake. However, model B should be considered better than model A since it predicted the ``true'' outcome with higher accuracy. Similarly, if a large number of outcomes of a fair coin toss have been observed as training data, a model that predicts 50\% percent for both outcomes on any test data point should be considered more accurate than a model that predicts 100\% percent for either outcome 50\% of the time. There exists two commonly used criteria that take into account the probabilistic nature of predictions which we adopt. The first one is the Brier score (Equation~\ref{eq:BS} below) and the second is the log-loss or log-likelihood loss (Equation~\ref{eq:oob_log} below). Both losses compare a distribution to an observation, hence mathematically have the signature of a function $\operatorname{Distr} (\mathcal{Y})\times \mathcal{Y}\rightarrow \ensuremath{\mathbb{R}}$. By (very slight) abuse of notation, we will identify distributions on (discrete) $\mathcal{Y}$ with its probability mass function; for a distribution $p$, for $y\in \mathcal{Y}$ we write $p_y$ for mass on the observation $y$ (= the probability to observe $y$ in a random experiment following $p$). With this convention, log-loss $L_\ell$ and Brier loss $L_{\text{Br}}$ are defined as follows: \begin{eqnarray} L_\ell:& (p,y)\mapsto& - \log p_y \label{eq:BS}\\ L_{\text{Br}}:& (p,y)\mapsto& (1-p_y)^2 + \sum_{y\in \mathcal{Y}\setminus \{y\}} p_y^2\label{eq:oob_log} \end{eqnarray} The log-loss and the Brier loss functions have the following properties: \begin{enumerate} \item[(i)] the Brier Score is only defined on $\mathcal{Y}$ with an addition/subtraction and a norm defined. This is not necessarily the case in our setting where it may be that $\mathcal{Y} = \{\text{win},\,\text{lose},\,\text{draw}\}$. In literature, this is often identified with $\mathcal{Y} = \{1,0,-1\}$, though this identification is arbitrary, and the Brier score may change depending on which numbers are used. On the other hand, the log-loss is defined for any $\mathcal{Y}$ and remains unchanged under any renaming or renumbering of a discrete $\mathcal{Y}$. \item[(ii)] For a joint random variable $(X,Y)$ taking values in $\mathcal{X}\times \mathcal{Y}$, it can be shown that the expected losses $\ensuremath{\mathbb{E}}\left[ L_\ell(f(X),Y) \right]$ are minimized by the ``correct'' prediction $f: x\mapsto \left(p_y = P(Y=y|X=x)\right)_{y\in \mathcal{Y}}$. \end{enumerate} The two loss functions usually are introduced as empirical losses on a test set $\mathcal{T}$, i.e., $$\widehat{\varepsilon}_\mathcal{T}(f) = \frac{1}{\# \mathcal{T}}\sum_{(x,y)\in \mathcal{T}} L_*(x,y).$$ The empirical log-loss is the (negative log-)likelihood of the test predictions. The empirical Brier loss, usually called the ``Brier score'', is a straightforward translation of the mean squared error used in regression problems to the classification setting, as the expected mean squared error of predicted confidence scores. However, in certain cases, the Brier score is hard to interpret and may behave in unintuitive ways~\citep{jewson2004problem}, which may partly be seen as a phenomenon caused by above-mentioned lack of invariance under class re-labelling. Given this and the interpretability of the empirical log-loss as a likelihood, we will use the log-loss as principal evaluation metric in the competitive outcome prediction setting. \subsection{Learning with structured and sequential data\label{sub:Working-with-sequential}} The dependency of the observed data on pairing and time makes the prediction task at hand non-standard. We outline the major consequences for learning and model validation, as well as the implicit assumptions which allow us to tackle these. We will do this separately for the pairing and the temporal structure, as these behave slightly differently. \subsubsection{Conditioning on the pairing} Match outcomes are observed for given pairings $(i,j)$, that is, each feature-label-pair will be of form $(X_{ij},Y_{ij})$, where as above the subscripts denote conditioning on the pairing. Multiple pairings may be observed in the training set, but not all; some pairings may never be observed. This has consequences for both learning and validating models.\\ For {\bf model learning}, it needs to be made sure that the pairings to be predicted \emph{can} be predicted from the pairings observed. With other words, the label $Y^*_{ij}$ in the test set that we want to predict is (in a practically substantial way) dependent on the training set $\mathcal{D} = \{(X^{(1)}_{i_1j_1},Y^{(1)}_{i_1j_1}),\dots,(X^{(N)}_{i_Nj_N},Y^{(N)}_{i_Nj_N}) \}$. Note that smart models will be able to predict the outcome of a pairing even if it has not been observed before, and even if it has, it will use information from other pairings to improve its predictions For various parametric models, ``predictability'' can be related to completability of a data matrix with $Y_{ij}$ as entries. In section~\ref{sec:Methods}, we will relate \'{E}l\H{o} type models to low-rank matrix completion algorithms; completion can be understood as low-rank completion, hence predictability corresponds to completability. Though, exactly working completability out is not the main is not the primary aim of this manuscript, and for our data of interest, the English Premier League, all pairings are observed in any given year, so completability is not an issue. Hence we refer to~\cite{kiraly2015algebraic} for a study of low-rank matrix completability. General parametric models may be treated along similar lines.\\ For model-agnostic {\bf model validation}, it should hold that the expected generalization loss $$\varepsilon (f|i,j) := \ensuremath{\mathbb{E}}_{(X,Y)}\left[L\left(f(X_{ij},i,j),Y_{ij}\right)\right]$$ can be well-estimated by empirical estimation on the test data. For league level team sports data sets, this can be achieved by having multiple years of data available. Since even if not all pairings are observed, usually the set of pairings which \emph{is} observed is (almost) the same in each year, hence the pairings will be similar in the training and test set if whole years (or half-seasons) are included. Further we will consider an average over all observed pairings, i.e., we will compute the empirical loss on the training set $\mathcal{T}$ as $$\widehat {\varepsilon} (f) := \frac{1}{\# \mathcal{T}}\sum_{(X_{ij},Y_{ij})\in \mathcal{T}} L\left(f(X_{ij},i,j),Y_{ij}\right)$$ By the above argument, the set of all observed pairings in any given year is plausibly modelled as similar, hence it is plausible to conclude that this empirical loss estimates some expected generalization loss $$\varepsilon(f) := \ensuremath{\mathbb{E}}_{X,Y,I,J}\left[L\left(f(X_{IJ},I,J),Y_{IJ}\right)\right]$$ where $I,J$ (possibly dependent) are random variables that select teams which are paired. Note that this type of aggregate evaluation does not exclude the possibility that predictions for single teams (e.g., newcomers or after re-structuring) may be inaccurate, but only that the ``average'' prediction is good. Further, the assumption itself may be violated if the whole league changes between training and test set. \subsubsection{Conditioning on time} As a second complication, match outcome data is gathered through time. The data set might display temporal structure and correlation with time. Again, this has consequences for learning and validating the models.\\ For {\bf model learning}, models should be able to intrinsically take into account the temporal structure - though as a baseline, time-agnostic models should be tried. A common approach for statistical models is to assume a temporal structure in the latent variables that determine a team's strength. A different and somewhat ad-hoc approach proposed by \citet{dixon1997modelling} is to assign lower weights to earlier observations and estimate parameter by maximizing the weighted log-likelihood function. For machine learning models, the temporal structure is often encoded with handcrafted features. Similarly, one may opt to choose a model that can be updated as time progresses. A common ad-hoc solution is to re-train the model after a certain amount of time (a week, a month, etc), possibly with temporal discounting, though there is no general consensus about how frequently the retraining should be performed. Further there are genuinely updating models, so-called on-line learning models, which update model parameters after each new match outcome is revealed.\\ For {\bf model evaluation}, the sequential nature of the data poses a severe restriction: Any two data points were measured at certain time points, and one can not assume that they are not correlated through time information. That such correlation exists is quite plausible in the domain application, as a team would be expected to perform more similarly at close time points than at distant time points. Also, we would like to make sure that we fairly test the models for their prediction accuracy - hence the validation experiment needs to mimic the ``real world'' prediction process, in which the predicted outcomes will be in the temporal future of the training data. Hence the test set, in a validation experiment that should quantify goodness of such prediction, also needs to be in the temporal future of the training set. In particular, the common independence assumption that allows application of re-sampling strategies such as the K-fold cross-validation method~\citep{stone1974cross}, which guarantees the expected loss to be estimated by the empirical loss, is violated. In the presence of temporal correlation, the variance of the error metric may be underestimated, and the error metric itself will, in general, be mis-estimated. Moreover, the validation method will need to accommodate the fact that the model may be updated on-line during testing. In literature, model-independent validation strategies for data with temporal structure is largely an unexplored (since technically difficult) area. Nevertheless, developing a reasonable validation method is crucial for scientific model assessment. A plausible validation method is introduced in section \ref{sub:Tunning-and-validation} in detail. It follows similar lines as the often-seen ``temporal cross-validation'' where training/test splits are always temporal, i.e., the training data points are in the temporal past of the test data points, for multiple splits. An earlier occurrence of such a validation strategy may be found in~\cite{hyndman2014forecasting}. This strategy comes without strong estimation guarantees and is part heuristic; the empirical loss will estimate the generalization loss as long as statistical properties do not change as time shifts forward, for example under stationarity assumptions. While this implicit assumption may be plausible for the English Premier League, this condition is routinely violated in financial time series, for example. \newpage \section{Approaches to competitive sports prediction\label{sub:A-brief-summary}} In this section, we give a brief overview over the major approaches to prediction in competitive sports found in literature. Briefly, these are: \begin{enumerate} \item[(a)] The Bradley-Terry models and extensions. \item[(b)] The \'{E}l\H{o} model and extensions. \item[(c)] Bayesian models, especially latent variable models and/or graphical models for the outcome and score distribution. \item[(d)] Supervised machine learning type models that use domain features for prediction. \end{enumerate} (a) The {\bf Bradley-Terry} model is the most influential statistical approach to ranking based on competitive observations~\citep{bradley1952rank}. With its original applications in psychometrics, the goal of the class of Bradley-Terry models is to estimate a hypothesized rank or skill level from observations of pairwise competition outcomes (win/loss). Literature in this branch of research is, usually, primarily concerned not with prediction, but estimation of a ``true'' rank or skill, existence of which is hypothesized, though prediction of (binary) outcome probabilities or odds is well possible within the paradigm. A notable exception is the work of~\cite{stanescu2011rating} where the problem is in essence formulated as supervised prediction, similar to our work. Mathematically, Bradley-Terry models may be seen as log-linear two-factor models that, at the state-of-art are usually estimated by (analytic or semi-analytic) likelihood maximization~\citep{hunter2004mm}. Recent work has seen many extensions of the Bradley-Terry models, most notably for modelling of ties~\cite{rao1967ties}, making use of features~\cite{firth2012bradley} or for explicit modelling the time dependency of skill~\cite{cattelan2013dynamic}.\\ (b) The {\bf \'{E}l\H{o} system} is one of the earliest attempts to model competitive sports and, due to its mathematical simplicity, well-known and widely-used by practitioners~\citep{elo1978rating}. Historically, the \'{E}l\H{o} system is used for chess rankings, to assign a rank score to chess players. Mathematically, the \'{E}l\H{o} system only uses information about the historical match outcomes. The \'{E}l\H{o} system assigns to each team a parameter, the so-called \'{E}l\H{o} rating. The rating reflects a team's competitive skills: the team with higher rating is stronger. As such, the \'{E}l\H{o} system is, originally, not a predictive model or a statistical model in the usual sense. However, the \'{E}l\H{o} system also gives a probabilistic prediction for the \textit{binary} match outcome based on the ratings of two teams. After what appears to have been a period of parallel development that is still partly ongoing, it has been recently noted by members of the Bradley-Terry community that the \'{E}l\H{o} prediction heuristic is mathematically equivalent to the prediction via the simple Bradley-Terry model~\citep[see][, section 2.1]{coulom2007computing}.\\ The \'{E}l\H{o} ratings are learnt via an update rule that is applied whenever a new outcome is observed. This suggested update strategy is inherently algorithmic and later shown to be closely related to on-line learning strategies in neural network; to our knowledge it appears first in \'{E}l\H{o}'s work and is not found in the Bradley-Terry strain.\\ (c) The {\bf Bayesian paradigm} offers a natural framework to model match outcomes probabilistically, and to obtain probabilistic predictions as the posterior predictive distribution. Bayesian parametric models also allow researchers to inject expert knowledge through the prior distribution. The prediction function is naturally given by the posterior distribution of the scores, which can be updated as more observations become available. Often, such models explicitly model not only the outcome but also the score distribution, such as Maher's model~\cite{maher1982modelling} which models outcome scores based on independent Poisson random variables with team-specific means. \citet{dixon1997modelling} extend Maher's model by introducing a correlation effect between the two final scores. More recent models also include dynamic components to model temporal dependence~\citep{glickman1998state,rue2000prediction,crowder2002dynamic}. Most models of this type only use historical match outcomes as features, see \citet{constantinou2012pi} for an exception.\\ (d) More recently, the method-agnostic {\bf supervised machine learning paradigm} has been applied to prediction of match outcomes~\cite{liu2010beating,hucaljuk2011predicting,odachowski2012using}. The main rationale in this branch of research is that the best model is not known, hence a number of off-shelf predictors are tried and compared in a benchmarking experiment. Further, these models are able to make use of features other than previous outcomes easily. However, usually, the machine learning models are trained in-batch, i.e., not following a dynamic update or on-line learning strategy, and they need to be re-trained periodically to incorporate new observations.\\ In this manuscript, we will re-interpret the \'{E}l\H{o} model and its update rule as the simplest case of a structured extension of predictive logistic (or generalized linear) regression models, and the canonical gradient ascent update of its likelihood - hence, in fact, giving it a parametric form not entirely unlike the models mentioned in (b), In the subsequent sections, this will allow us to complement it with the beneficial properties of the machine learning approach (c), most notably the addition of possibly complex features, paired with the \'{E}l\H{o} update rule which can be shown generalize to an on-line update strategy. More detailed literature and technical overview is given given in the subsequent sections. The \'{E}l\H{o} model and its extensions, as well as its novel parametric interpretation, are reviewed in Section~\ref{sub:The-Elo-model}. Section \ref{sub:Statistical-and-latent} reviews other parametric models for predicting final scores. Section \ref{sub:Feature-based-machine-learning} reviews the use of machine learning predictors and feature engineering for sports prediction. \subsection{The Bradley-Terry-\'{E}l\H{o} models\label{sub:The-Elo-model}} This section reviews the Bradley-Terry models, the \'{E}l\H{o} system, and closely related variants. We give the above-mentioned joint formulation, following the modern rationale of considering as a ``model'' not only a generative specification, but also algorithms for training, predicting and updating its parameters. As the first seems to originate with the work of~\cite{bradley1952rank}, and the second in the on-line update heuristic of~\cite{elo1978rating}, we argue that for giving proper credit, it is probably more appropriate to talk about Bradley-Terry-\'{E}l\H{o} models (except in the specific hypothesis testing scenario covered in the original work of Bradley and Terry). Later, we will attempt to understand the \'{E}l\H{o} system as an on-line update of a structured logistic odds model. \subsubsection{The original formulation of the \'{E}l\H{o} model} We will first introduce the original version of the \'{E}l\H{o} model, following~\citep{elo1978rating}. As stated above, its original form which is still applied for determining the official chess ratings (with minor domain-specific modifications), is neither a statistical model nor a predictive model in the usual sense. Instead, the original version is centered around the ratings $\theta_i$ for each team $i$. These ratings are updated via the \'{E}l\H{o} model rule, which we explain (for sake of clarity) for the case of no draws: After observing a match between (home) team $i$ and (away) team $j$, the ratings of teams $i$ and $j$ are updated as \begin{eqnarray} \theta_{i} & \leftarrow&\theta_{i}+K\left[S_{ij}-p_{ij}\right]\label{eq:elo_update}\\ \theta_{j} & \leftarrow&\theta_{j}-K\left[S_{ij}-p_{ij}\right]\nonumber \end{eqnarray} where $K$, often called ``the K factor'', is an arbitrarily chosen constant, that is, a model parameter usually set per hand. $S_{ij}$ is $1$ if team/player $i$ has been observed to win, and $0$ otherwise. Further, $p_{ij}$ is the probability of $i$ winning against $j$ which is predicted from the ratings prior to the update by \begin{equation} p_{ij}=\sigma(\theta_{i}-\theta_{j})\label{eq:elo_prob} \end{equation} where $\sigma: x\mapsto \left(1+\exp(-x)\right)^{-1}$ is the logistic function (which has a sigmoid shape, hence is also often called ``the sigmoid''). Sometimes a home team parameter $h$ is added to account for home advantage, and the predictive equation becomes \begin{equation} p_{ij}=\sigma(\theta_{i}-\theta_{j} + h)\label{eq:Elo_gen} \end{equation} \'{E}l\H{o}'s update rule (Equation~\ref{eq:elo_update}) makes sense intuitively because the term $(S_{ij}-p_{ij})$ can be thought of as the discrepancy between what is expected, $p_{ij}$, and what is observed, $S_{ij}$. The update will be larger if the current parameter setting produces a large discrepancy. However, a concise theoretical justification has not been articulated in literature. If fact, \'{E}l\H{o} himself commented that ``the logic of the equation is evident without algebraic demonstration''~\citep{elo1978rating} - which may be true in his case, but not satisfactory in an applied scientific nor a theoretical/mathematical sense. As an initial issue, it has been noted that the whole model is invariant under joint re-scaling of the $\theta_i$, and the parameters $K,h$, as well as under arbitrary choice of zero for the $\theta_i$ (i.e., adding of a fixed constant $c\in\ensuremath{\mathbb{R}}$ to all $\theta_i$). Hence, fixed domain models will usually choose zero and scale arbitrarily. In chess rankings, for example, the formula includes additional scaling constants of the form $p_{ij}=\left(1+10^{-(\theta_{i}-\theta_{j})/400}\right)^{-1}$; scale and zero are set through fixing some historical chess players' rating, which happens to set the ``interesting'' range in the positive thousands\footnote{A common misunderstanding here is that no \'{E}l\H{o} ratings below zero may occur. This is, in-principle, wrong, though it may be extremely unlikely in practice if the arbitrarily chosen zero is chosen low enough.}. One can show that there are no more parameter redundancies, hence scaling/zeroing turns out not to be a problem if kept in mind. However, three issues are left open in this formulation: \begin{enumerate} \item[(i)] How the ratings for players/teams are determined who have never played a game before. \item[(ii)] The choice of the constant/parameter $K$, the ``K-factor''. \item[(iii)] If a home parameter $h$ is present, its size. \end{enumerate} These issues are usually addressed in everyday practice by (more or less well-justified) heuristics. The parametric and probabilistic supervised setting in the following sections yields more principled ways to address this. step (i) will become unnecessary by pointing out a batch learning method; the constant $K$ in (ii) will turn out to be the learning rate in a gradient update, hence it can be cross-validated or entirely replaced by a different strategy for learning the model. Parameters such as $h$ in (iii) will be interpretable as a logistic regression coefficient. See for this the discussions in Sections~\ref{sub:Training-structured-log-odds},~\ref{sub:Training-structured-log-odds.batch} for (i),(ii), and Section~\ref{sec:specialcases} for (iii). \subsubsection{Bradley-Terry-\'{E}l\H{o} models\label{sub:The-probabilistic-interpretation}} As outlined in the initial discussion, the class of Bradley-Terry models introduced by~\citep{bradley1952rank} may be interpreted as a proper statistical model formulation of the \'{E}l\H{o} prediction heuristic. Despite their close mathematical vicinity, it should be noted that classically Bradley-Terry and \'{E}l\H{o} models are usually applied and interpreted differently, and consequently fitted/learnt differently: while both models estimate a rank or score, the primary (historical) purpose of the Bradley-Terry is to estimate the rank, while the \'{E}l\H{o} system is additionally intended to supply easy-to-compute updates as new outcomes are observed, a feature for which it has historically paid for by lack of mathematical rigour. The \'{E}l\H{o} system is often invoked to predict future outcome probabilities, while the Bradley-Terry models usually do not see predictive use (despite their capability to do so, and the mathematical equivalence of both predictive rules). However, as mentioned above and as noted for example by~\citep{coulom2007computing}, a joint mathematical formulation can be found, and as we will show, the different methods of training the model may be interpreted as variants of likelihood-based batch or on-line strategies. The parametric formulation is quite similar to logistic regression models, or generalized linear models, in that we will use a link function and define a model for the outcome odds. Recall, the odds for a probability $p$ are $\operatorname{odds}(p) := p/(1-p)$, and the logit function is $\operatorname{logit}: x\mapsto \log\operatorname{odds}(x) = \log x - \log(1-x)$ (sometimes also called the ``log-odds function'' for obvious reasons). A straightforward calculation shows that $\operatorname{logit}^{-1} = \sigma$, or equivalently, $\sigma(\operatorname{logit}(x)) = x$ for any $x$, i.e., the logistic function is the inverse of the logit (and vice versa $\operatorname{logit}(\sigma(x)) = x$ by the symmetry theorem for the inverse function). Hence we can posit the following two equivalent equations in latent parameters $\theta_i$ as \emph{definition} of a predictive model: \begin{eqnarray} p_{ij} & = &\sigma(\theta_{i}-\theta_{j}) \label{eq:elo_prob2}\\ \operatorname{logit}(p_{ij})& = &\theta_{i}-\theta_{j}\nonumber \end{eqnarray} That is, $p_{ij}$ in the first equation is interpreted as a predictive probability; i.e., $Y_{ij}\sim \mbox{Bernoulli} (p_{ij})$. The second equation interprets this prediction in terms of a generalized linear model with a response function that is linear in the $\theta_i$. We will write $\theta$ for the vector of $\theta_i$; hence the second equation could also be written, in vector notation, as $\operatorname{logit}(p_{ij}) = \left\langle e_i - e_j, \theta \right\rangle$. Hence, in particular, the matrix with entries $\operatorname{logit}(p_{ij})$ has rank (at most) two. Fitting the above model means estimating its latent variables $\theta$. This may be done by considering the \emph{likelihood} of the latent parameters $\theta_i$ given the training data. For a single observed match outcome $Y_{ij}$, the log-likelihood of $\theta_i$ and $\theta_j$ is $$\ell (\theta_i,\theta_j|Y_{ij}) = Y_{ij}\log (p_{ij}) + (1-Y_{ij})\log (1-p_{ij}),$$ where the $p_{ij}$ on the right hand side need to be interpreted as functions of $\theta_i,\theta_j$ (namely, as in equation~\ref{eq:elo_prob2}). We call $\ell (\theta_i,\theta_j|Y_{ij})$ the \emph{one-outcome} log-likelihood as it is based on a single data point. Similarly, if multiple training outcomes $\mathcal{D} = \{Y_{i_1j_1}^{(1)},\dots,Y_{i_Nj_N}^{(N)}\}$ are observed, the log-likelihood of the vector $\theta$ is $$\ell (\theta|\mathcal{D}) = \sum_{k=1}^N \left[Y^{(k)}_{i_kj_k}\log (p_{i_kj_k}) + (1-Y_{i_kj_k}^{(k)})\log (1-p_{i_kj_k})\right]$$ We will call $\ell (\theta|\mathcal{D})$ the \emph{batch log-likelihood} as the training set contains more than one data point. The derivative of the one-outcome log-likelihood is $$\frac{\partial}{\partial \theta_i} \ell (\theta_i,\theta_j|Y_{ij}) = Y_{ij} (1- p_{ij}) - (1-Y_{ij}) p_{ij} = Y_{ij} - p_{ij},$$ hence the $K$ in the \'{E}l\H{o} update rule (see equation~\ref{eq:elo_update}) may be updated as a gradient ascent rate or learning coefficient in an on-line likelihood update. We also obtain a batch gradient from the batch log-likelihood: $$\frac{\partial}{\partial \theta_i} \ell (\theta|\mathcal{D}) = \left[Q_{i} - \sum_{(i,j)\in G_i} p_{ij}\right],$$ where, $Q_{i}$ is team $i$'s number of wins minus number of losses observed in $\mathcal{D}$, and $G_i$ is the (multi-)set of (unordered) pairings team $i$ has participated in $\mathcal{D}$. The batch gradient directly gives rise to a batch gradient update $$\theta_{i}\leftarrow\theta_{i}+K\cdot\left[Q_{ij}-\sum_{(i,j)\in G_i} p_{ij}\right].$$ Note that the above model highlights several novel, interconnected, and possibly so far unknown (or at least not jointly observed) aspects of Bradley-Terry and \'{E}l\H{o} type models: \begin{enumerate} \item[(i)] The \'{E}l\H{o} system can be seen as a learning algorithm for a logistic odds model with latent variables, the Bradley-Terry model (and hence, by extension, as a full fit/predict specification of a certain one-layer neural network). \item[(ii)] The Bradley-Terry and \'{E}l\H{o} model may simultaneously be interpreted as Bernoulli observation models of a rank two matrix. \item[(iii)] The gradient of the Bradley-Terry model's log-likelihood gives rise to a (novel) batch gradient and a single-outcome gradient ascent update. A single iteration per-sample of the latter (with a fixed update constant) is \'{E}l\H{o}'s original update rule. \end{enumerate} These observations give rise to a new family of models: the structured log-odds models that will be discussed in Section~\ref{sec:Methods} and~\ref{sub:The-structured-log-odds}, together with concomitant gradient update strategies of batch and on-line type. This joint view also makes extensions straightforward, for example, the ``home team parameter'' $h$ in the common extension $p_{ij}=\sigma(\theta_{i}-\theta_{j} + h)$ of the \'{E}l\H{o} system may be interpreted as Bradley-Terry model with an intercept term, with log-odds $\operatorname{logit}(p_{ij}) = \left\langle e_i - e_j, \theta \right\rangle + h$, that is updated by the one-outcome \'{E}l\H{o} update rule. Since more generally, the structured log-odds models arise by combining the parametric form of the Bradley-Terry model with \'{E}l\H{o}'s update strategy, we also argue for synonymous use of the term ``Bradley-Terry-\'{E}l\H{o} models'' whenever Bradley-Terry models are updated batch, or epoch-wise, or whenever they are, more generally, used in a predictive, supervised, or on-line setting. \subsubsection{Glickman's Bradley-Terry-\'{E}l\H{o} model} For sake of completeness and comparison, we discuss the probabilistic formulation of~\citet{glickman1995comprehensive}. In this fully Bayesian take on the Bradley-Terry-\'{E}l\H{o} model, it is assumed that there is a latent random variable $Z_{i}$ associating with team $i$. The latent variables are statistically independent and they follow a specific generalized extreme value (GEV) distribution: \[ Z_{i}\sim\text{GEV}(\theta_{i},\thinspace1,\thinspace0) \] where the mean parameter $\theta_{i}$ varies across teams, and the other two parameters are fixed at one and zero. The density function of $\text{GEV}(\mu,\thinspace1,\thinspace0)$, $\mu\in\mathbb{R}$ is \[ p(x|\mu)=\exp\left(-(x-\mu)\right)\cdot\exp\left(-\exp\left(-(x-\mu)\right)\right) \] The model further assumes that team $i$ wins over team $j$ in a match if and only if a random sample ($Z_{i}$, $Z_{j}$) from the associated latent variables satisfies $Z_{i}>Z_{j}$. It can be shown that the difference variables $(Z_{i}-Z_{j})$ then happen to follow a logistic distribution with mean $\theta_{1}-\theta_{2}$ and scale parameter 1, see~\citep{resnick2013extreme}. Hence, the (predictive) winning probability for team $i$ is eventually given by \'{E}l\H{o}'s original equation~\ref{eq:elo_prob} which is equivalent to the Bradley-Terry-odds. In fact, the arguably strange parametric form for the distribution $f$ of the $Z_i$ makes the impression of being chosen for this particular, singular reason. We argue, that Glickman's model makes unnecessary assumptions through the latent random variables $Z_i$ which furthermore carry an unnatural distribution . This is certainly true in the frequentist interpretation, as the parametric model in Section~\ref{sub:The-probabilistic-interpretation} is not only more parsimonious as it does not assume a process that generates the $\theta_i$, but also it avoids to assume random variables that are never directly observed (such as the $Z_i$). This is also true in the Bayesian interpretation, where a prior is assumed on the $\theta_i$ which then indirectly gives rise to the outcome via the $Z_i$. Hence, one may argue by Occam's razor, that modelling the $Z_i$ is unnecessary, and, as we believe, may put obstacles on the path to the existing and novel extensions in Section~\ref{sec:Methods} that would otherwise appear natural. \subsubsection{Limitations of the Bradley-Terry-\'{E}l\H{o} model and existing remedies\label{sub:Limitations-Elo}} We point out some limitations of the original Bradley-Terry and \'{E}l\H{o} models which we attempt to address in Section~\ref{sec:Methods}. \paragraph{Modelling draws} The original Bradley-Terry and \'{E}l\H{o} models do not model the possibility of a draw. This might be reasonable in official chess tournaments where players play on until draws are resolved. However, in many competitive sports a significant number of matches end up as a draw - for example, in the English Premier League about twenty percent of the matches. Modelling the possibility of draw outcome is therefore very relevant. One of the first extensions of the Bradley-Terry model, the ternary outcome model by~\citet{rao1967ties}, was suggested to address exactly this shortcoming. The strategy for modelling draws in the joint framework, closely following this work, is outlined in Section~\ref{sub:Modeling-tenary-outcomes}. \paragraph{Using final scores in the model} The Bradley-Terry-\'{E}l\H{o} model only takes into account the binary outcome of the match. In sports such as football, the final scores for both teams may contain more information. Generalizations exist to tackle this problem. One approach is adopted by the official FIFA Women's football ranking~\citep{FIFAwoman}, where the actual outcome of the match is replaced by the \textquotedbl{}Actual Match Percentage\textquotedbl{}, a quantity that depends on the final scores. FiveThirtyEight, an online media, proposed another approach~\citep{FiveThirtyEightELO}. It introduces the ``Margin of Victory Multiplier'' in the rating system to adjust the K-factor for different final scores. In a survey paper,~\citet{lasek2013predictive} showed empirical evidence that rating methods that take into account the final scores often outperform those that do not. However, it is worth noticing that the existing methods often rely on heuristics and their mathematical justifications are often unpublished or unknown. We describe a principled way to incorporate final scores in Section~\ref{sub:Using-score-difference} into the framework, following ideas of~\citet{dixon1997modelling}. \paragraph{Using additional features} The Bradley-Terry-\'{E}l\H{o} model only takes into account very limited information. Apart from previous match outcomes, the only feature it uses is the identity of home and away teams. There are many other potentially useful features. For example, whether the team is recently promoted from a lower-division league, or whether a key player is absent from the match. These features may help make better prediction if they are properly modeled. In Section~\ref{sub:covariate}, we extend the Bradley-Terry-\'{E}l\H{o} model to a logistic odds model that can also make use of features, along lines similar to the feature-dependent models of~\citet{firth2012bradley}. \subsection{Domain-specific parametric models\label{sub:Statistical-and-latent}} We review a number of parametric and Bayesian models that have been considered in literature to model competitive sports outcomes. A predominant property of this branch of modelling is that the final scores are explicitly modelled. \subsubsection{Bivariate Poisson regression and extensions} \citet{maher1982modelling} proposed to model the final scores as independent Poisson random variables. If team $i$ is playing at home field against team $j$, then the final scores $S_{i}$ and $S_{j}$ follows \begin{eqnarray*} S_{i} & \sim & \text{Poisson}(\alpha_{i}\beta_{j}h)\\ S_{j} & \sim & \text{Poisson}(\alpha_{j}\beta_{i}) \end{eqnarray*} where $\alpha_{i}$ and $\alpha_{j}$ measure the 'attack' rates, and $\beta_{i}$ and $\beta_{j}$ measure the 'defense' rates of the teams. The parameter $h$ is an adjustment term for home advantage. The model further assumes that all historical match outcomes are independent. The parameters are estimated from maximizing the log-likelihood function of all historical data. Empirical evidence suggests that the Poisson distribution fits the data well. Moreover, the Poisson distribution can be derived as the expected number of events during a fixed time period at a constant risk. This interpretation fits into the framework of competitive team sports. \citet{dixon1997modelling} proposed two modifications to Maher's model. First, the final scores $S_{i}$ and $S_{j}$ are allowed to be correlated when they are both less than two. The model employs a free parameter $\rho$ to capture this effect. The joint probability function of $S_{i},S_{j}$ is given by the bivariate Poisson distribution \ref{eq:DC_prob}: \begin{equation} P(S_{i}=s_{i},S_{j}=s_{j})=\tau_{\lambda,\mu}(s_{i},s_{j})\frac{\lambda^{s_{i}}\exp(-\lambda)}{s_{i}!}\cdot\frac{\lambda^{s_{j}}\exp(-\mu)}{s_{j}!}\label{eq:DC_prob} \end{equation} where \begin{eqnarray*} \lambda & = & \alpha_{i}\beta_{j}h\\ \mu & = & \alpha_{j}\beta_{i} \end{eqnarray*} and \[ \tau_{\lambda,\mu}(s_{i},s_{j})=\begin{cases} 1-\lambda\mu\rho & if\,s_{i}=s_{j}=0,\\ 1+\lambda\rho & if\,s_{i}=0,\,s_{j}=1,\\ 1+\mu\rho & if\,s_{i}=1,\,s_{j}=0,\\ 1-\rho & if\,s_{i}=s_{j}=1,\\ 1 & otherwise. \end{cases} \] The function $\tau_{\lambda,\mu}$ adjusts the probability function so that drawing becomes less likely when both scores are low. The second modification is that the Dixon-Coles model no longer assumes match outcomes are independent through time. The modified log-likelihood function of all historical data is represented as a weighted sum of log-likelihood of individual matches illustrated in equation \ref{eq:weighted_log_lik}, where $t$ represents the time index. The weights are heuristically chosen to decay exponentially through time in order to emphasize more recent matches. \begin{equation} \ell=\sum_{t=1}^{T}\exp(-\xi t)\log\left[P(S_{i}(t)=s_{i}(t),\,S_{j}(t)=s_{j}(t))\right]\label{eq:weighted_log_lik} \end{equation} The parameter estimation procedure is the same as Maher's model. Estimates are obtained from batch optimization of modified log-likelihood. \citet{karlis2003analysis} explored several other possible parametrization of the bivariate Poisson distribution including those proposed by ~\citet{kocherlakota1992bivariate}, and \citet{johnson1997discrete}. The authors performed a model comparison between Maher's independent Poisson model and various bivariate Poisson models based on AIC and BIC. However, the comparison did not include the Dixon-Coles model. \citet{goddard2005regression} performed a more comprehensive model comparison based on their forecasting performance. \subsubsection{Bayesian latent variable models} \citet{rue2000prediction} proposed a Bayesian parametric model based on the bivariate Poisson model. In addition to the paradigm change, there are three major modifications on the parameterization. First of all, the distribution for scores are truncated: scores greater than four are treated as the same category. The authors argued that the truncation reduces the extreme case where one team scores many goals. Secondly, the final scores $S_{i}$ and $S_{j}$ are assumed to be drawn from a mixture model: \[ P(S_{i}=s_{i},S_{j}=s_{j})=(1-\epsilon)P_{DC}+\epsilon P_{Avg} \] The component $P_{DC}$ is the truncated version of the Dixon-Coles model, and the component $P_{Avg}$ is a truncated bivariate Poisson distribution (\ref{eq:DC_prob}) with $\mu$ and $\lambda$ equal to the average value across all teams. Thus, the mixture model encourages a reversion to the mean. Lastly, the attack parameters $\alpha$ and defense parameters $\beta$ for each team changes over time following a Brownian motion. The temporal dependence between match outcomes are reflected by the change in parameters. This model does not have an analytical posterior for parameters. The Bayesian inference procedure is carried out via Markov Chain Monte Carlo method. \citet{crowder2002dynamic} proposed another Bayesian formulation of the bivariate Poisson model based on the Dixon-Coles model. The parametric form remains unchanged, but the attack parameters $\alpha_{i}$'s and defense parameter $\beta_{i}'s$ changes over time following an AR(1) process. Again, the model does not have an analytical posterior. The authors proposed a fast variational inference procedure to conduct the inference. \citet{baio2010bayesian} proposed a further extension to the bivariate Poisson model proposed by~\citet{karlis2003analysis}. The authors noted that the correlation between final scores are parametrized explicitly in previous models, which seems unnecessary in the Bayesian setting. In their proposed model, both scores are \textit{conditionally} independent given an unobserved latent variable. This hierarchical structure naturally encodes the \textit{marginal} dependence between the scores. \subsection{Feature-based machine learning predictors\label{sub:Feature-based-machine-learning}} In recent publications, researchers reported that machine learning models achieved good prediction results for the outcomes of competitive team sports. The strengths of the machine learning approach lie in the model-agnostic and data-centric modelling using available off-shelf methodology, as well as the ability to incorporate features in model building. In this branch of research, the prediction problems are usually studied as a supervised classification problem, either binary (home team win/lose or win/other), or ternary, i.e., where the outcome of a match falls into three distinct classes: home team win, draw, and home team lose. \citet{liu2010beating} applied logistic regression, support vector machines with different kernels, and AdaBoost to predict NCAA football outcomes. For this prediction problem, the researchers hand crafted 210 features. \citet{hucaljuk2011predicting} explored more machine learning predictors in the context of sports prediction. The predictors include naïve Bayes classifiers, Bayes networks, LogitBoost, k-nearest neighbors, Random forest, and artificial neural networks. The models are trained on 20 features derived from previous match outcomes and 10 features designed subjectively by experts (such as team's morale). \citet{odachowski2012using} conducted a similar study. The predictors are commercial implementations of various Decision Tree and ensembled trees algorithms as well as a hand-crafted Bayes Network. The models are trained on a subset of 320 features derived form the time series of betting odds. In fact, this is the only study so far where the predictors have no access to previous match outcomes. \citet{kampakis2014using} explored the possibility of predicting match outcome from Tweets. The authors applied naïve Bayes classifiers, Random forests, logistic regression, and support vector machines to a feature set composed of 12 match outcome features and a number of Tweets features. The Tweets features are derived from unigrams and bigrams of the Tweets. \subsection{Evaluation methods used in previous studies\label{sub:Evaluation-methods-used}} In all studies mentioned in this section, the authors validated their new model on a real data set and showed that the new model performs better than an existing model. However, complication arises when we would like to aggregate and compare the findings made in different papers. Different studies may employ different validation settings, different evaluation metrics, and different data sets. We report on this with a focus on the following, methodologically crucial aspects: \begin{enumerate} \item[(i)] Studies may or may not include a well-chosen benchmark for comparison. If this is not done, then it may not be concluded that the new method outperforms the state-of-art, or a random guess. \item[(ii)] Variable selection or hyper-parameter tuning procedures may or may not be described explicitly. This may raise doubts about the validity of conclusions, as ``hand-tuning'' parameters is implicit overfitting, and may lead to underestimate the generalization error in validation. \item[(iii)] Last but equally importantly, some studies do not report the error measure on evaluation metrics (standard deviation or confidence interval). In these studies, we cannot rule out the possibility that the new model is outperforming the baselines just by chance. \end{enumerate} In table~\ref{tab:Evaluation-methods-used}, we summarize the benchmark evaluation methodology used in previous studies. One may remark that the size of testing data sets vary considerably across different studies, and most studies do not provide a quantitative assessment on the evaluation metric. We also note that some studies perform the evaluation on the training data (i.e., in-sample). Without further argument, these evaluation results only show the goodness-of-fit of the model on the training data, as they do not provide a reliable estimate of the expected predictive performance (on unseen data). \begin{landscape} \begin{table}[H] \begin{centering} \begin{tabular}{|>{\centering}p{4.5cm}|c|c|c|>{\centering}p{5.2cm}|c|c|c|c|} \hline {\small{}Study} & {\small{}Validation} & {\small{}Tuning} & {\small{}Task} & {\small{}Metrics} & {\small{}Baseline} & {\small{}Error} & {\small{}Train} & Test\tabularnewline \hline \hline \citet{lasek2013predictive} & On-line & Yes & Binary & Brier score, Binomial divergence & Yes & Yes & NA & 979\tabularnewline \hline \citet{maher1982modelling} & In-sample & No & Scores & $\chi^{2}$ statistic & No & No & 5544 & NA\tabularnewline \hline \citet{dixon1997modelling} & No & No & Scores & Non-standard & No & No & NA & NA\tabularnewline \hline \citet{karlis2003analysis} & In-sample & Bayes & Scores & AIC, BIC & No & No & 615 & NA\tabularnewline \hline \citet{goddard2005regression} & Custom & Bayes & Scores & log-loss & No & No & 6930 & 4200\tabularnewline \hline \citet{rue2000prediction} & Custom & Bayes & Scores & log-loss & Yes & No & 280 & 280\tabularnewline \hline \citet{crowder2002dynamic} & On-line & Bayes & Tenary & Accuracy & No & No & 1680 & 1680\tabularnewline \hline \citet{baio2010bayesian} & Hold-out & Bayes & Scores & Not reported & No & No & 4590 & 306\tabularnewline \hline \citet{liu2010beating} & Hold-out & No & Binary & Accuracy & Yes & No & 480 & 240\tabularnewline \hline \citet{hucaljuk2011predicting} & Custom & Yes & Binary & Accuracy, F1 & Yes & No & 96 & 96\tabularnewline \hline \citet{odachowski2012using} & 10-fold CV & No & Tenary & Accuracy & Yes & No & 1116 & 1116\tabularnewline \hline \citet{kampakis2014using} & LOO-CV & No & Binary & Accuracy, Cohen's kappa & No & Yes & NR & NR\tabularnewline \hline \end{tabular} \par\end{centering} \caption{Evaluation methods used in previous studies: the column \textquotedbl{}Validation\textquotedbl{} lists the validation settings (\textquotedbl{}Hold-out\textquotedbl{} uses a hold out test set, \textquotedbl{}10-fold CV\textquotedbl{} refers means 10-fold cross validation, \textquotedbl{}LOO-CV\textquotedbl{} means leave-one-out cross validation, \textquotedbl{}On-line\textquotedbl{} means that on-line prediction strategies are used and validation is with a rolling horizon, \textquotedbl{}In-sample\textquotedbl{} means prediction is validated on the same data the model was computed on, \textquotedbl{}Custom\textquotedbl{} refers to a customized evaluation method); the column \textquotedbl{}Tuning\textquotedbl{} lists whether the hyper-parameter tuning method is reported. The Bayesian methods' parameters are ``tuned'' by the usual Bayesian update; \textquotedbl{}Task\textquotedbl{} specifies the prediction task, Binary/Ternary = Binary/Ternary classification, Scores = prediction of final scores; the column \textquotedbl{}Metric\textquotedbl{} lists the evaluation metric(s) reported; \textquotedbl{}Baseline\textquotedbl{} specifies whether baselines are reported; \textquotedbl{}Error\textquotedbl{} specifies whether estimated error of the evaluation metric is reported; \textquotedbl{}Test\textquotedbl{} specifies the number of data points in the test set; \textquotedbl{}Train\textquotedbl{} specifies the number of data points in the training set. For both training and test set, ``NA'' means that the number does not apply in the chosen set-up, for example because there was no test set; ``NR'' means that it does apply and should have been reported but was not reported in the reference.\label{tab:Evaluation-methods-used}} \end{table} \end{landscape} \newpage \section{Extending the Bradley-Terry-\'{E}l\H{o} model\label{sec:Methods}} In this section, we propose a new family of models for the outcome of competitive team sports, the structured log-odds models. We will show that both Bradley-Terry and \'{E}l\H{o} models belong to this family (section \ref{sub:The-structured-log-odds}), as well as logistic regression. We then propose several new models with added flexibility (section \ref{sub:Extensions-of-structured}) and introduce various training algorithms (section \ref{sub:Training-structured-log-odds} and \ref{sub:Regularized-log-odds-matrix}). \subsection{The structured log-odds model\label{sub:The-structured-log-odds}} Recall our principal observations obtained from the joint discussion of Bradley-Terry and \'{E}l\H{o} models in Section~\ref{sub:The-probabilistic-interpretation}: \begin{enumerate} \item[(i)] The \'{E}l\H{o} system can be seen as a learning algorithm for a logistic odds model with latent variables, the Bradley-Terry model (and hence, by extension, as a full fit/predict specification of a certain one-layer neural network). \item[(ii)] The Bradley-Terry and \'{E}l\H{o} model may simultaneously be interpreted as Bernoulli observation models of a rank two matrix. \item[(iii)] The gradient of the Bradley-Terry model's log-likelihood gives rise to a (novel) batch gradient and a single-outcome gradient ascent update. A single iteration per-sample of the latter (with a fixed update constant) is \'{E}l\H{o}'s original update rule. \end{enumerate} We collate these observations in a mathematical model, and highlight relations to well-known model classes, including the Bradley-Terry-\'{E}l\H{o} model, logistic regression, and neural networks. \subsubsection{Statistical definition of structured log-odds models\label{sub:Motivation-and-definition}} In the definition below, we separate added assumptions and notations for the general set-up, given in the paragraph ``Set-up and notation'', from model-specific assumptions, given in the paragraph ``model definition''. Model-specific assumptions, as usual, need not hold for the ``true'' generative process, and the mismatch of the assumed model structure to the true generative process may be (and should be) quantified in a benchmark experiment. \paragraph{Set-up and notation.} We keep the notation of Section~\ref{sec:Background-and-Related}; for the time being, we assume that there is no dependence on time, i.e., the observations follow a generative joint random variable $(X_{ij},Y_{ij})$. The variable $Y_{ij}$ models the outcomes of a pairing where home team $i$ plays against away team $j$. We will further assume that the outcomes are binary home team win/lose = 1/0, i.e., $Y_{ij}\sim\operatorname{Bernoulli} (p_{ij})$. The variable $X_{ij}$ models features relevant to the pairing. From it, we may single out features that pertain to a single team $i$, as a variable $X_i$. Without loss of generality (for example, through introduction of indicator variables), we will assume that $X_{ij}$ takes values in $\ensuremath{\mathbb{R}}^n$, and $X_i$ takes values in $\ensuremath{\mathbb{R}}^m$. We will write $X_{ij,1},X_{ij,2},\dots, X_{ij,n}$ and $X_{i,1},\dots, X_{i,m}$ for the components. The two restrictive assumptions (independence of time, binary outcome) are temporary and are made for expository reasons. We will discuss in subsequent sections how these assumptions may be removed. We have noted that the double sub-index notation easily allows to consider $p_*$ in matrix form. We will denote by $\boldsymbol{P}$ to the (real) matrix with entry $p_{ij}$ in the $i$-th row and $j$-th column. Similarly, we will denote by $\boldsymbol{Y}$ the matrix with entries $Y_{ij}$. We do not fix a particular ordering of the entries in $\boldsymbol{P},\boldsymbol{Y}$ as the numbering of teams does not matter, however the indexing needs to be consistent across $\boldsymbol{P},\boldsymbol{Y}$ and any matrix of this format that we may define later. A crucial observation is that the entries of the matrix $\boldsymbol{P}$ can be plausibly expected to not be arbitrary. For example, if team $i$ is a strong team, we should expect $p_{ij}$ to be larger for all $j$'s. We can make a similar argument if we know team $i$ is a weak team. This means the entries in matrix $\boldsymbol{P}$ are not completely independent from each other (in an algebraic sense); in other words, the matrix $\boldsymbol{P}$ can be plausibly assumed to have an inherent structure. Hence, prediction of $\boldsymbol{Y}$ should be more accurate if the correct structural assumption is made on $\boldsymbol{P}$, which will be one of the cornerstones of the structured log-odds models. For mathematical convenience (and for reasons of scientific parsimony which we will discuss), we will not directly endow the matrix $\boldsymbol{P}$ with structure, but the matrix $\boldsymbol{L}:= \operatorname{logit} (\boldsymbol{P}),$ where as usual and as in the following, univariate functions are applied entry-wise (e.g., $\boldsymbol{P} = \sigma(\boldsymbol{L})$ is also a valid statement and equivalent to the above). \paragraph{Model definition.} We are now ready to introduce the structured log-odds models for competitive team sports. As the name says, the main assumption of the model is that the log-odds matrix $L$ is a structured matrix, alongside with the other assumptions of the Bradley-Terry-\'{E}l\H{o} model in Section~\ref{sub:The-probabilistic-interpretation}. More explicitly, all assumptions of the structured log-odds model may be written as \begin{eqnarray} \boldsymbol{Y} & \sim & \text{Bernoulli}(\boldsymbol{P}) \nonumber \\ \boldsymbol{P} & = & \sigma(\boldsymbol{L}) \label{eq:model_summary} \\ \boldsymbol{L} & & \mbox{satisfies certain structural assumptions} \nonumber \end{eqnarray} where we have not made the structural assumptions on $\boldsymbol{L}$ explicit yet. The matrix $\boldsymbol{L}$ may depend on $X_{ij},X_i$, though a sensible model may be already obtained from a constant matrix $\boldsymbol{L}$ with restricted structure. We will show that the Bradley-Terry and \'{E}l\H{o} models are of this subtype. \paragraph{Structural assumptions for the log-odds.} We list a few structural assumptions that may or may not be present in some form, and will be key in understanding important cases of the structured log-odds models. These may be applied to $\boldsymbol{L}$ as a constant matrix to obtain the simplest class of log-odds models, such as the Bradley-Terry-\'{E}l\H{o} model as we will explain in the subsequent section.\\ {\bf Low-rankness.} A common structural restriction for a matrix (and arguably the most scientifically or mathematically parsimonious one) is the assumption of low rank: namely, that the rank of the matrix of relevance is less than or equal to a specified value $r$. Typically, $r$ is far less than either size of the matrix, which heavily restricts the number of (model/algebraic) degrees of freedom in an $(m\times n)$ matrix from $mn$ to $r(m+n-r)$. The low-rank assumption essentially reflects a belief that the unknown matrix is determined by only a small number of factors, corresponding to a small number of prototypical rows/columns, with the small number being equal to $r$. By the singular value decomposition theorem, any rank $r$ matrix $A\in \ensuremath{\mathbb{R}}^{m\times n}$ may be written as $$A = \sum_{k=1}^r \lambda_k\cdot u^{(k)}\cdot \left(v^{(k)}\right)^\top,\quad\mbox{or, equivalently,}\quad A_{ij} = \sum_{k=1}^r \lambda_k\cdot u^{(k)}_i \cdot v^{(k)}_j$$ for some $\lambda_k\in \ensuremath{\mathbb{R}}$, pairwise orthogonal $u^{(k)}\in \ensuremath{\mathbb{R}}^m$, pairwise orthogonal $v^{(k)}\in \ensuremath{\mathbb{R}}^n$; equivalently, in matrix notation, $A = U\cdot \Lambda \cdot V^\top$ where $\Lambda\in \ensuremath{\mathbb{R}}^{r\times r}$ is diagonal, and $U^\top U = V^\top V = I$ (and where $U\in \ensuremath{\mathbb{R}}^{m\times r}, V \in \ensuremath{\mathbb{R}}^{n\times r}$, and $u^{(k)}, v^{(k)}$ are the rows of $U,V$).\\ {\bf Anti-symmetry.} A further structural assumption is symmetry or anti-symmetry of a matrix. Anti-symmetry arises in competitive outcome prediction naturally as follows: if all matches were played on neutral fields (or if home advantage is modelled separately), one should expect that $p_{ij}=1-p_{ji}$, which means the probability for team $i$ to beat team $j$ is the same regardless of where the match is played (i.e., which one is the home team). Hence, $$\boldsymbol{L}_{ij} = \operatorname{logit} p_{ij} = \log \frac{p_{ij}}{1-p_{ij}} = \log \frac{1-p_{ji}}{p_{ji}} = -\operatorname{logit} p_{ji} = -\boldsymbol{L}_{ji},$$ that is, $\boldsymbol{L}$ is an anti-symmetric matrix, i.e., $\boldsymbol{L} = - \boldsymbol{L}^\top$.\\ {\bf Anti-symmetry and low-rankness.} It is known that any real antisymmetric matrix always has even rank~\citep{eves1980elementary}. That is, if a matrix is assumed to be low-rank and anti-symmetric simultaneously, it will have rank $0$ or $2$ or $4$ etc. In particular, the simplest (non-trivial) anti-symmetric low-rank matrices have rank $2$. One can also show that any real antisymmetric matrix $A\in\ensuremath{\mathbb{R}}^{n\times n}$ with rank $2r'$ can be decomposed as \begin{equation} A=\sum_{k=1}^{r'} \lambda_k\cdot \left(u^{(k)}\cdot \left(v^{(k)}\right)^{\top}-v^{(k)}\cdot \left(u^{(k)}\right)^{\top}\right) , \quad\mbox{or, equivalently,}\quad A_{ij} = \sum_{k=1}^{r'} \lambda_k\cdot \left(u^{(k)}_i \cdot v^{(k)}_j-u^{(k)}_j \cdot v^{(k)}_i\right)\label{eq:anti_decomp} \end{equation} for some $\lambda_k\in \ensuremath{\mathbb{R}}$, pairwise orthogonal $u^{(k)}\in \ensuremath{\mathbb{R}}^m$, pairwise orthogonal $v^{(k)}\in \ensuremath{\mathbb{R}}^n$; equivalently, in matrix notation, $A = U\cdot \Lambda \cdot V^\top - V\cdot \Lambda \cdot U^\top$ where $\Lambda\in \ensuremath{\mathbb{R}}^{r\times r}$ is diagonal, and $U^\top U = V^\top V = I$ (and where $U, V \in \ensuremath{\mathbb{R}}^{n\times r}$, and $u^{(k)}, v^{(k)}$ are the rows of $U,V$).\\ {\bf Separation.} In the above, in general, the factors $u^{(k)},v^{(k)}$ give rise to interaction constants (namely: $u^{(k)}_i\cdot v^{(k)}_j$) that are specific to the pairing. To obtain interaction constants that only depend on one of the teams, one may additionally assume that one of the factors is constant, or a vector of ones (without loss of generality from the constant vector). Similarly, a matrix with constant entries corresponds to an effect independent of the pairing. \paragraph{Learning/fitting of structured log-odds models} will be discussed in Section~\ref{sub:Training-structured-log-odds}. after we have established a number of important sub-cases and the full formulation of the model. In a brief preview summary, it will be shown that the log-likelihood function has in essence the same form for all structured log-odds models. Namely, for any parameter $\theta$ on which $\boldsymbol{P}$ or $\boldsymbol{L}$ may depend, it holds for the (one-outcome log-likelihood) that $$\ell (\theta|Y_{ij}) = Y_{ij}\log (p_{ij}) + (1-Y_{ij})\log (1-p_{ij}) = Y_{ij} \boldsymbol{L}_{ij} + \log(1-p_{ij}).$$ Similarly, for its derivative one obtains $$\frac{\partial \ell (\theta|Y_{ij})}{\partial \theta} = \frac{Y_{ij}}{p_{ij}}\cdot \frac{\partial p_{ij}}{\partial \theta} - \frac{1-Y_{ij}}{1-p_{ij}}\cdot \frac{\partial p_{ij}}{\partial \theta},$$ where the partial derivatives on the right hand side will have a different form for different structural assumptions, while the general form of the formula above is the same for any such assumption. Section~\ref{sub:Training-structured-log-odds} will expand on this for the full model class. \subsubsection{Important special cases \label{sec:specialcases}} We highlight a few important special types of structured log-odds models that we have already seen, or that are prototypical for our subsequent discussion:\\ {\bf The Bradley-Terry-model} and via identification the \'{E}l\H{o} system are obtained under the structural assumption that $\boldsymbol{L}$ is anti-symmetric and of rank 2 with one factor vector of ones. Namely, recalling equation \ref{eq:elo_prob2}, we recognize that the log-odds matrix $\boldsymbol{L}$ in the Bradley-Terry model is given by $\boldsymbol{L}_{ij}=\theta_{i}-\theta_{j}$, where $\theta_{i}$ and $\theta_{j}$ are the \'{E}l\H{o} ratings. Using the rule of matrix multiplication, one can verify that this is equivalent to $$ \boldsymbol{L}=\theta\cdot\ensuremath{\mathbbm{1}}^{\top}-\ensuremath{\mathbbm{1}}\cdot\theta^{\top} $$ where $\ensuremath{\mathbbm{1}}$ is a vector of ones and $\theta$ is the vector of \'{E}l\H{o} ratings. For general $\theta$, the log-odds matrix will have rank two (general = except if $\theta_i=\theta_j$ for all $i,j$). \\ By the exposition above, making the three assumptions is equivalent to positing the Bradley-Terry or \'{E}l\H{o} model. Two interesting observations may be made: First, the ones-vector being a factor entails that the winning chance depends only on the difference between the team-specific ratings $\theta_i,\theta_j$, without any further interaction term. Second, the entry-wise exponential of $\boldsymbol{L}$ is a matrix of rank (at most) one.\\ {\bf The popular \'{E}l\H{o} model with home advantage} is obtained from the Bradley-Terry-\'{E}l\H{o} model under the structural assumption that $\boldsymbol{L}$ is a sum of low-rank matrix and a constant; equivalently, from an assumption of rank 3 which is further restricted by fixing some factors to each other or to vectors of ones. More precisely, from equation~\ref{eq:Elo_gen}, one can recognize that for the \'{E}l\H{o} model with home advantage, the log-odds matrix decomposes as $$ \boldsymbol{L}=\theta\cdot\ensuremath{\mathbbm{1}}^{\top}-\ensuremath{\mathbbm{1}}\cdot\theta^{\top}+h\cdot \ensuremath{\mathbbm{1}}\cdot\ensuremath{\mathbbm{1}}^{\top} $$ Note that the log-odds matrix is no longer antisymmetric due to the constant term with home advantage parameter $h$ that is (algebraically) independent of the playing teams. Also note that the anti-symmetric part, i.e., $\frac{1}{2}(\boldsymbol{L} + \boldsymbol{L}^\top)$, is equivalent to the constant-free \'{E}l\H{o} model's log-odds, while the symmetric part, i.e., $\frac{1}{2}(\boldsymbol{L} - \boldsymbol{L}^\top),$ is exactly the new constant home advantage term.\\ {\bf More factors: full two-factor Bradley-Terry-\'{E}l\H{o} models} may be obtained by dropping the separation assumption from either Bradley-Terry-\'{E}l\H{o} model, i.e., keeping the assumption of anti-symmetric rank two, but allowing an arbitrary second factor not necessarily being the vector of ones. The team's competitive strength is then determined by two interacting factors $u$, $v$, as \begin{equation} \boldsymbol{L} =u\cdot v^{\top}-v\cdot u^{\top}\label{eq:fac2_log_odds}. \end{equation} Intuitively, this may cover, for example, a situation where the benefit from being much better may be smaller (or larger) than being a little better, akin to a discounting of extremes. If the full two-factor model predicts better than the Bradley-Terry-\'{E}l\H{o} model, it may certify for different interaction in different ranges of the \'{E}l\H{o} scores. A home advantage factor (a constant) may or may not be added, yielding a model of total rank 3.\\ {\bf Raising the rank: higher-rank Bradley-Terry-\'{E}l\H{o} models} may be obtained by model by relaxing assumption of rank 2 (or 3) to higher rank. We will consider the next more expressive model, of rank four. The \emph{rank four Bradley-Terry-\'{E}l\H{o} model} which we will consider will add a full anti-symmetric rank two summand to the log-odds matrix, which hence is assumed to have the following structure: \begin{equation} \boldsymbol{L}=u\cdot v^{\top}-v\cdot u^{\top}+\theta\cdot\ensuremath{\mathbbm{1}}^{\top}-\ensuremath{\mathbbm{1}}\cdot\theta^{\top}\label{eq:rank4_log_odds} \end{equation} The team's competitive strength is captured by three factors $u$, $v$ and $\theta$; note that we have kept the vector of ones as a factor. Also note that setting either of $u,v$ to $\ensuremath{\mathbbm{1}}$ would \emph{not} result in a model extension as the resulting matrix would still have rank two. The rank-four model may intuitively make sense if there are (at least) two distinguishable qualities determining the outcome - for example physical fitness of the team and strategic competence. Whether there is evidence for the existence of more than one factor, as opposed to assuming just a single one (as a single summary quantifier for good vs bad) may be checked by comparing predictive capabilities of the respective models. Again, a home advantage factor may be added, yielding a log-odds matrix of total rank 5. We would like to note that a mathematically equivalent model, as well as models with more factors, have already been considered by~\citet{stanescu2011rating}, though without making explicit the connection to matrices which are of low rank, anti-symmetric or structured in any other way.\\ {\bf Logistic regression} may also be obtained as a special case of structured log-odds models. In the simplest form of logistic regression, the log-odds matrix is a linear functional in the features. Recall that in the case of competitive outcome prediction, we consider pairing features $X_{ij}$ taking values in $\ensuremath{\mathbb{R}}^n$, and team features $X_i$ taking values in $\ensuremath{\mathbb{R}}^m$. We may model the log-odds matrix as a linear functional in these, i.e., model that $$ \boldsymbol{L}_{ij} = \langle \lambda^{(ij)}, X_{ij}\rangle + \langle \beta^{(i)}, X_{i}\rangle + \langle \gamma^{(j)}, X_{j}\rangle + \alpha, $$ where $\lambda^{(ij)}\in \ensuremath{\mathbb{R}}^n, \beta^{(i)},\gamma^{(j)}\in \ensuremath{\mathbb{R}}^m, \alpha\in \ensuremath{\mathbb{R}}$. If $\lambda^{(ij)} = 0$, we obtain a simple two-factor logistic regression model. In the case that there is only two teams playing only with each other, or (the mathematical correlate of) a single team playing only with itself, the standard logistic regression model is recovered. Conversely, a way to obtain the Bradley-Terry model as a special case of classical logistic regression is as follows: consider the indicator feature $X_{ij}:= e_i - e_j$. With a coefficient vector $\beta$, the logistic odds will be $\boldsymbol{L}_{ij}=\langle \beta, X_{ij}\rangle = \beta_{i}-\beta_{j}$. In this case, the coefficient vector $\beta$ corresponds to a vector of \'{E}l\H{o} ratings. Note that in the above formulation, the coefficient vectors $\lambda^{(ij)}, \beta^{(i)}$ are explicitly allowed to depend on the teams. If we further allow $\alpha$ to depend on both teams, the model includes the Bradley-Terry-\'{E}l\H{o} models above as well; we could also make the $\beta$ depend on both teams. However, allowing the coefficients to vary in full generality is not very sensible, and as for the constant term which may yield the \'{E}l\H{o} model under specific structural assumptions, we need to endow all model parameters with structural assumptions to prevent combinatorial explosion of parameters and overfitting. These subtleties in incorporating features, and more generally how to combine features with hidden factors will be discussed in the separate, subsequent Section~\ref{sub:covariate}. \subsubsection{Connection to existing model classes} Close connections to three important classes of models become apparent through the discussion in the previous sections:\\ {\bf Generalized Linear Models} generalize both linear and log-linear models (such as the Bradley-Terry model) through so-called link functions, or more generally (and less classically) link distributions, combined with flexible structural assumptions on the target variable. The generalization aims at extending prediction with linear functionals through the choice of link which is most suitable for the target~\citep[for an overview, see][]{mccullagh1989generalized}. Particularly relevant for us are generalized linear models for ordinal outcomes which includes the ternary (win/draw/lose) case, as well as link distributions for scores. Some existing extensions of this type, such as the ternay outcome model of~\cite{rao1967ties} and the score model of~\citep{maher1982modelling}, may be interpreted as specific choices of suitable linking distributions. How these ideas may be used as a component of structured log-odds models will be discussed in Section~\ref{sub:Extensions-of-structured}.\\ {\bf Neural Networks} (vulgo ``deep learning'') may be seen as a generalization of logistic regression which is mathematically equivalent to a single-layer network with softmax activation function. The generalization is achieved through functional nesting which allows for non-linear prediction functionals, and greatly expands the capability of regression models to handle non-linear features-target-relations \citep[for an overview, see][]{schmidhuber2015deep}. A family of ideas which immediately transfers to our setting are strategies for training and model fitting. In particular, on-line update strategies as well as training in batches and epochs yields a natural and principled way to learn Bradley-Terry-\'{E}l\H{o} and log-odds models in an on-line setting or to potentially improve its predictive power in a static supervised learning setting. A selection of such training strategies for structured log-odds models will be explored in Section~\ref{sub:Training-structured-log-odds}. This will not include variants of stochastic gradient descent which we leave to future investigations. It is also beyond the scope of this manuscript to explore the implications of using multiple layers in a competitive outcome setting, though it seems to be a natural idea given the closeness of the model classes which certainly might be worth exploring in further research.\\ {\bf Low-rank Matrix Completion} is the supervised task of filling in some missing entries of a low-rank matrix, given others and the information that the rank is small. Many machine learning applications can be viewed as estimation or completion of a low-rank matrix, and different solution strategies exist~\citep{CanRec09,CandesTao,NegWai11,KMO10,Mek09,so2007theory,vounou2010discovering,kiraly2015algebraic}. The feature-free variant of structured log-odds models (see Section~\ref{sub:Motivation-and-definition}) may be regarded as a low-rank matrix completion problem: from observations of $Y_{ij}\sim\operatorname{Bernoulli}(\sigma(\boldsymbol{L}_{ij})),$ for $(i,j)\in E$ where the set of observed pairings $E$ may be considered as the set of observed positions, estimate the underlying low-rank matrix $\boldsymbol{L}$, or predict $Y_{k\ell}$ for some $(k,\ell)$ which is possibly not contained in $E$. One popular low-rank matrix completion strategy in estimating model parameters or completing missing entries uses the idea of replacing the discrete rank constraint by a continuous spectral surrogate constraint, penalizing not rank but the nuclear norm ( = trace norm = 1-Schatten-norm) of the matrix modelled to have low rank~\citep[an early occurrence of this idea may be found in][]{SreShr05}. The advantage of this strategy is that no particular rank needs to be a-priori assumed, instead the objective implicitly selects a low rank through a trade-off with model fit. This strategy will be explored in Section~\ref{sub:Regularized-log-odds-matrix} for the structured log-odds models. Further, identifiability of the structured log-odds models is closely linked to the question whether a given entry of a low-rank matrix may be reconstructed from those which have been observed. Somewhat straightforwardly, one may see that reconstructability in the algebraic sense, see~\citep{kiraly2015algebraic}, is a necessary condition for identifiability under respective structure assumptions. However, even though many results of~\citep{kiraly2015algebraic} directly generalize, completability of anti-symmetric low-rank matrices with or without vectors of ones being factors has not been studied explicitly in literature to our knowledge, hence we only point this out as an interesting avenue for future research. We would like to note that a more qualitative and implicit mention of this, in the form of noticing connection to the general area of collaborative filtering, is already made in~\cite[Section~6.3 of][]{paterek2012predicting}, in reference to the multi-factor models studied by~\citet{stanescu2011rating}. \subsection{Predicting non-binary labels with structured log-odds models\label{sub:Extensions-of-structured}} In Section~\ref{sub:The-structured-log-odds}, we have not introduced all aspects of structured log-odds models in favour of a clearer exposition. In this section, we discuss these aspects that are useful for the domain application more precisely, namely: \begin{enumerate} \item[(i)] How to use features in the prediction. \item[(ii)] How to model ternary match outcomes (win/draw/lose) or score outcomes. \item[(iii)] How to train the model in an on-line setting with a batch/epoch strategy. \end{enumerate} For point (i) ``using features'', we will draw from the structured log-odds models' closeness to logistic regression; the approach to (ii) ``general outcomes'' may be treated by choosing an appropriate link function as with generalized linear models; for (iii), parallels may be drawn to training strategies for neural networks. \subsubsection{The structured log-odds model with features\label{sub:covariate}} As highlighted in Section~\ref{sec:specialcases}, pairing features $X_{ij}$ taking values in $\ensuremath{\mathbb{R}}^n$, and team features $X_i$ taking values in $\ensuremath{\mathbb{R}}^m$ may be incorporated by modelling the log-odds matrix as \begin{equation} \boldsymbol{L}_{ij} = \langle \lambda^{(ij)}, X_{ij}\rangle + \langle \beta^{(i)}, X_{i}\rangle + \langle \gamma^{(j)}, X_{j}\rangle + \alpha_{ij}, \label{eq:logft-oneentry} \end{equation} where $\lambda^{(ij)}\in \ensuremath{\mathbb{R}}^n, \beta^{(i)},\gamma^{(j)}\in \ensuremath{\mathbb{R}}^m, \alpha_{ij}\in \ensuremath{\mathbb{R}}$. Note that differently from the simpler exposition in Section~\ref{sec:specialcases}, we allow all coefficients, including $\alpha_{ij}$, to vary with $i$ and $j$. Though, allowing $\lambda^{(ij)}$ and $\beta^{(i)},\gamma^{(j)}$ to vary completely freely may lead to over-parameterisation or overfitting, similarly to an unrestricted (full rank) log-odds matrix of $\alpha_{ij}$ in the low-rank \'{E}l\H{o} model, especially if the number of distinct observed pairings is of similar magnitude as the number of total observed outcomes. Hence, structural restriction of the degrees of freedom may be as important for the feature coefficients as for the constant term. The simplest such assumption is that all $\lambda^{(ij)}$ are equal, all $\beta^{(i)}$ are equal, and all $\gamma^{(j)}$ are equal, i.e., assuming that $$ \boldsymbol{L}_{ij} = \langle \lambda, X_{ij}\rangle + \langle \beta, X_{i}\rangle + \langle \gamma, X_{j}\rangle + \alpha_{ij}, $$ for some $\lambda\in \ensuremath{\mathbb{R}}^n, \beta,\gamma\in \ensuremath{\mathbb{R}}^m,$ and where $\alpha_{ij}$ may follow the assumptions of the feature-free log-odds models. This will be the main variant which will refer to as the structured log-odds model with features. However, the assumption that constants are independent of the pairing $i,j$ may be too restrictive, as it may be plausible that, for example, teams of different strength profit differently from or are impaired differently by the same circumstance, e.g., injury of a key player. To address such a situation, it is helpful to re-write Equation~\ref{eq:logft-oneentry} in matrix form: $$ \boldsymbol{L} = \boldsymbol{\lambda} \circ_3 \boldsymbol{X} + \boldsymbol{\beta} \cdot \boldsymbol{X}_*^\top + \boldsymbol{X}_*\cdot \boldsymbol{\gamma}^\top + \boldsymbol{\alpha}, $$ where $\boldsymbol{X}_*$ is the matrix whose rows are the $X_i$, where $\boldsymbol{\beta}$ and $\boldsymbol{\gamma}$ are matrices whose rows are the $\beta^{(i)},\gamma^{(j)}$, and where $\boldsymbol{\alpha}$ is the matrix with entries $\alpha_{ij}$. The symbols $\boldsymbol{\lambda}$ and $\boldsymbol{X}$ denote tensors of degree 3 (= 3D-arrays) whose $(i,j,k)$-th elements are $\lambda^{(ij)}_k$ and $X_{ij,k}$. The symbol $\circ_3$ stands for the index-wise product of degree-3-tensors which eliminates the third index and yields a matrix, i.e., $$\left(\boldsymbol{\lambda} \circ_3 \boldsymbol{X}\right)_{ij} = \sum_{k=1}^n \lambda^{(ij)}_k\cdot X_{ij,k}.$$ A natural parsimony assumption for $\boldsymbol{\gamma},\boldsymbol{\beta},\boldsymbol{\alpha}$, and $\boldsymbol{\lambda}$ is, again, that of low-rank. For the matrices, $\boldsymbol{\gamma},\boldsymbol{\beta},\boldsymbol{\alpha}$, one can explore the same structural assumptions as in Section~\ref{sub:Motivation-and-definition}: low-rankness and factors of one are reasonable to assume for all three, while anti-symmetry seems natural for $\boldsymbol{\alpha}$ but not for $\boldsymbol{\beta},\boldsymbol{\gamma}$. A low tensor rank (Tucker or Waring) appears to be a reasonable assumption for $\boldsymbol{\lambda}$. As an ad-hoc definition of tensor (decomposition) rank of $\boldsymbol{\lambda}$, one may take the minimal $r$ such that there is a decomposition into real vectors $u^{(i)},v^{(i)},w^{(i)}$ such that $$\boldsymbol{\lambda}_{ijk} = \sum_{\ell=1}^r u^{(\ell)}_i\cdot v^{(\ell)}_j\cdot w^{(\ell)}_k.$$ Further reasonable assumptions are anti-symmetry in the first two indices, i.e., $\boldsymbol{\lambda}_{ijk} = - \boldsymbol{\lambda}_{jik}$, as well as some factors $u^{(\ell)}, v^{(\ell)}$ being vectors of ones. Exploring these possible structural assumptions on the coefficients of features in experiments is possibly interesting both from a theoretical and practical perspective, but beyond the scope of this manuscript. Instead, we will restrict ourselves to the case of $\boldsymbol{\lambda} = 0$, of $\boldsymbol{\beta}$ and $\boldsymbol{\gamma}$ having the same entry each, and $\boldsymbol{\alpha}$ following one of the low-rank assumptions in structural assumptions as in Section~\ref{sub:Motivation-and-definition} as in the feature-free model. We would like to note that variants of the Bradley-Terry model with features have already been proposed and implemented in the \texttt{BradleyTerry2} package for R~\cite{firth2012bradley}, though isolated from other aspects of the Bradley-Terry-\'{E}l\H{o} model class such as modelling draws, or structural restrictions on hidden variables or the coefficient matrices and tensors, and the \'{E}l\H{o} on-line update. \subsubsection{Predicting ternary outcomes\label{sub:Modeling-tenary-outcomes}} This section addresses the issue of modeling draws raised in \ref{sub:Limitations-Elo}. When it is necessary to model draws, we assume that the outcome of a match is an ordinal random variable of three so-called levels: win $\succ$ draw $\succ$ lose. The draw is treated as a middle outcome. The extension of structured log-odds model is inspired by an extension of logistic regression: the Proportional Odds model. The Proportional Odds model is a well-known family of models for ordinal random variables~\citep{mccullagh1980regression}. It extends the logistic regression to model ordinary target variables. The model parameterizes the logit transformation of the cumulative probability as a linear function of features. The coefficients associated with feature variables are shared across all levels, but there is an intercept term $\alpha_{k}$ which is specific to a certain level. For a generic feature-label distribution $(X,Y)$, where $X$ takes values in $\ensuremath{\mathbb{R}}^n$ and $Y$ takes values in a discrete set $\mathcal{Y}$ of ordered levels, the proportional odds model may be written as \[ \log\left(\frac{P(Y \succ k)}{P(Y \preceq k)}\right)=\alpha_{k}+\langle \beta, X\rangle \] where $\beta\in\ensuremath{\mathbb{R}}^n, \alpha_k\in \ensuremath{\mathbb{R}}$, and $k\in \mathcal{Y}$. The model is called Proportional Odds model because the odds for any two different levels $k$, $k'$, given an observed feature set, are proportional with a constant that does not depend on features; mathematically, \[ \left(\frac{P(Y\succ k)}{P(Y\preceq k)}\right)/\left(\frac{P(Y\succ k')}{P(Y\preceq k')}\right)=\exp(\alpha_{k}-\alpha_{k'}) \] Using a similar formulation in which we closely follow~\citet{rao1967ties}, the structured log-odds model can be extended to model draws, namely by setting \begin{eqnarray*} \log\left(\frac{P(Y_{ij}=\text{win})}{P(Y_{ij}=\text{draw})+P(Y_{ij}=\text{lose})}\right) & = & \boldsymbol{L}_{ij}\\ \log\left(\frac{P(Y_{ij}=\text{draw})+P(Y_{ij}=\text{win})}{P(Y_{ij}=\text{lose})}\right) & = & \boldsymbol{L}_{ij}+\phi \end{eqnarray*} where $\boldsymbol{L}_{ij}$ is the entry in structured log-odds matrix and $\phi$ is a free parameter that affects the estimated probability of a draw. Under this formulation, the probabilities for different outcomes are given by \begin{eqnarray*} P(Y_{ij}=\text{win}) & = & \sigma(\boldsymbol{L}_{ij})\\ P(Y_{ij}=\text{lose}) & = & \sigma(-\boldsymbol{L}_{ij}-\phi)\\ P(Y_{ij}=\text{draw}) & = & \sigma(-\boldsymbol{L}_{ij})-\sigma(-\boldsymbol{L}_{ij}-\phi) \end{eqnarray*} Note that this may be seen as a choice of ordinal link distribution in a ``generalized'' structured odds model, and may be readily combined with feature terms as in Section~\ref{sub:covariate}. \subsubsection{Predicting score outcomes\label{sub:Using-score-difference}} Several models have been considered in Section \ref{sub:Limitations-Elo} that use score differences to update the \'{E}l\H{o} ratings. In this section, we derive a principled way to predict scores, score differences and/or learn from scores or score differences. Following the analogy to generalized linear models, we will be able to tackle this by using a suitable linking distribution, the model can utilize additional information in final scores. The simplest natural assumption one may make on scores is obtained from assuming a dependent scoring process, i.e., both home and away team's scores are Poisson-distributed with a team-dependent parameter and possible correlation. This assumption is frequently made in literature~\citep{maher1982modelling,dixon1997modelling,crowder2002dynamic} and eventually leads to a (double) Poisson regression when combined with structured log-odds models. The natural linking distributions for differences of scores are Skellam distributions which are obtained as difference distributions of two (possibly correlated) Poisson distributions~\citep{skellam1945frequency}, as it has been suggested by~\citet{karlis2009bayesian}. In the following, we discuss only the case of score differences in detail, predicting both team's score distributions can be obtained similarly as predicting the correlated Poisson variables with the respective parameters instead of the Skellam difference distribution. We first introduce some notation. As a difference of Poisson distributions whose support is $\ensuremath{\mathbb{N}}$, the support of a Skellam distribution is the set of integers $\ensuremath{\mathbb{Z}}$. The probability mass function of Skellam distributions takes two positive parameters $\mu_{1}$ and $\mu_{2}$, and is given by \[ P(z|\mu_{1},\mu_{2})=e^{-(\mu_{1}+\mu_{2})}\left(\frac{\mu_{1}}{\mu_{2}}\right)^{z/2}I_{|z|}(2\sqrt{\mu_{1}\mu_{2}}) \] where $I_{\alpha}$ is the modified Bessel function of first kind with parameter $\alpha$, given by \[ I_{\alpha}(x):=\sum_{k=0}^{\infty}\frac{1}{k!\cdot \Gamma(\alpha+k+1)}\cdot \left(\frac{x}{2}\right)^{2k + \alpha} \] If random variables $Z_{1}$ and $Z_{2}$ follow Poisson distributions with mean parameters $\lambda_{1}$ and $\lambda_{2}$ respectively, and their correlation is $\rho=\operatorname{Corr} (Z_1,Z_2)$, then their difference $\tilde{Z}=Z_{1}-Z_{2}$ follows a Skellam distribution with mean parameters $\mu_{1}=\lambda_{1}-\rho\sqrt{\lambda_{1}\lambda_{2}}$ and $\mu_{2}=\lambda_{2}-\rho\sqrt{\lambda_{1}\lambda_{2}}$. Now we are ready to extend the structured log-odds model to incorporate historical final scores. We will use a Skellam distribution as the linking distribution: we assume that the score difference of a match between team $i$ and team $j$, that is, $Y_{ij}$ (taking values in $\mathcal{Y} = \ensuremath{\mathbb{Z}}$), follows a Skellam distribution with (unknown) parameter $\exp(\boldsymbol{L}_{ij})$ and $\exp(\boldsymbol{L}'_{ij})$. Note that hence there are now \emph{two} structured $\boldsymbol{L},\boldsymbol{L}'$ , each of which may be subject to constraints such as in Section~\ref{sub:Motivation-and-definition}, or constraints connecting them to each other, and each of which may depend on features as outlined in Section~\ref{sub:covariate}. A simple (and arguably the simplest sensible) structural assumption is that $\boldsymbol{L}^\top= \boldsymbol{L}'$, is rank two, with factors of ones, as follows: $$\boldsymbol{L} = \ensuremath{\mathbbm{1}}\cdot u^\top + v\cdot \ensuremath{\mathbbm{1}}^\top;$$ equivalently, that $\exp(\boldsymbol{L})$ has rank one and only non-negative entries. As mentioned above, features such as home advantage may be added to the structured parameter matrix $\boldsymbol{L}$ or $\boldsymbol{L}'$ using the way introduced in Section~\ref{sub:covariate}. Also note that the above yields a strategy to make ternary predictions while training on the scores. Namely, a prediction for ternary match outcomes may simply be derived from predicted score differences $\tilde{Y}_{ij}$, through defining \begin{eqnarray*} P(\text{win}) & = & P({Y}_{ij}>0)\\ P(\text{draw}) & = & P({Y}_{ij}=0)\\ P(\text{lose}) & = & P({Y}_{ij}<0) \end{eqnarray*} In contrast to the direct method in Section~\ref{sub:Modeling-tenary-outcomes}, the probability of draw can now be calculated without introducing an additional cut-off parameter. \subsection{Training of structured log-odds models\label{sub:Training-structured-log-odds}} In this section, we introduce batch and on-line learning strategies for structured log-odds models, based on gradient descent on the parametric likelihood. The methods are generic in the sense that the exact structural assumptions of the model will affect the exact form of the log-likelihood, but not the main algorithmic steps. \subsubsection{The likelihood of structured log-odds models} We derive a number of re-occurring formulae for the likelihood of structured log-odds models. For this, we will subsume all structural assumptions on $\boldsymbol{L}$ in the form of a parameter $\theta$ on which $\boldsymbol{L}$ may depend, say in the cases mentioned in Section~\ref{sec:specialcases}. In each case, we consider $\theta$ to be a real vector of suitable length. The form of the learning step(s) is slightly different depending on the chosen link function/distribution, hence we start with our derivations in the case of binary prediction, where $\mathcal{Y} = \{1,0\}$, and discuss ternary and score outcomes further below.\\ In the case of {\bf binary prediction}, it holds for the (one-outcome log-likelihood) that \begin{align*} \ell (\theta|X_{ij},Y_{ij})& = Y_{ij}\log (p_{ij}) + (1-Y_{ij})\log (1-p_{ij})\\ & = Y_{ij} \boldsymbol{L}_{ij} + \log(1-p_{ij}) = Y_{ij} \boldsymbol{L}_{ij} - \boldsymbol{L}_{ij} + \log(p_{ij}). \end{align*} Similarly, for its derivative one obtains \begin{eqnarray} \frac{\partial \ell (\theta|X_{ij},Y_{ij})}{\partial \theta} & = & \frac{\partial}{\partial\theta} \left[Y_{ij}\log p_{ij}+\left(1-Y_{ij}\right)\log(1-p_{ij})\right] \nonumber \\ & = & \left[\frac{Y_{ij}}{p_{ij}} - \frac{1-Y_{ij}}{1-p_{ij}}\right]\cdot \frac{\partial p_{ij}}{\partial \theta} \label{eq:derivative}\\ & = & \left[Y_{ij}-p_{ij}\right]\cdot\frac{\partial}{\partial\theta}\boldsymbol{L}_{ij} \nonumber \end{eqnarray} where we have used definitions for the first equality, the chain rule for the second, and for the last equality that $$\frac{\partial }{\partial x} \sigma (x) = \sigma(x) (1-\sigma(x)),\;\mbox{hence}\;\; \frac{\partial }{\partial x} p_{ij} = p_{ij}(1-p_{ij})\frac{\partial }{\partial x} \boldsymbol{L}_{ij}.$$ In all the above, derivatives with respect to $\theta$ are to be interpreted as (entry-wise) vector derivatives; equivalently, the equations hold for any coordinate of $\theta$ in place of $\theta$. As an important consequence of the above, the derivative of the log-likelihood almost has the same form (\ref{eq:derivative}) for different model variants, and differences only occur in the gradient term $\frac{\partial}{\partial\theta_{i}}L_{ij}$; the term $\left[Y_{ij}-p_{ij}\right]$ may be interpreted as a prediction residual, with $p_{ij}$ depending on $X_{ij}$ for a model with features. This fact enables us to obtain unified training strategies for a variety of structured log-odds models.\\ For {\bf multiple class prediction} as in the ordinal or score case, the above generalizes relatively straightforwardly. The one-outcome log-likelihood is given as \begin{align*} \ell (\theta|X_{ij},Y_{ij})& = \sum_{y\in \mathcal{Y}} Y_{ij}[y] \log p_{ij}[y] \end{align*} where, abbreviatingly, $p_{ij}[y] = P(Y_{ij} = y)$, and $Y_{ij}[y]$ is one iff $Y_{ij}$ takes the value $y$, otherwise zero. For the derivative of the log-likelihood, one hence obtains \begin{eqnarray} \frac{\partial \ell (\theta|X_{ij},Y_{ij})}{\partial \theta} & = & \frac{\partial}{\partial\theta} \sum_{y\in \mathcal{Y}} Y_{ij}[y] \log (p_{ij}[y]) \nonumber \\ & = & \sum_{y\in \mathcal{Y}} \frac{Y_{ij}[y]}{p_{ij}[y]}\cdot \frac{\partial p_{ij}[y]}{\partial \theta} \nonumber\\ & = & \sum_{y\in \mathcal{Y}}\left[Y_{ij}[y]\cdot (1-p_{ij}[y])\right]\cdot\frac{\partial}{\partial\theta}\boldsymbol{L}_{ij}[y], \nonumber \end{eqnarray} where $\boldsymbol{L}_{ij}[y]:= \operatorname{logit} p_{ij}[y]$. This is in complete analogy to the binary case, except for the very final cancellation which does not occur. If $Y_{ij}$ is additionally assumed to follow a concrete distributional form (say Poisson or Skellam), the expression may be further simplified. In the subsequent sections, however, we will continue with the binary case only, due to the relatively straightforward analogy through the above. In either case, we note the similarity with back-propagation in neural networks, where the derivatives $\frac{\partial}{\partial \theta}\boldsymbol{L}_{ij}[y]$ correspond to a ``previous layer''. Though we would like to note that differently from the standard multilayer perceptron, additional structural constraints on this layer are encoded through the structural assumptions in the structured log-odds model. Exploring the benefit of such constraints in general neural network layers is beyond the scope of this manuscript, but a possibly interesting avenue to explore. \subsubsection{Batch training of structured log-odds models\label{sub:Training-structured-log-odds.batch} \label{sub:Two-stage-training-method}} We now consider the case where a batch of multiple training outcomes $\mathcal{D} = \left\{\left(X_{i_1j_1}^{(1)},Y_{i_1j_1}^{(1)}\right),\dots,\left(X_{i_1j_1}^{(1)},Y_{i_Nj_N}^{(N)}\right)\right\}$ have been are observed, and we would like to train the model parameters the log-likelihood, compare the discussion in Section~\ref{sub:The-probabilistic-interpretation}. In this case, the batch log-likelihood of the parameters $\theta$ and its derivative take the form \begin{eqnarray} \ell (\theta|\mathcal{D}) &= &\sum_{k=1}^N \ell \left(\theta \middle|\left(X_{i_kj_k}^{(k)},Y_{i_kj_k}^{(k)}\right)\right)\\\nonumber &= &\sum_{k=1}^N \left[ Y_{ij}^{(k)}\log \left(p_{ij}^{(k)}\right) + \left(1-Y_{ij}^{(k)}\right)\log \left(1-p_{ij}^{(k)}\right)\right]\\\nonumber \frac{\partial}{\partial\theta}\ell (\theta|\mathcal{D}) &= &\sum_{k=1}^N \left[Y_{i_kj_k}^{(k)}-p_{i_kj_k}^{(k)}\right]\cdot\frac{\partial}{\partial\theta}\boldsymbol{L}_{i_kj_k}^{(k)}\label{eqn:batch_update} \end{eqnarray} Note that in general, both $p_{ij}^{(k)}$ and $\boldsymbol{L}_{ij}^{(k)}$ will depend on the respective features $X_{i_kj_k}^{(k)}$ and the parameters $\theta$, which is not made explicit for notational convenience. The term $\left[Y_{i_kj_k}^{(k)}-p_{i_kj_k}^{(k)}\right]$ may again be interpreted as a sample of prediction residuals, similar to the one-sample case. By the maximum likelihood method, the maximizer $\widehat{\theta} := \operatornamewithlimits{argmax}_{\theta} \; \ell (\theta|\mathcal{D})$ is an estimate for the generative $\theta$. In general, unfortunately, an analytic solution will not exist; nor will the optimization be convex, not even for the Bradley-Terry-\'{E}l\H{o} model. Hence, gradient ascent and/or non-linear optimization techniques need to be employed. An interesting property of the batch optimization is that a-priori setting a ``K-factor'' is not necessary. While it may re-enter as the learning rate in a gradient ascent strategy, such parameters may be tuned in re-sampling schemes such as k-fold cross-validation. It also removes the need for a heuristic that determines new players' ratings (or more generally: factors), as the batch training procedure may simply be repeated with such players' outcomes included. \subsubsection{On-line training of structured log-odds models}\label{sub:Training-structured-log-odds.online} In practice, the training data accumulate through time, so we need to re-train the model periodically in order to capture new information. That is, we would like to address the situation where training data $X_{ij}(t),Y_{ij}(t)$ are observed at subsequent different time points. The above-mentioned vicinity of structured log-odds models to neural networks and standard stochastic gradient descent strategies directly yields a family of possible batch/epoch on-line strategies for structured log-odds models. To be more mathematically precise (and noting that the meaning of batch and epoch is not consistent across literature): Let $\mathcal{D}=\left\{\left(X^{(1)}_{i_1j_1}(t_1),Y^{(1)}_{i_1j_1}(t_1)\right),\dots, \left(X^{(N)}_{i_Nj_N}(t_N),Y^{(N)}_{i_Nj_N}(t_N)\right)\right\}$ be the observed training data points, at the (not necessarily distinct) time points $\mathcal{T} = \{t_1,\dots, t_N\}$ (hence $\mathcal{T}$ can be a multi-set). We will divide the time points into blocks $\mathcal{T}_0,\dots, \mathcal{T}_B$ in a sequential way, i.e., such that $\cup_{i=0}^B \mathcal{T}_i = \mathcal{T}$, and for any two distinct $k,\ell$, either $x<y$ for all $x\in\mathcal{T}_k,y\in\mathcal{T}_\ell$, or $x>y$ for all $x\in\mathcal{T}_k,y\in\mathcal{T}_\ell$. These time blocks give rise to the training data \emph{batches} $\mathcal{D}_i:=\{(x,y)\in \mathcal{D}\;:\; (x,y)\;\mbox{is observed at a time}\; t\in\mathcal{T}_i\}$. The cardinality of $\mathcal{D}_i$ is called the \emph{batch size} of the $i$-th batch. We single out the $0$-th batch as the ``initial batch''. The stochastic gradient descent update will be carried out, for the $i$-th batch, $\tau_i$ times. The $i$-th \emph{epoch} is the collection of all such updates using batch $\mathcal{D}_i$, and $\tau_i$ is called the \emph{epoch size} (of epoch $i$). Usually, all batches except the initial batch will have equal batch sizes and epoch sizes. The general algorithm for the parameter update is summarized as stylized pseudo-code as Algorithm~\ref{alg:batch_epoch_training}. \begin{algorithm} \begin{algorithmic}[0] \Require{learning rate $\gamma$} \State Randomly initialize parameters $\theta$ \For{$i = 0: B$} \State Read $\mathcal{D}_i$ \For{$j = 1: \tau_i$} \State Compute $\Delta:= \frac{\partial}{\partial\theta}\ell (\theta|\mathcal{D}_i)$ as in Equation~\ref{eqn:batch_update} \State $\theta \leftarrow \theta - \gamma\cdot \Delta$ \EndFor \State Write/output $\theta$, e.g., for prediction or forecasting \EndFor \end{algorithmic} \caption{Batch/epoch type on-line training for structured log-odds models\label{alg:batch_epoch_training}} \end{algorithm} Of course, any more sophisticated variant of stochastic gradient descent/ascent may be used here as well, though we did not explore such possibilities in our empirical experiments and leave this for interesting future investigations. Important such variants include re-initialization strategies, selecting the epoch size $\tau_i$ data-dependently by convergence criteria, or employing smarter gradient updates, such as with data-dependent learning rates. Note that the update rule applies for any structured log-odds model as long as $\frac{\partial}{\partial\theta}\ell (\theta|\mathcal{D}_i)$ is easily obtainable, which should be the case for any reasonable parametric form and constraints. Note that the online update rule may also be used to update, over time, structural model parameters such as home advantage and feature coefficients. Of course, some parameters may also be regarded as classical hyper-parameters and tuned via grid or random search on a validation set. There are multiple trade-offs involved in choosing the batches and epochs: \begin{enumerate} \item[(i)] Using more, possibly older outcomes vs emphasizing more recent outcomes. Choosing a larger epoch size will yield a parameter closer to the maximizer of the likelihood given the most recent batch(es). It is widely hypothesized that the team's performance changes gradually over time. If the factors change quickly, then more recent outcomes should be emphasized via larger epoch size. If they do not, then using more historical data via smaller epoch sizes is a better idea. \item[(ii)] Expending less computation for a smooth update vs expending more computation for a more accurate update. Choosing a smaller learning rate will avoid ``overshooting'' local maximizers of the likelihood, or oscillations, though it will make a larger epoch size necessary for convergence. \end{enumerate} We single out multiple variants of the above to investigate the above trade-off and empirical merits of different on-line training strategies: \begin{enumerate} \item[(i)] {\bf Single-batch max-likelihood}, where there is only the initial batch ($B=0$), and a very large number of epochs (until convergence of the log-likelihood). This strategy, in essence, disregards any temporal structure and is equivalent to the classical maximum likelihood approach under the given model assumptions. It is the ``no time structure'' baseline, i.e., it should be improved upon for the claim that there is temporal structure. \item[(ii)] {\bf Repeated re-training} is using re-training in regular intervals using the single-batch max-likelihood strategy. Strictly speaking not a special case of Algorithm~\ref{alg:batch_epoch_training}, this is a less sophisticated and possibly much more computationally expensive baseline. \item[(iii)] {\bf On-line learning} is Algorithm~\ref{alg:batch_epoch_training} with all batch and epoch sizes equal, parameters tuned on a validation set. This is a ``standard'' on-line learning strategy. \item[(iv)] {\bf Two-stage training}, where the initial batch and epoch size is large, and all other batch and epoch sizes are equal, parameters tuned on a validation set. This is single-batch max-likelihood on a larger corpus of not completely recent historical data, with on-line updates starting only in the recent past. The idea is to get an accurate initial guess via the larger batch which is then continuously updated with smaller changes. \end{enumerate} In this manuscript, the most recent model will only be used to predict the labels/outcomes in the most recent batch. \subsection{Rank regularized log-odds matrix estimation\label{sub:Regularized-log-odds-matrix}} All the structured log-odds models we discussed so far made explicit assumption about the structure of the log-odds matrix. An alternative way is to encourage the log-odds matrix to be more structured by imposing an implicit penalty on its complexity. In this way, there is no need to specify the structure explicitly. The trade-off between the log-odds matrix's complexity and its ability to explain observed data is tuned by validation on evaluation data set. The discussion will be based on the binary outcome model from Section~\ref{sub:The-structured-log-odds}. Without any further assumption about the structure of $\boldsymbol{L}$ or $\boldsymbol{P}$, the maximum likelihood estimate for each $p_{ij}$ is given by \[ \hat{p}_{ij}:=\frac{W_{ij}}{N_{ij}} \] where $W_{ij}$ is the number of matches in which team $i$ beats team $j$, and $N_{ij}$ is the total number of matches between team $i$ and team $j$. As we have assumed observations of wins/losses to be independent, this immediately yields $\hat{\boldsymbol{P}} := \boldsymbol{W}/\boldsymbol{N},$ as the maximum likelihood estimate for $\boldsymbol{P}$, where $\hat{\boldsymbol{P}}, \boldsymbol{W},\boldsymbol{N},$ are the matrices with $\hat{p}_{ij},{W_{ij}},{N_{ij}}$ as entries and division is entry-wise. Using the invariance of the maximum likelihood estimate under the bijective transformation $\boldsymbol{L}_{ij} = \operatorname{logit} (p_{ij})$, one obtains the maximum likelihood estimate for $\boldsymbol{L}_{ij}$ as \[ \hat{\boldsymbol{L}}_{ij}=\log\left(\frac{\hat{p}_{ij}}{1-\hat{p}_{ij}}\right)= \log W_{ij} - \log W_{ji}, \] or, more concisely, $\hat{\boldsymbol{L}} = \log \boldsymbol{W} - \log \boldsymbol{W}^\top$, where the $\log$ is entry-wise. We will call the matrix $\hat{\boldsymbol{L}}$ the empirical log-odds matrix. It is worth noticing that the empirical log-odds matrix gives the best explanation in a maximum-likelihood sense, \emph{in the absence of any further structural restrictions}. Hence, any log-odds matrix additionally restricted by structural assumptions will achieve a lower likelihood on the observed data. However, in practice the empirical log-odds matrix often has very poor predictive performance because the estimate tends to have very large variance whose asymptotic is governed by the number of times that entry is observed (which is practice is usually very small or even zero). This variance may be reduced by regularising the complexity of the estimated log-odds matrix. Common complexity measures of a matrix are its matrix norms \citet{srebro2005rank}. A natural choice is the nuclear norm or trace norm, which is a continuous surrogate for rank and has found a wide range of machine-learning applications including matrix completion \citep{candes2009exact,srebro2004maximum,pong2010trace}. Recall, the trace norm of an $(n\times n)$ matrix $A$ is defined as \[ \|A\|_{*}=\sum_{k=1}^{n}\sigma_{k} \] where $\sigma_{k}$ is the $k^{th}$ singular value of the matrix $A$. The close relation to the rank of $A$ stems from the fact that the rank is the number of non-zero singular values. When used in optimization, the trace norm behaves similar to the one-norm in LASSO type models, yielding convex loss functions and forcing some singular values to be zero. This principle can be used to obtain the following optimization program for regularized log-odds matrix estimation: \begin{align*} \min_{\boldsymbol{L}} &\;\; \| \hat{\boldsymbol{L}} - \boldsymbol{L} \|_{F}^{2} + \lambda\|\boldsymbol{L}\|_{*} \\ \mbox{s.t.}&\quad \boldsymbol{L}+\boldsymbol{L}^{\top}=0 \end{align*} The first term is a Frobenius norm ``error term'', equivalent to a squared loss $$\|\hat{\boldsymbol{L}}-\boldsymbol{L}\|_{F}^{2} = \sum_{i,j}(L_{ij}-\hat{L}_{ij})^{2},$$ instead of the log-likelihood function in order to ensure convexity of the objective function. There is a well-known bound on the trace of a matrix~\citep{srebro2004learning}: For any $X\in\mathbb{R}^{n\times m}$, and $t\in\mathbb{R}$, $||X||_{*}\leq t$ if and only if there exists $A\in\mathbb{S}^{n}$ and $B\in\mathbb{S}^{m}$ such that $\left[\begin{array}{cc} A & X\\ X^{\top} & B \end{array}\right]\succeq0$ and $\frac{1}{2}\left(\operatorname{Tr}(A)+\operatorname{Tr}(B)\right)<t$. Using this bound, we can introduce two auxiliary matrices $A$ and $B$ and solve an equivalent problem: \begin{align*} \min_{A,B,\boldsymbol{L}} &\;\; \|\hat{\boldsymbol{L}}-\boldsymbol{L}\|_{F}^{2}+\frac{\lambda}{2}\left(\operatorname{Tr}(A)+\operatorname{Tr}(B)\right) \\ \mbox{s.t.}&\quad \left[\begin{array}{cc} A & \boldsymbol{L}\\ \boldsymbol{L}^{\top} & B \end{array}\right]\succeq0 \\ \mbox{and}&\quad \boldsymbol{L}+\boldsymbol{L}^{\top}=0 \end{align*} This is a Quadratic Program with a positive semi-definite constraint and a linear equality constraint. It can be efficiently solved by the interior point method~\citet{vandenberghe1996semidefinite}, and alternative algorithms for large scale settings also exist~\citep{mishra2013low}. The estimation procedure can be generalized to model ternary match outcomes. Without any structural assumption, the maximum likelihood estimate for $p_{ij}[k]:=\text{P}(Y_{ij}=\text{k})$ is given by \[ \hat{p}_{ij}[k]\coloneqq\frac{W_{ij}[k]}{N_{ij}} \] where $Y_{ij}$ is the ternary match outcome between team $i$ and team $j$, and $k$ takes values in a discrete set of ordered levels. $W_{ij}[k]$ is the number of matches between $i$ and $j$ in which the outcome is $k$. $N_{ij}$ is the total number of matches between the two teams as before. We now define $$ \boldsymbol{L}_{ij}^{(1)}\coloneqq\log\left(\frac{p_{ij}[\mbox{win}]}{p_{ij}[\mbox{draw}] + p_{ij}[\mbox{lose}]}\right)\;\mbox{and}\; \boldsymbol{L}_{ij}^{(2)}\coloneqq\log\left(\frac{p_{ij}[\mbox{win}]+ p_{ij}[\mbox{draw}] }{p_{ij}[\mbox{lose}]}\right) $$ The maximum likelihood estimate for $\boldsymbol{L}_{ij}^{(1)}$ and $\boldsymbol{L}_{ij}^{(2)}$ can be obtained by replacing $p_{ij}[k]$ with the corresponding $\hat{p}_{ij}[k]$ in $\boldsymbol{L}_{ij}^{(1)}$, yielding maximum likelihood estimates $\hat{L}_{ij}^{(1)}$ and $\hat{L}_{ij}^{(2)}$. As in Section~\ref{sub:Modeling-tenary-outcomes}, we make an implicit assumption of proportional odds for which we will regularize, namely that $\boldsymbol{L}_{ij}^{(2)}=\boldsymbol{L}_{ij}^{(1)}+\phi$. For this, we obtain a new convex objective function \[ \min_{\boldsymbol{L},\phi} \|\hat{\boldsymbol{L}}^{(1)}-\boldsymbol{L}\|_{F}^{2}+\|\hat{\boldsymbol{L}}^{(2)}-\boldsymbol{L}-\phi\cdot \ensuremath{\mathbbm{1}}\cdot \ensuremath{\mathbbm{1}}^\top||_{F}^{2}+\lambda \|\boldsymbol{L}\|_{*}. \] The optimal value of $\boldsymbol{L}$ is a regularized estimate of $\boldsymbol{L}_{ij}^{(1)}$, and $\boldsymbol{L} + \phi\cdot \ensuremath{\mathbbm{1}}\cdot \ensuremath{\mathbbm{1}}^\top$ is a regularized estimate of $\boldsymbol{L}_{ij}^{(2)}$. The regularized log-odds matrix estimation method is quite experimental as we have not established a mathematical proof for the error bound. Further research is also needed to find an on-line update formula for this method. We leave these as open questions for future investigations. \section{Experiments\label{sec:Experiments}} We perform two sets of experiments to validate the practical usefulness of the novel structured log-odds models, including the Bradley-Terry-\'{E}l\H{o} model. More precisely, we validate \begin{enumerate} \item[(i)] in the synthetic experiments in Section~\ref{sub:Synthetic-data} that the (feature-free) higher-rank models in Section~\ref{sec:specialcases} outperform the standard Bradley-Terry-\'{E}l\H{o} model if the generative process is higher-rank. \item[(ii)] in real world experiments on historical English Premier League pairings, in Section~\ref{sub:Real-data-set}, structured log-odds models that use features as proposed in Section~\ref{sub:covariate}, and the two-stage training method as proposed in Section~\ref{sub:Training-structured-log-odds} outperform methods that do not. \end{enumerate} In either setting, the methods outperform naive baselines, and their performance is similar to predictions derived from betting odds. \subsection{Synthetic experiments\label{sub:Synthetic-data}} In this section, we present the experiment results over synthetic data sets. The goal of these experiments is to show that the newly proposed structured log-odds models perform better than the original \'{E}l\H{o} model when the data were generated following the new models' assumptions. The experiments also show the validity of the parameter estimation procedure. The synthetic data are generated according to the assumptions of the structured log-odds models (\ref{eq:model_summary}). To recap, the data generation procedure is the following. \begin{enumerate} \item The binary match outcome $y_{ij}$ is sampled from a Bernoulli distribution with success probability $p_{ij}$, \item The corresponding log-odds matrix $L$ has a certain structure, \item The match outcomes are sampled independently (there is no temporal effect)\label{enu:The-match-outcomes} \end{enumerate} As the first step in the procedure, we randomly generate a ground truth log-odds matrix with a certain structure. The structure depends on the model in question and the matrix generation procedure is different for different experiments. The match outcomes $y_{ij}$'s are sampled independently from the corresponding Bernoulli random variables with success probabilities $p_{ij}$ derived from the true log-odds matrix. For a given ground truth matrix, we generate a validation set and an independent test set in order to tune the hyper-parameter. The hyper-parameters are the \textit{K factor} for the structured log-odds models, and the \textit{regularizing strength $\lambda$} for regularized log-odds matrix estimation. We perform a grid search to tune the hyper-parameter. We choose the hyper-parameter to be the one that achieves the best log-likelihood on the validation set. The model with the selected hyper-parameter is then evaluated on the test set. This validation setting is sound because of the independence assumption (\ref{enu:The-match-outcomes}). The tuned model gives a probabilistic prediction for each match in the test set. Based on these predictions, we can calculate the mean log-likelihood or the mean accuracy on the test set. If two models are evaluated on the same test set, the evaluation metrics for the two models form a paired sample. This is because the metrics depend on the specific test set. In each experiment, we replicate the above procedure for many times. In each replication, a new ground truth log-odds matrix is generated, and the models are tuned and evaluated. Each replication hence produces a paired sample of evaluation metrics because the metrics for different models are conditional independent in the same replication. We would like to know which model performs better given the data generation procedure. This question can be answered by performing hypothesis testing on paired evaluation metrics produced by the replications. We will use the paired Wilcoxon test because of the violation of normality assumption. The experiments do not aim at comparing different training methods (\ref{sub:Training-structured-log-odds}). Hence, all models in an experiment are trained using the same method to enable an apple-to-apple comparison. In experiments \ref{sub:fac2_exp} and \ref{sub:Rank-four-exp}, the structured log-odds models and the Bradley-Terry-\'{E}l\H{o} model are trained by the online update algorithm. Experiment (\ref{sub:Regularized-log-odds-matrix-exp}) concerns about the regularized log-odds matrix estimation, whose online update algorithm is yet to be derived. Therefore, all models in section \ref{sub:Regularized-log-odds-matrix-exp} are trained using batch training method. The experiments all involve 47 teams \footnote{Forty-seven teams played in the English Premier league between 1993 and 2015}. Both validation and test set include four matches between each pair of teams. \subsubsection{Two-factor Bradley-Terry-\'{E}l\H{o} model\label{sub:fac2_exp}} This experiment is designed to show that the two-factor model is superior to the Bradley-Terry-\'{E}l\H{o} model if the true log-odds matrix is a general rank-two matrix. Components in the two factors $u$ and $v$ are independently generated from a Gaussian distribution with $\mu=1$ and $\sigma=0.7$. The true log-odds matrix is calculated as in equation \ref{eq:fac2_log_odds} using the generated factors. The rest of the procedure is carried out as described in section \ref{sub:Synthetic-data}. This procedure is repeated for two hundred times. The two hundred samples of paired mean accuracy and paired mean log-likelihood are visualized in figure \ref{fig:Acc_fac2} and \ref{fig:log_lik_fac2}. Each point represents an independent paired sample. Our hypothesis is that if the true log-odds matrix is a general rank-two matrix, the two-factor \'{E}l\H{o} model is likely to perform better than the original \'{E}l\H{o} model. We perform Wilcoxon test on the paired samples obtained in the experiments. The two-factor \'{E}l\H{o} model produces significantly better results in both metrics (one-sided p-value is 0.046 for accuracy and less than $2^{-16}$ for mean log-likelihood). \begin{figure}[H] \begin{centering} \includegraphics[scale=0.4]{figures/fac_2_elo_accuracy} \par\end{centering} \caption{Each dot represents the testing accuracy in an experiment. The X-asis shows the accuracy achieved by the \'{E}l\H{o} model while the Y-axis shows the accuracy achieved by the two-factor \'{E}l\H{o}.\label{fig:Acc_fac2}} \end{figure} \begin{figure}[H] \begin{centering} \includegraphics[scale=0.4]{figures/fac2_elo_mean_log_lik} \par\end{centering} \caption{Each dot represents the mean log-likelihood on testing data in an experiment. The X-asis shows the mean log-likelihood achieved by the \'{E}l\H{o} model while the Y-axis shows the mean log-likelihood achieved by the two-factor \'{E}l\H{o}.\label{fig:log_lik_fac2}} \end{figure} \subsubsection{Rank-four Bradley-Terry-\'{E}l\H{o} model\label{sub:Rank-four-exp}} These two experiments are designed to compare the rank-four \'{E}l\H{o} model to the two-factor \'{E}l\H{o} model when the true log-odds matrix is a rank-four matrix. The first experiment considers the scenario when all singular values of the true log-odds matrix are big. In this case, the best rank-two approximation to the true log-odds matrix will give a relatively large error because the third and fourth singular components cannot be recovered. The log-odds matrices considered in this experiment takes the following form \begin{equation} L=s_{1}\cdot u\cdot v^{\top}+s_{2}\cdot\theta\cdot\underline{1}^{\top}-s_{1}\cdot v\cdot u^{\top}-s_{2}\cdot\underline{1}\cdot\theta^{\top}\label{eq:rank4_exp} \end{equation} , where $s_{1}$ and $s_{2}$ are the two distinct singular values and $\underline{1}$ is parallel to the vector of ones, and vector $\underline{1}$ , $u$, $v$ and $\theta$ are orthonormal. This formulation is based on the decomposition of a real antisymmetric matrix stated in section \ref{sub:Motivation-and-definition}. The true log-odds matrix $L$ has four non-zero singular values $s_{1}$, $-s_{1}$, $s_{2}$ and $-s_{2}$. In the experiment, $s_{1}=25$ and $s_{2}=24$. The rest of the data generation and validation setting is the same as the experiments in section \ref{fig:log_lik_fac2}. The procedure is repeated for 100 times. We applied the paired Wilcoxon test to the 100 paired evaluation results. The test results support the hypothesis that the rank-four \'{E}l\H{o} model performs significantly better in both metrics (one-sided p-value is less than $2^{-16}$ for both accuracy and mean log-likelihood). In the second experiment, the components in factors $u$, $v$ and $\theta$ are independently generated from a Gaussian distribution with $\mu=1$ and $\sigma=0.7$. The log-odds matrix is then calculated using equation \ref{eq:rank4_log_odds} directly. The factors are no longer orthogonal and the second pair of singular values are often much smaller than the first pair. In this case, the best rank-two approximation will be close to the true log-odds matrix. The procedure is repeated for 100 times again using the same data generation and validation setting. Paired Wilcoxon test shows rank-four \'{E}l\H{o} model achieves significantly higher accuracy on the test data (one-sided p-value is 0.015), but the mean log-likelihood is not significantly different (p-value is 0.81). The results of the above two experiments suggest that the rank-four \'{E}l\H{o} model will have significantly better performance when the true log-odds matrix has rank four and it cannot be approximated well by a rank-two matrix. \subsubsection{Regularized log-odds matrix estimation\label{sub:Regularized-log-odds-matrix-exp}} In the following two experiments, we want to compare the regularized log-odds matrix estimation method with various structured log-odds models. To carry out regularized log-odds matrix estimation, we need to first get an empirical estimate of log-odds on the training set. Since there are only four matches between any pair of teams in the training data, the estimate of log-odds often turn out to be infinity due to division by zero. Therefore, I introduced a small regularization term in the estimation of empirical winning probability $\hat{p}=\frac{n_{win}+\epsilon}{n_{total}+2\epsilon}$, where $\epsilon$ is set to be 0.01. Then, we obtain the smoothed log-odds matrix by solving the optimization problem described in section \ref{sub:Regularized-log-odds-matrix}. A sequence of $\lambda$'s are fitted, and the best one is chosen according to the log-likelihood on the evaluation set. The selected model is then evaluated on the testing data set. Structured log-odds models with different structural assumptions are used for comparison. We consider the \'{E}l\H{o} model, two-factor \'{E}l\H{o} model, and rank-four \'{E}l\H{o} model. For each of the three models, we first tune the hyper-parameter on a further split of training data. Then, we evaluate the models with the best hyper-parameter on the evaluation set and select the best model. Finally, we test the selected model on the test set to produce evaluation metrics. This experiment setting imitates the real application where we need to select the model with best structural assumption. In order to compare fairly with the trace norm regularization method (which is currently a batch method), the structured log-odds models are trained with batch method and the selected model is not updated during testing. In the first experiment, it is assumed that the structure of log-odds matrix follows the assumption of the rank-four \'{E}l\H{o} model. The log-odds matrix is generated using equation (\ref{eq:rank4_exp}) with $s_{1}=25$ and $s_{2}=2.5$. The data generation and hypothesis testing procedure remains the same as previous experiments. Paired Wilcoxon test is performed to examine the hypothesis that regularized log-odds model produces higher out-of-sample log-likelihood. The testing result is in favour of this hypothesis (p-value is less than $10^{-10}$). In the second experiment, it is assumed that the structure of log-odds matrix follows the assumption of the \'{E}l\H{o} model (section \ref{sub:The-Elo-model}). The true \'{E}l\H{o} ratings are generated using a normal distribution with mean $0$ and standard deviation $0.8$. Paired Wilcoxon test shows that the out-of-sample likelihood is somewhat different between the tuned regularized log-odds model and trace norm regularization (two sided p-value = $0.09$). The experiments show that regularized log-odds estimation can adapt to different structures of the log-odds matrix by varying the regularization parameter. The performance on simulated data set is not worse than the tuned regularized log-odds model. \subsection{Predictions on the English Premier League\label{sub:Real-data-set}} \subsubsection{Description of the data set} The whole data set under investigation consists of English Premier League football matches from 1993-94 to 2014-15 season. There are 8524 matches in total. The data set contains the date of the match, the home team, the away team, and the final scores for both teams. The English Premier League is chosen as a representative as competitive team sports because of its high popularity. In each season, twenty teams will compete against each other using the double round-robin system: each team plays the others twice, once at the home field and once as guest team. The winner of each match scores three championship points. If the match draws, both teams score one point. The final ranking of the teams are determined by the championship points scored in the season. The team with the highest rank will be the champion and the three teams with the lowest rank will move to Division One (a lower-division football league) next season. Similarly, three best performing teams will be promoted from Division One into the Premier League each year. In the data set, 47 teams has played in the Premier League. The data set is retrieved from http://www.football-data.co.uk/. The algorithms are allowed to use all available information prior to the match to predict the outcome of the match (win, lose, draw). \subsubsection{Validation setting\label{sub:Tunning-and-validation}} In the study of the real data set, we need a proper way to quantify the predictive performance of a model. This is important for two reasons. Firstly, we need to tune the hyper-parameters in the model by performing model validation. The hyper-parameters that bring best performance will be chosen. More importantly, we wish to compare the performance of different types of models scientifically. Such comparison is impossible without a quantitative measure on model performance. It is a well-known fact that the errors made on the training data will underestimate the model's true generalization error. The common approaches to assess the goodness of a model include cross validation and bootstrapping \citep{stone1974cross,efron1997improvements}. However, both methods assume that the data records are statistically independent. In particular, the records should not contain temporal structure. In the literature, the validation for data with temporal structure is largely an unexplored area. However, the independence assumption is plausibly violated in this study and it is highly likely to affect the result. Hence, we designed an set of ad-hoc validation methods tailored for the current application. The validation method takes two disjoint data sets, the training data and the testing data. We concatenate the training and testing data into a single data set and partition it into batches $\mathcal{D}$ following the definitions given in \ref{sub:Training-structured-log-odds.online}. We then run Algorithm \ref{alg:batch_epoch_training} on $\mathcal{D}$, but only collect the predictions of matches in the testing data. Those predictions are then compared with the real outcomes in the testing data and various evaluation metrics can be computed. The exact way to obtain batches $\mathcal{D}$ will depend on the training method we are using. In the experiments, we are mostly interested in the repeated batch re-training method (henceforth batch training method), the on-line training method and the two-stage training method. For these three methods, the batches are defined as follows. \begin{enumerate} \item Batch training method: the whole training data forms the initial batch $\mathcal{D}_{0}$; the testing data is partitioned into similar-sized batches based on time of the match. \item On-line training method: all matches are partitioned into similar-sized batches based on time of the match. \item Two-stage method: the same as batch training method with a different batch size on testing data. \end{enumerate} In general, a good validation setting should resemble the usage of the model in practice. Our validation setting guarantees that no future information will be used in making current predictions. It is also naturally related to the training algorithm presented in \ref{sub:Training-structured-log-odds.online}. \subsubsection{Prediction Strategy\label{sub:Prediction-Strategy}} Most models in this comparative study have tunable hyper-parameters. Those hyper-parameters are tuned using the above validation settings. We split the whole data set into three disjoint subsets, the training set, the tuning set and the testing set. The first match in the training set is the one between Arsenal and Coventry on 1993-08-04, and the first match in the tunning set is the one between Aston Villa and Blackburn on 2005-01-01. The first match in the testing data is the match between Stoke and Fulham on 2010-01-05, and the last match in the testing set is between Stoke and Liverpool on 2015-05-24. The testing set has 2048 matches in total. In the tuning step, we supply the training set and the tuning set to the validation procedure as the training and testing data. To find the best hyper-parameter, we perform a gird search and the hyper-parameter which achieves the highest out-of-sample likelihood is chosen. In theory, the batch size and epoch size are tunable hyper-parameters, but in the experiments we choose these parameters based on our prior knowledge. For the on-line and two-stage method, each individual match in testing data is regarded as a batch. The epoch size is chosen to be one. This reflects the usual update rule of the conventional \'{E}l\H{o} ratings: the ratings are updated immediately after the match outcome becomes available. For the batch training method, matches take place in the same quarter of the year are allocated to the same batch. The model with the selected hyper-parameters is tested using the same validation settings. The training data now consists of both training set and tuning set. The testing data is supplied with the testing set. This prediction strategy ensures that the training-evaluating-testing split is the same for all training methods, which means that the model will be accessible to the same data set regardless of what training method is being used. This ensures that we can compare different training methods fairly. All the models will also be compared with a set of benchmarks. The first benchmark is a naive baseline which always predicts home team to win the match. The second benchmark is constructed from the betting odds given by bookmakers. For each match, the bookmakers provide three odds for the three outcomes, win, draw and lose. The betting odds and the probability has the following relationship: $\text{P}=\frac{1}{\text{odds}}$. The probabilities implied by betting odds are used as prediction. However, the bookmaker's odds will include a vigorish so the implied ``probability'' does not sum to one. They are normalized by dividing each term with the sum to give the valid probability. The historical odds are also obtained from http://www.football-data.co.uk/. \subsubsection{Quantitative comparison for the evaluation metrics} We use log-likelihood and accuracy on the testing data set as evaluation metrics. We apply statistical hypothesis testing on the validation results to compare the models quantitatively. We calculate the log-likelihood on each test case for each model. If we are comparing two models, the evaluation metrics for each test case will form a paired sample. This is because test cases might be correlated with each other and model's performance is independent given the test case. The paired t-test is used to test whether there is a significant difference in the mean of log-likelihood. We draw independent bootstrap samples with replacement from the log-likelihood values on test cases, and calculate the mean for each sample. We then calculate the 95\% confidence interval for the mean log-likelihood based on the empirical quantiles of bootstrapped means \citep{davison1997bootstrap}. Five thousand bootstrap samples are used to calculate these intervals. The confidence interval for accuracy is constructed assuming the model's prediction for each test case, independently, has a probability $p$ to be correct. The reported 95\% confidence interval for Binomial random variable is calculated from a procedure first given in \citet{clopper1934use}. The procedure guarantees that the confidence level is at least 95\%, but it may not produce the shortest-length interval. \subsubsection{Performance of the structured log-odds model\label{sub:Performance-elo}} We performed the tunning and validation of the structured log-odds models using the method described in section \ref{sub:Tunning-and-validation}. The following list shows all models examined by this experiment: \begin{enumerate} \item The Bradley-Terry-\'{E}l\H{o} model (section \ref{sub:The-Elo-model}) \item Two-factor Bradley-Terry-\'{E}l\H{o} model (section \ref{sub:The-structured-log-odds}) \item Rank-four Bradley-Terry-\'{E}l\H{o} model (section \ref{sub:The-structured-log-odds}) \item The Bradley-Terry-\'{E}l\H{o} model with score difference (section \ref{sub:Using-score-difference}) \item The Bradley-Terry-\'{E}l\H{o} model with two additional features (section \ref{sub:covariate}) \end{enumerate} All models include a free parameter for home advantage (see section \ref{sub:covariate}), and they are also able to capture the probability of a draw (section \ref{sub:The-structured-log-odds}). We have introduced two covariates in the fifth model. These two covariates indicate whether the home team or away team is just promoted from Division One this season. We have also tested the trace norm regularized log-odds model, but as indicated in section \ref{sub:Regularized-log-odds-matrix} the model still has many limitations for the application to the real data. The validation results are summarized in table \ref{tab:elo-batch} and table \ref{tab:elo-batch-1}. The testing results help us understand the following two scientific questions: \begin{enumerate} \item Which training method brings the best performance to structured log-odds models? \item Which type of structured log-odds model achieves best performance on the data set? \end{enumerate} In order to answer the first question, we test the following hypothesis: \begin{description} \item [{(H1):}] Null hypothesis: for a certain model, two-stage training method and online training method produce the same mean out-of-sample log-likelihood. Alternative hypothesis: for a certain model two-stage training method produces a higher mean out-of-sample log-likelihood than online training method. \end{description} Here we compare the traditional on-line updating rule with the newly developed two-stage method. The paired t-test is used to assess the above hypotheses. The p-values are shown in table \ref{tab:test1}. The cell associated with the \'{E}l\H{o} model with covariates are empty because the online training method does not update the coefficients for features. The first columns of the table gives strong evidence that the two-stage training method should be preferred over online training. All tests are highly significant even if we take into account the issue of multiple testing. In order to answer the second question, we compare the four new models with the Bradley-Terry-\'{E}l\H{o} model. The hypothesis is formulated as \begin{description} \item [{(H2):}] Null hypothesis: using the best training method, the new model and the \'{E}l\H{o} model produce the same mean out-of-sample log-likelihood. Alternative hypothesis: using the best training method, the new model produces a higher mean out-of-sample log-likelihood than the \'{E}l\H{o} model. \end{description} The p-values are listed in the last column of table \ref{tab:test1}. The result also shows that adding more factors in the model does not significantly improve the performance. Neither two-factor model nor rank-four model outperforms the original Bradley-Terry-\'{E}l\H{o} model on the testing data set. This might provide evidence and justification of using the Bradley-Terry-\'{E}l\H{o} model on real data set. The model that uses the score difference performs slightly better than the original Bradley-Terry-\'{E}l\H{o} model. However, the difference in out-of-sample log-likelihood is not statistically significant (the p-value for one-sided test is 0.24 for likelihood). Adding additional covariates about team promotion significantly improves the Bradley-Terry-\'{E}l\H{o} model. \begin{table}[H] \begin{centering} \begin{tabular}{|c|c|c|} \hline Type & H1 & H2\tabularnewline \hline \hline \'{E}l\H{o} model & $7.8\times10^{-5}$ & -\tabularnewline \hline Two-factor model & $4.4\times10^{-14}$ & \textasciitilde{}1\tabularnewline \hline Rank-four model & $9.8\times10^{-9}$ & \textasciitilde{}1\tabularnewline \hline Score difference & $2.2\times10^{-16}$ & 0.235\tabularnewline \hline \'{E}l\H{o} model with covariates & - & 0.002\tabularnewline \cline{2-3} \end{tabular} \par\end{centering} \caption{Hypothesis testing on the structured log-odds model. The column ``Type'' specifies the type of the model; the remaining two columns shows the one-sided p-values for the associated hypothesis\label{tab:test1}} \end{table} \begin{table}[H] \begin{centering} \begin{tabular}{|c|c|c|c|c|} \hline Type & Method & Acc & 2.5\% & 97.5\%\tabularnewline \hline \hline \multirow{2}{*}{Benchmark } & Home team win & 46.07\% & 43.93\% & 48.21\% \tabularnewline \cline{2-5} & Bet365 odds & \textbf{54.13\% } & 51.96\% & 56.28\% \tabularnewline \hline \multirow{3}{*}{\'{E}l\H{o} model} & Two-stage & 52.40\% & 50.23\% & 54.56\% \tabularnewline \cline{2-5} & Online & 52.16\% & 50.00\% & 54.32\%\tabularnewline \cline{2-5} & Batch & 50.58\% & 48.41\% & 52.74\%\tabularnewline \hline \multirow{3}{*}{Two-factor model} & Two-stage & 51.30\% & 49.13\% & 53.46\% \tabularnewline \cline{2-5} & Online & 50.34\% & 48.17\% & 52.50\%\tabularnewline \cline{2-5} & Batch & 50.86\% & 48.69\% & 53.03\%\tabularnewline \hline \multirow{3}{*}{Rank-four model} & Two-stage & 51.34\% & 49.17\% & 53.51\% \tabularnewline \cline{2-5} & Online & 50.34\% & 48.17\% & 52.50\%\tabularnewline \cline{2-5} & Batch & 50.58\% & 48.41\% & 52.74\%\tabularnewline \hline \multirow{3}{*}{Score difference} & Two-stage & 52.59\% & 50.42\% & 54.75\%\tabularnewline \cline{2-5} & Online & 47.17\% & 45.01\% & 49.34\%\tabularnewline \cline{2-5} & Batch & 51.10\% & 48.93\% & 53.27\%\tabularnewline \hline \multirow{2}{*}{\'{E}l\H{o} model with covariates} & Two-stage & \textbf{52.78\%} & 50.61\% & 54.95\%\tabularnewline \cline{2-5} & Batch & 50.86\% & 48.69\% & 53.03\%\tabularnewline \hline Trace norm regularized model & Batch & 45.89\% & 43.54\% & 48.21\%\tabularnewline \hline \end{tabular} \par\end{centering} \caption{Structured log-odds model's accuracy on testing data. The column ``Type'' specifies the type of the model; the column ``Method'' specifies the training method. Testing accuracy is given in the column ``Acc''. The last two columns gives the 95\% confidence interval for testing accuracy \label{tab:elo-batch}} \end{table} \begin{table}[H] \begin{centering} \begin{tabular}{|c|c|c|c|c|} \hline Type & Method & Mean log-loss & 2.5\% & 97.5\%\tabularnewline \hline \hline Benchmark & Bet365 odds & \textbf{-0.9669} & -0.9877 & -0.9460\tabularnewline \hline \multirow{3}{*}{\'{E}l\H{o} model} & Two-stage & -0.9854 & -1.0074 & -0.9625\tabularnewline \cline{2-5} & Online & -1.0003 & -1.0254 & -0.9754\tabularnewline \cline{2-5} & Batch & -1.0079 & -1.0314 & -0.9848\tabularnewline \hline \multirow{3}{*}{Two-factor model} & Two-stage & -1.0058 & -1.0286 & -0.9816\tabularnewline \cline{2-5} & Online & -1.0870 & -1.1241 & -1.0504\tabularnewline \cline{2-5} & Batch & -1.0158 & -1.0379 & -0.9919\tabularnewline \hline \multirow{3}{*}{Rank-four model} & Two-stage & -1.0295 & -1.0574 & -1.0016\tabularnewline \cline{2-5} & Online & -1.1024 & -1.0638 & -1.1421\tabularnewline \cline{2-5} & Batch & -1.0078 & -1.0291 & -0.9860\tabularnewline \hline \multirow{3}{*}{Score difference} & Two-stage & -0.9828 & -1.0034 & -0.9623\tabularnewline \cline{2-5} & Online & -1.1217 & -1.1593 & -1.0833\tabularnewline \cline{2-5} & Batch & -1.0009 & -1.0206 & -0.9802\tabularnewline \hline \multirow{2}{*}{\'{E}l\H{o} model with covariates} & Two-stage & \textbf{-0.9807} & -1.0016 & -0.9599\tabularnewline \cline{2-5} & Batch & -1.0002 & -1.0204 & -0.9798\tabularnewline \hline \end{tabular} \par\end{centering} \caption{Structured log-odds model's mean log-likelihood on testing data. The column ``Type'' specifies the type of the model; the column ``Method'' specifies the training method. Mean out-of-sample log-likelihood is given in the column ``Mean log-loss''. The last two columns gives the 95\% confidence interval for mean out-of-sample log-likelihood\label{tab:elo-batch-1}} \end{table} \subsubsection{Performance of the batch learning models} This experiment compares the performance of batch learning models. The following list shows all models examined by this experiment: \begin{enumerate} \item GLM with elastic net penalty using multinomial link function \item GLM with elastic net penalty using ordinal link function \item Random forest \item Dixon-Coles model \end{enumerate} The first three models are machine learning models that can be trained on different features. The following features are considered in this experiment: \begin{enumerate} \item Team id: the identity of home team and away team \item Ranking: the team's current ranking in Championship points and goals \item VS: the percentage of time that home team beats away team in last 3, 6, and 9 matches between them \item Moving average: the moving average of the following monthly features using lag 3, 6, 12, and 24 \begin{enumerate} \item percentage of winning at home \item percentage of winning away \item number of matches at home \item number of matches away \item championship points earned \item number of goals won at home \item number of goals won away \item number of goals conceded at home \item number of goals conceded away \end{enumerate} \end{enumerate} The testing accuracy and out-of-sample log-likelihood are summarized in table \ref{tab:acc_batch} and table \ref{tab:log-lik-batch}. All models perform better than the baseline benchmark, but no model seems to outperform the state-of-the-art benchmark (betting odds). We applied statistical testing to understand the following questions \begin{enumerate} \item Does the GLM with ordinal link function perform better than the GLM with multinomial link function? \item Which set of features are most useful to make prediction? \item Which model performs best among GLM, Random forest, and Dixon-Coles model? \end{enumerate} For question one, we formulate the hypothesis as: \begin{description} \item [{(H3):}] Null hypothesis: for a given set of feature, the GLM with ordinal link function and the GLM with multinomial link function produce the same mean out-of-sample log-likelihood. Alternative hypothesis: for a given set of feature, the mean out-of-sample log-likelihood is different for the two models. \end{description} The p-values for these tests are summarized in table \ref{tab:p-values-for-H5}. In three out of four scenarios, the test is not significant. There does not seem to be enough evidence against the null hypothesis. Hence, we retain our believe that the GLM with different link functions have the same performance in terms of mean out-of-sample log-likelihood. \begin{table} \begin{centering} \begin{tabular}{|c|c|} \hline Features & p-value\tabularnewline \hline \hline Team\_id only & 0.148\tabularnewline \hline Team\_id and ranking & 0.035\tabularnewline \hline Team\_id and VS & 0.118\tabularnewline \hline Team\_id and MA & 0.121\tabularnewline \hline \end{tabular} \par\end{centering} \caption{p-values for H3\label{tab:p-values-for-H5}} \end{table} For question two, we observe that models with the moving average feature have achieved better performance than the same model trained with other features. We formulate the hypothesis as: \begin{description} \item [{(H4):}] Null hypothesis: for a given model, the moving average feature and an alternative feature set produce the same mean out-of-sample log-likelihood. Alternative hypothesis: for a given model, the mean out-of-sample log-likelihood is higher for the moving average feature. \end{description} The p-values are summarized in table \ref{tab:p-values-for-H6}. The tests support our believe that the moving average feature set is the most useful one among those examined in this experiment. \begin{table} \begin{centering} \begin{tabular}{|c|c|c|} \hline Features & GLM1 & GLM2\tabularnewline \hline \hline Team\_id only & $2.7\times10^{-12}$ & $5.3\times10^{-8}$\tabularnewline \hline Team\_id and ranking & $1.2\times10^{-9}$ & $3.7\times10^{-6}$\tabularnewline \hline Team\_id and VS & 0.044 & 0.004\tabularnewline \hline \end{tabular} \par\end{centering} \caption{p-values for H4: the column ``Features'' are the alternative features compared with the moving average features. The next two columns contain the p-values for the GLM with multinomial link function (GLM1) and the GLM with ordinal link function (GLM2) \label{tab:p-values-for-H6}} \end{table} Finally, we perform comparison among different models. The comparisons are made between the GLM with multinomial link function, Random forest, and Dixon-Coles model. The features used are the moving average feature set. The p-values are summarized in table \ref{tab:p-values-for-H7}. The tests detect a significant difference between GLM and Random forest, but the other two pairs are not significantly different. We apply the p-value adjustment using Holm's method in order to control family-wise type-one error \citep{sinclair2013alpha}. The adjusted p-values are not significant. Hence, we retain our belief that the three models have the same predictive performance in terms of mean out-of-sample log-likelihood. \begin{table} \begin{centering} \begin{tabular}{|c|c|c|} \hline Comparison & p-value & adjusted\tabularnewline \hline \hline GLM and RF & 0.03 & 0.08\tabularnewline \hline GLM and DC & 0.48 & 0.96\tabularnewline \hline DC and RF & 0.54 & 0.96\tabularnewline \hline \end{tabular} \par\end{centering} \caption{p-values for model comparison: the column ``Comparison'' specifies which two models are being compared. ``RF'' stands for Random forest; ``DC'' stands for the Dixon-Coles model. The column ``p-value'' contains the two-sided p-value of the corresponding paired t-test. The column ``adjusted'' shows the adjusted p-values for multiple testing\label{tab:p-values-for-H7}} \end{table} \begin{table} \begin{centering} \begin{tabular}{|c|c|c|c|c|} \hline Models & Features & Acc & 2.5\% & 97.5\% \tabularnewline \hline \hline \multirow{2}{*}{Benchmark } & Home team win & 46.07\% & 43.93\% & 48.21\% \tabularnewline \cline{2-5} & Bet365 odds & 54.13\% & 51.96\% & 56.28\% \tabularnewline \hline \multirow{4}{*}{GLM1} & Team\_id only & 50.05\% & 47.88\% & 52.22\% \tabularnewline \cline{2-5} & Team\_id and ranking & 50.62\% & 48.45\% & 52.79\% \tabularnewline \cline{2-5} & Team\_id and VS & 51.25\% & 49.08\% & 53.41\% \tabularnewline \cline{2-5} & Team\_id and MA & \textbf{52.69\% } & 50.52\% & 54.85\% \tabularnewline \hline \multirow{4}{*}{GLM2} & Team\_id only & 50.67\% & 48.52\% & 52.82\% \tabularnewline \cline{2-5} & Team\_id and ranking & 50.24\% & 48.09\% & 52.38\% \tabularnewline \cline{2-5} & Team\_id and VS & 51.92\% & 49.75\% & 54.08\% \tabularnewline \cline{2-5} & Team\_id and MA & 52.93\% & 50.76\% & 55.09\% \tabularnewline \hline RF & Team\_id and MA & 52.06\% & 49.89\% & 54.23\%\tabularnewline \hline Dixon-Coles & - & 52.54\% & 50.40\% & 54.68\% \tabularnewline \hline \end{tabular} \par\end{centering} \caption{Testing accuracy for batch learning models: The column ``Type'' specifies the type of the model; ``GLM1'' refers to the GLM with multinomial link function, and ``GLM2'' refers to the GLM with ordinal link function. column ``Models'' specifies the model, and the column ``Features'' specifies the features used to train the model. Testing accuracy is given in the column ``Acc''. The last two columns gives the 95\% confidence interval for testing accuracy.\label{tab:acc_batch}} \end{table} \begin{table} \begin{centering} \begin{tabular}{|c|c|c|c|c|} \hline Models & Features & Mean log-loss & 2.5\% & 97.5\% \tabularnewline \hline \hline \multirow{1}{*}{Benchmark} & Bet365 odds & -0.9669 & -0.9877 & -0.9460\tabularnewline \hline \multirow{4}{*}{GLM1} & Team\_id only & -1.0123 & -1.0296 & -0.9952\tabularnewline \cline{2-5} & Team\_id and ranking & -1.0006 & -1.0175 & -0.9829\tabularnewline \cline{2-5} & Team\_id and VS & -0.9969 & -1.0225 & -0.9721\tabularnewline \cline{2-5} & Team\_id and MA & \textbf{-0.9797} & -0.9993 & -0.9609\tabularnewline \hline \multirow{4}{*}{GLM2} & Team\_id only & -1.0184 & -1.0399 & -0.9964\tabularnewline \cline{2-5} & Team\_id and ranking & -1.0097 & -1.0317 & -0.9874\tabularnewline \cline{2-5} & Team\_id and VS & -1.0077 & -1.0338 & -0.9813\tabularnewline \cline{2-5} & Team\_id and MA & -0.9838 & -1.0028 & -0.9656\tabularnewline \hline RF & Team\_id and MA & -0.9885 & -1.0090 & -0.9683\tabularnewline \hline Dixon-Coles & - & -0.9842 & -1.0076 & -0.9610\tabularnewline \hline \end{tabular} \par\end{centering} \caption{out-of-sample log-likelihood for batch learning models: The column ``Type'' specifies the type of the model; ``GLM1'' refers to the GLM with multinomial link function, and ``GLM2'' refers to the GLM with ordinal link function. the column ``Models'' specifies the model, and the column ``Features'' specifies the features used to train the model. Mean out-of-sample log-likelihood is given in the column ``Mean log-loss''. The last two columns gives the 95\% confidence interval for mean out-of-sample log-likelihood.\label{tab:log-lik-batch}} \end{table} \subsection{Fairness of the English Premier League ranking} ``Fairness'' as a concept is statistically undefined and due to its subjectivity is not empirical unless based on peoples' opinions. The latter may wildly differ and are not systematically accessible from our data set or in general. Hence we will base our study of the Premier League ranking scheme's ``fairness'' on a surrogate derived from the following plausibility considerations: Ranking in any sport should plausibly be based on the participants' skill in competing in official events of that sport. By definition the outcomes of such events measure the skill in competing at the sport, distorted by a possible component of ``chance''. The ranking, derived exclusively from such outcomes, will hence also be determined by the so-measured skills and a component of ``chance''. A ranking system may plausibly be considered fair if the final ranking is only minimally affected by whatever constitutes ``chance'', while accurately reflecting the ordering of participating parties in terms of skill, i.e., of being better at the game. Note that such a definition of fairness is disputable, but it may agree with the general intuition when ranking players of games with a strong chance component such as card or dice games, where cards dealt or numbers thrown in a particular game should, intuitively, not affect a player's rank, as opposed to the player's skills of making the best out of a given dealt hand or a dice throw. Together with the arguments from Section~\ref{sub:intro_one} which argue for predictability-in-principle surrogating skill, and statistical noise surrogating chance, fairness may be surrogated as the stability of the ranking under the best possible prediction that surrogates the ``true odds''. In other words, if we let the same participants, under exactly the same conditions, repeat the whole season, and all that changes is the dealt cards, the thrown numbers, and similar possibly unknown occurrences of ``chance'', are we likely to end up with the same ranking as the first time? While of course this experiment is unlikely to be carried out in real life for most sports, the best possible prediction which is surrogated by the prediction by the best accessible predictive model yields a statistically justifiable estimate for the outcome of such a hypothetical real life experiment. To obtain this estimate, we consider the as the ``best accessible predictive model'' the Bradley-Terry-\'{E}l\H{o} model with features, learnt by the two-stage update rule (see Section~\ref{sub:Performance-elo}), yielding a probabilistic prediction for every game in the season. From these predictions, we may independently sample match outcomes and final rank tables according to the official scoring and ranking rules. Figure~\ref{fig:final_ranking} shows estimates for the distribution or ranks of Premier League teams participating in the 2010 season. \begin{figure}[H] \begin{centering} \includegraphics[scale=0.8]{figures/final_ranking.png} \par\end{centering} \caption{Estimated probability for each team participating in the English Premier League season 2010-2011 to obtain the given final rank. Rows are indexed by the different teams in the Premier League of 2010-2011, ordered descendingly by their actual final rank. The x-axis is indexed by the possible ranks from 1 (best) to 20 (worst). The horizontal box-plots are obtained from a Monte-Carlo-sample from 10.000 of the predictive ranking distribution; boxes depict estimates the 25\%, 50\% and 75\% quantiles of the predictive distribution's Monte Carlo estimate, with whiskers being min/max or 1.5IQR. \label{fig:final_ranking}} \end{figure} It may be observed that none of the teams, except Manchester United, ends up with the same rank they achieved in reality in more than 50\% of the cases. For most teams, the middle 50\% are spread over 5 or more ranks, and for all teams, over 2 or more. From a qualitative viewpoint, the outcome for most teams appears very random, hence the allocation of the final rank seems qualitatively similar to a game of chance notable exceptions being Manchester United and Chelsea whose true final rank is similar to a narrow expected/predicted range. It is also worthwhile noting that Arsenal has been predicted/expected among the first three with high confidence, but eventually was ranked fourth. The situation is qualitatively similar for later years, though not shown here. \newpage \section{Discussion and Summary\label{sec:Summary-and-Conclusion}} We discuss our findings in the context of our questions regarding prediction of competitive team sports and modelling of English Premier League outcomes, compare Section~\ref{sec:Questions} \subsection{Methodological findings} As the principal methodological contribution of this study, we have formulated the Bradley-Terry-\'{E}l\H{o} model in a joint form, which we have extended to the flexible class of structured log-odds models. We have found structured log-odds models to be potentially useful in the following way: \begin{enumerate} \item[(i)] The formulation of the Bradley-Terry-\'{E}l\H{o} model as a parametric model within a supervised on-line setting solves a number of open issues of the heuristic \'{E}l\H{o} model, including setting of the K-factor and new players/teams. \item[(ii)] In synthetic experiments, higher rank \'{E}l\H{o} models outperform the Bradley-Terry-\'{E}l\H{o} model in predicting competitive outcomes if the generative truth is higher rank. \item[(iii)] In real world experiments on the English Premier league, we have found that the extended capability of structured log-odds models to make use of features is useful as it allows better prediction of outcomes compared to not using features. \item[(iv)] In real world experiments on the English Premier league, we have found that our proposed two-stage training strategy for on-line learning with structured log-odds models is useful as it allows better prediction of outcomes compared to using standard on-line strategies or batch training. \end{enumerate} We would like to acknowledge that many of the mentioned suggestions and extensions are already found in existing literature, while, similar to the Bradley-Terry and \'{E}l\H{o} models in which parsimonious parametric form and on-line learning rule have been separated, those ideas usually appear without being joint to a whole. We also anticipate that the highlighted connections to generalized linear models, low-rank matrix completion and neural networks may prove fruitful in future investigations. \subsection{Findings on the English Premier League} The main empirical on the English Premier League data may be described as follows. \begin{enumerate} \item[(i)] The best predictions, among the methods we compared, are obtained from a structured log-odds model with rank one and added covariates (league promotion), trained via the two-stage strategy. Not using covariates or the batch training method makes the predictions (significantly) worse (in terms of out-of-sample likelihood). \item[(ii)] All our models and those we adapted from literature were outperformed by the Bet365 betting odds. \item[(iii)] However, all informed models were very close to each other and the Bet 365 betting odds in performance and not much better than the uninformed baseline of team-independent home team win/draw/lose distribution. \item[(iv)] Ranking tables obtained from the best accessible predictive model (as a surrogate for the actual process by which it is obtained, i.e., the games proper) are, qualitatively, quite random, to the extent that most teams may end up in wildly different parts of the final table. \end{enumerate} While we were able to present a parsimonious and interpretable state-of-art model for outcome prediction for the English Premier League, we found it surprising how little the state-of-art improves above an uninformed guess which already predicts almost half the (win/lose/draw) outcomes correctly, while differences between the more sophisticated methods range in the percents. Given this, it is probably not surprising that a plausible surrogate for humanity's ``secret'' or non-public knowledge of competitive sports prediction, the Bet365 betting odds, is not much better either. Note that this surrogate property is strongly plausible from noticing that offering odds leading to a worse prediction leads to an expected loss in money, hence the market indirectly forces bookmakers to disclose their best prediction\footnote{ The expected log-returns of a fractional portfolio where a fraction $q_i$ of the money is bet on outcome $i$ against a bookmaker whose odds correspond to probabilities $p_i$ are $\ensuremath{\mathbb{E}} [L_\ell (p,Y)] - \ensuremath{\mathbb{E}}[L_\ell (q,Y)] - c$ where $L_\ell$ is the log-loss and $c$ is a vigorish constant. In this utility quantifier, portfolio composition and bookmaker odds are separated, hence in a game theoretic adversarial minimax/maximin sense, the optimal strategies consist in the bookmaker picking $p$ and the player picking $q$ to be their best possible/accessible prediction, where ``best'' is measured through expected log-loss (or an estimate thereof). Note that this argument does not take into account behavioural aspects or other utility/risk quantifiers such as a possible risk premium, so one should consider it only as an approximation, though one that is plausibly sufficient for the qualitative discussion in-text. }. Thus, the continued existence of betting companies hence may lead to the belief that this is possibly rather due to predictions of ordinary people engaged in betting that are worse than uninformed, rather than betting companies' capability of predicting better. Though we have not extensively studied betting companies empirically, hence this latter belief is entirely conjectural. Finally, the extent to which the English Premier League is unpredictable raises an important practical concern: influential factors cannot be determined from the data if prediction is impossible, since by recourse to the scientific method assuming an influential factor is one that improves prediction. Our results above allow to definitely conclude only three such factors which are observable, namely a general ``good vs bad'' quantifier for whatever one may consider as a team's ``skills'', which of the teams is at home, and the fact whether the team is new to the league. As an observation, this is not very deep or unexpected - the surprising aspect is that we were not able to find evidence for more. On a similar note, it is surprising how volatile a team's position in the final ranking tables seems to be, given the best prediction we were able to achieve. Hence it may be worthwhile to attempt to understand the possible sources of the observed nigh-unpredictability. On one hand, it can simply be that the correct models are unknown to us and the right data to make a more accurate prediction have been disregarded by us. Though this is made implausible by the observation that the betting odds are similarly bad in predicting, which is somewhat surprising as we have not used much of possibly available detail data such as in-match data and/or player data (which are heavily advertised by commercial data providers these days). On the other hand, unpredictability may be simply due to a high influence of chance inherent to English Premier League games, similar to a game of dice that is not predictable beyond the correct odds. Such a situation may plausibly occur if the ``skill levels'' of all the participating teams are very close - in an extreme case, where 20 copies of the same team play against each other, the outcome would be entirely up to chance as the skills match exactly, no matter how good or bad these are. Rephrased differently, a game of skill played between two players of equal skill becomes a game of chance. Other plausible causes of the situation is that the outcome a Premier League game is more governed by chance and coincidence than by skills in the first place, or that there are unknown influential factors which are unobserved and possibly distinct from both chance or playing skills. Of course, the mentioned causes do not exclude each other and may be present in varying degrees not determinable from the data considered in this study. From a team's perspective, it may hence be interesting to empirically re-evaluate measures that are very costly or resource consuming under the aspect of predictive influence in a similar analysis, say. \subsection{Open questions} A number of open research questions and possible further avenues of investigation have already been pointed out in-text. We summarize what we believe to be the most interesting avenues for future research: \begin{enumerate} \item[(i)] A number of parallels have been highlighted between structured log-odds models and neural networks. It would be interesting to see whether adding layers or other ideas of neural network flavour are beneficial in any application. \item[(ii)] The correspondence to low-rank matrix completion has motivated a nuclear norm regularized algorithm; yielding acceptable results in a synthetic scenario, the algorithm did not perform better than the baseline on the Premier League data. While this might be due to the above-mentioned issues with that data, general benefits of this alternative approach to structured log-odds models may be worth studying - as opposed to training approaches closer to logistic regression and neural networks. \item[(iii)] The closeness to low-rank matrix completion also motivates to study identifiability and estimation variance bounds on particular entries of the log-odds matrix, especially in a setting where pairings are not independently or uniformly sampled. \item[(iv)] While our approach to structured log-odds is inherently parametric, it is not fully Bayesian - though naturally, the benefit of such an approach may be interesting to study. \item[(v)] We did not investigate in too much detail the use of features such as player data, and structural restrictions on the feature coefficient matrices and tensors. Doing this, not necessarily in the context of the English Premier League, might be worthwhile, though such a study would have to rely on good sources of added feature data to have any practical impact. \end{enumerate} On a more general note, the connection between neural networks and low-rank or matrix factorization principles apparent in this work may also be an interesting direction to explore, not necessarily in a competitive outcome prediction context. \bibliographystyle{plainnat}
{ "attr-fineweb-edu": 2.986328, "attr-cc_en_topic": 0, "domain": "arxiv" }
BkiUb4rxK6wB9mpb04Md
\section{Introduction} Sports riots are a worldwide phenomenon and great cause for concern, due to the financial and physical damages they can incur. Moreover, the occurrence of riots can incite a sense of fear amongst the public, with people concerned for their well-being and safety. For example, the riots which occurred in June 2011 in Vancouver, upon the city home team losing the Stanley Cup ice hockey tournament, incurred approximately C\$3.78 million in damages, 52 reported assaults, and 250 visits to emergency rooms at nearby hospitals \cite{Vancouver2011rep}. In February 2012, 79 people were killed in a riot at a football match, when Al-Masry supporters charged the field after a victory over Al-Ahly club \cite{port-said}. By contrast, legitimate protests associated with social reform and activism have only in rare occasions led to riotous behaviour, as the impetus of these riots is directly linked to aggressive intervention by law enforcement officials \cite{campbell2004remote,hopkins2014football}. As such, we specifically focus on sports riots prior to police intervention, in an effort to distinguish illegitimate riotous behaviour arising in sporting events from actions linked to peaceful protest. While public policies have been introduced with the intention of curbing hooliganism and anti-social behaviour arising from sporting events, including football banning orders in the UK \cite{stott2006football,hopkins2014football,hester2021assessing}, many of these policies have been criticised for their impact upon civil liberties and human rights \cite{stott2006football,hopkins2014football,hester2021assessing}. In order to lessen and limit the negative impacts of sports riots, further understanding has been sought from social-psychological \cite{zani, mannleonpearce, soreloser, russellpersonalities, dunning} and physiological perspectives \cite{bloodpressure,testosterone}. Theoretical studies and practical investigations have aimed to relate riot initiations and escalations to several variables, including environmental factors \cite{baron, geen, dewar}, situational factors \cite{gaskell, semyonov}, the influence of alcohol \cite{fitzpatrick, piquero, guschwan, peitersen}, and a myriad of social factors \cite{mannleonpearce, zani, arms1997, apter92, soreloser, russellpersonalities, russell,ostrowsky,caseboucher,spaaij,lewisbook,fields2007}. These studies have been conducted across a wide range of different sports, sporting events, level of play, and countries. As such, while studies investigating the relation of some factors are in agreement, others stand in conflict. Nevertheless, a common element in riotous behaviour is the rise of crowd behaviours \cite{socidtheo, idandsoctheo, granovetter}. In particular, \cite{granovetter} focuses on the idea of `thresholds' for an individual's participation in a group activity, suggesting that personal thresholds can decrease as other individuals participate. More recently, mathematical modelling perspectives are sought after to understand riot dynamics and implement control measures with a view to reducing consequences such as property damage (c.f. \cite{bonnasse2018epidemiological, London2011, nonlinearurbancrime, communaldisorder}). Previous mathematical models of riots \cite{London2011}, urban crime \cite{nonlinearurbancrime}, and communal disorder \cite{communaldisorder} fit deterministic models to realistic patterns and obtained data. While these models aim to reproduce the population-level behaviour of riot dynamics, few models have been proposed that emphasise the individual-level interactions that give rise to rioting (c.f. \cite{bonnasse2018epidemiological, alsenafi2021multispecies}). One mathematical framework that is suitable for describing such individual-level interactions is by using stochastic agent-based models, whereby individuals (agents) interact with one another according to pre-defined processes on an underlying spatial grid. Such models have found great use in cell-level dynamics \cite{multi-excl,simpsoncellprolif,byrne,tissueabm}, ecology \cite{fadaipopallee}, and epidemic modelling \cite{perez2009agent,ajelli2010comparing}. In this work, we develop a stochastic agent-based model (ABM) that characterises individual-level mechanisms giving rise to population-level riotous behaviour. Individual agents, classified as `rioters' or `bystanders', move on a two-dimensional square lattice restricted by exclusion processes to prevent agent overlap \cite{chowdhury,multi-excl,simpsoncellprolif, fadaiunpackallee} and can either be recruited or defect from their respective sub-population \cite{multi-excl}. In particular, we allow recruitment and defection processes to vary with local population density: the recruitment of bystanders changes with the number of nearby rioters, while rioters defect based on the number of nearby bystanders (c.f. \cite{fadaiunpackallee}). While multi-population stochastic ABMs and density-dependent reaction processes in ABMs have been previously considered separately, the combination of these two ABM frameworks, as we present in this work, has not been previously examined. Consequently, this agent-based modelling framework provides the unifying link between multi-population stochastic models and density-dependent reaction processes. Following an examination of the qualitative features of ABM simulations, we derive the continuum limit of the ABM in order to compare average individual-level dynamics with population-level descriptions of dynamics. The continuum description of this ABM framework is determined to be a system of nonlinear reaction-diffusion equations that describe the migration of both sub-populations, as well as the recruitment of bystanders and defection of rioters. We demonstrate good agreement between the ABM and continuum descriptions, which in turn provides further understanding of individual-level mechanisms that give rise to macroscale rioting phenomena. \section{Results and Discussion} In this stochastic agent-based modelling framework, we consider the population of two classes of agents, termed as `rioters' and `bystanders', on an $X{\Delta} \times Y{\Delta}$ lattice, where ${\Delta}$ is a typical amount of space an individual occupies. We focus on non-dimensional lattices (i.e., ${\Delta}=1$) and represent the location of the top right corner of each site in Cartesian co-ordinates as $(x_i,y_j)=(i,j)$, where $i=1,\dots,X$ and $j=1,\dots,Y$. A rioter at lattice site $(i,j)$ and time $t$ is denoted as $r_{i,j}(t)$; similarly, $b_{i,j}(t)$ represents a bystander at lattice site $(i,j)$ and time $t$. Furthermore, we employ \textit{exclusion processes} to ensure that at most one agent can occupy a lattice site at any given time \cite{chowdhury,multi-excl,simpsoncellprolif, fadaiunpackallee}. The initial configuration of each sub-population, $r_{i,j}(0)$ and $b_{i,j}(0)$, is left to the user's choice. If spatially uniform initial conditions are desired, rioters and bystanders can be initially seeded on the lattice with constant probabilities $r_0$ and $b_0$. Regardless of their initial configurations, individuals in both sub-populations move to adjacent lattice sites with unbiased direction with a single motility rate $m$. Reflecting boundary conditions are employed on the boundaries of the lattice domain for simplicity. The ABM also incorporates agent recruitment (a bystander becoming a rioter) and defection (a rioter becoming a bystander), where the recruitment and defection rates vary with local density \cite{fadaiunpackallee}. As a simple metric of local density, the recruitment and defection rates will change with how many rioters, from zero to four, are present at lattice sites in their von Neumann neighbourhoods (i.e., the adjacent North, South, East, and West lattice sites). We consider the recruitment processes to have non-negative rates \(\lambda_{r0}\), \(\lambda_{r1}\), \(\lambda_{r2}\), \(\lambda_{r3}\) and \(\lambda_{r4}\), respectively. Similarly, the defection process have rates \(\lambda_{d0}\), \(\lambda_{d1}\), \(\lambda_{d2}\), \(\lambda_{d3}\) and \(\lambda_{d4}\), due to zero, one, two, three and four neighbouring bystanders, respectively. While the recruitment and defection rates \(\lambda_{rn}\) and \(\lambda_{dn}\) are explicitly related to local pairwise interactions of neighbours for $n\ge1$, the rates \(\lambda_{r0}\) and \(\lambda_{d0}\) can also represent \textit{global}, non-local effects of recruitment and defection processes, including spontaneous rioting, lack of interest that devolves into defection, and social media influences \cite{baker2011mediated}. Finally, we make the additional assumption that individuals move much more often than being recruited or defecting, i.e. $m\gg \max_n(\lambda_{rn}, \lambda_{dn})$. This assumption is a standard model simplification for fast-moving populations \cite{fadaiunpackallee}. Using a Gillespie approach \cite{gillespie1977exact}, we are able to simulate the number of both agent sub-populations as a function of time and space (Algorithm 1); a MATLAB implementation of this algorithm can be found at \url{https://github.com/nfadai/Clements2021}. \begin{algorithm} \caption{Pseudocode for agent-based simulations of rioter and bystander dynamics} \label{alg:ABM} \begin{algorithmic}[1] \State Set up an \(X\times Y\) lattice and specify initial placement of rioters and bystanders; \State Specify counters \(Q_{r}(t)\) and \(Q_{b}(t)\); \State Specify recruitment rates $\lambda_{rn}$, defection rates $\lambda_{dn}$, and motility rate $m$; \State Set \(t=0\) and specify terminating time $t_{\text{end}}$; \While{\(t<t_{\text{end}}\)} \State Calculate random variables \(u_{1}\) and \(u_{2}\), uniformly distributed on \([0,1]\); \State Select an agent at random and determine its sub-population (rioter or bystander); \State Compute the number of nearest neighbours \(n\) in the opposite sub-population of the chosen agent to determine \(\lambda_{rn}\) and \(\lambda_{dn}\); \State Calculate propensity \(p=(m+\lambda_{dn})Q_{r}(t)+(m+\lambda_{rn})Q_{d}(t)\); \State Calculate time step duration \(\tau=-\ln(u_{1})/p\); \State \(t=t+\tau\); \State \(Q_{r}(t)=Q_{r}(t-\tau)\); \State \(Q_{b}(t)=Q_{b}(t-\tau)\); \If{Agent is a rioter} \If{\(u_{2}<m/(m+\lambda_{dn})\)} \State Choose a neighbouring site at random to move to; \If{Neighbouring site is empty} \State Move rioter to chosen site; \Else \State Nothing happens; \EndIf \Else \State Rioter becomes a bystander; \State \(Q_{r}(t)=Q_{r}(t)-1\); \State \(Q_{b}(t)=Q_{b}(t)+1\); \EndIf \Else \If{\(u_{2}<m/(m+\lambda_{rn})\)} \State Choose a neighbouring site at random to move to; \If{Neighbouring site is empty} \State Move bystander to chosen site; \Else \State Nothing happens; \EndIf \Else \State Bystander becomes a rioter; \State \(Q_{b}(t)=Q_{b}(t)-1\); \State \(Q_{r}(t)=Q_{r}(t)+1\); \EndIf \EndIf \EndWhile \end{algorithmic} \end{algorithm} \subsection{ABM simulations of riots} To examine the qualitative features of ABM simulations, we consider various choices of recruitment and defection rates and observe the spatial and temporal evolution of the total agent population. In particular, we will focus our simulations on a particular lattice configuration that represents a single street. This geometry is obtained by using the domain \(0< x\leq 200\), \(0<y\leq 20\), which is equivalent to specifying the lattice dimensions as \(X=200\) and \(Y=20\). Furthermore, the sub-population densities $\langle R(t) \rangle$ and $\langle B(t) \rangle$ can be computed by averaging over multiple ABM simulations: \begin{align} & \langle R(t) \rangle=\frac{1}{PXY}\sum_{p=1}^{P}Q_{r,p}(t), \label{eq:ravg} \\ &\langle B(t) \rangle=\frac{1}{PXY}\sum_{p=1}^{P}Q_{b,p}(t).\label{eq:bavg} \end{align} Here, $Q_{r,p}(t)$ and $Q_{b,p}(t)$ are the total number of each sub-population on the lattice at time $t$, in the $p$th identically-prepared realisation of the ABM. The total number of identically-prepared realisations is $P$; we choose $P=20$ throughout this work. Finally, when employing spatially-dependent initial configurations that are spatially dependent in the $x$-direction alone, as will be examined in Section \ref{sec:SD}, we will also consider the sub-population densities averaged over multiple simulations and averaged in the $y$-direction alone: \begin{align} & \langle R(x,t) \rangle=\frac{1}{PY}\sum_{p=1}^{P}\sum_{j=1}^{Y}r_{i,j,p}(t), \label{eq:ravg2} \\ &\langle B(x,t) \rangle=\frac{1}{PY}\sum_{p=1}^{P}\sum_{j=1}^{Y}b_{i,j,p}(t).\label{eq:bavg2} \end{align} Here, $r_{i,j,p}(t)$ and $b_{i,j,p}(t)$ are the rioter and bystander occupancies at lattice site $(i,j)$ at time $t$ in the $p$th identically-prepared realisation of the ABM. \subsection{Spatially uniform initial conditions} We first consider results of the agent-based model for simulations beginning from spatially uniform initial conditions. We present snapshots of the two agent sub-populations for initial densities \(r_{0}=0.05\) and \(b_{0}=0.25\), representing situations where the majority of attendees at the sports event are not inclined to riot initially. We then consider three representative parameter sets associated with different levels of recruitment and defection: \begin{align} &\text{Mild Unrest:} &\quad &\lambda_{rn} = \begin{cases} 0, n=0,1, \\ 1, n=2,3,4, \end{cases} &\quad & \lambda_{dn} \equiv 1. \label{eq:Mild} \\ &\text{Moderate Unrest:} &\quad &\lambda_{rn} = \begin{cases} 0, n=0, \\ 1, n=1,2,3,4, \end{cases} &\quad & \lambda_{dn} = \begin{cases} 0, n=0, \\ 1, n=1,2,3,4. \label{eq:Med} \end{cases} \\ &\text{Severe Unrest:} &\quad &\lambda_{rn} \equiv 1, &\quad & \lambda_{dn} = \begin{cases} 0, n=0,1, \\ 1, n=2,3,4.\label{eq:High} \end{cases} \end{align} In the Mild Unrest regime, rioters defect at the same rate regardless of how many bystanders are present, while bystanders are only recruited when two or more rioters are nearby. The Severe Unrest regime swaps the recruitment and defection processes: bystanders can become rioters regardless of the number of nearby rioters, while rioters only defect when two or more bystanders are nearby. Finally, in the Moderate Unrest regime, bystanders can become rioters in the presence of at least one rioter, and vice versa for the defection processes. For all simulations, we take $m=100 \max_n(\lambda_{rn},\lambda_{dn})=100$ to ensure spatial uniformity is retained throughout. Depending on the level of unrest, three main qualitative features can be observed in the agent sub-populations. In the Mild Unrest parameter regime, shown in Figure \ref{fig:Mild1}, we observe that the population eventually all become bystanders. For larger amounts of unrest, such as the Moderate Unrest scenario shown in Figure \ref{fig:Med1}, the rioting sub-population persists, but the bystander population also persists in approximately equal numbers. Finally, in Figure \ref{fig:High1}, we see that despite there being many more bystanders than rioters initially, the Severe Unrest parameter regime overwhelms the defection processes and leads to the entire population becoming rioters. While by no means a comprehensive list of phenomena, the three unrest parameter regimes shown in Figures \ref{fig:Mild1}--\ref{fig:High1} demonstrate that the ABM framework can give rise to three main qualitative features: (i) the entire population becoming bystanders, (ii) a co-existence of rioters and bystanders, and (iii) the entire population becoming rioters. \begin{figure} \centering \includegraphics[width=.95\textwidth]{Figures/MildSim} \caption{A single realisation of rioters (red) and bystanders (blue) in the Mild Unrest parameter regime with initial densities \(r_{0}=0.05\) and \(b_{0}=0.25\).} \label{fig:Mild1} \end{figure} \begin{figure} \centering \includegraphics[width=.95\textwidth]{Figures/MedSim} \caption{A single realisation of rioters (red) and bystanders (blue) in the Moderate Unrest parameter regime with initial densities \(r_{0}=0.05\) and \(b_{0}=0.25\).} \label{fig:Med1} \end{figure} \begin{figure} \centering \includegraphics[width=.95\textwidth]{Figures/HighSim} \caption{A single realisation of rioters (red) and bystanders (blue) in the Severe Unrest parameter regime with initial densities \(r_{0}=0.05\) and \(b_{0}=0.25\).} \label{fig:High1} \end{figure} \subsection{Spatially uniform continuum limit}\label{sec:CL} While the ABM framework allows us to visualise individual simulations of rioting dynamics, it is often more convenient to examine a simpler mathematical description of the average behaviour of the ABM, called the \textit{continuum limit description} \cite{compart-based, multi-excl, fadaiunpackallee}. The continuum limit description gives us the ability to study global, deterministic features of the ABM when the number of lattice sites is large and the number of simulations being averaged is also large. As a result, we can compare the average ABM sub-population densities, $\langle R(t) \rangle$ and $\langle B(t) \rangle$, with their continuum limit analogues, denoted as $r(t)$ and $b(t)$ respectively. When the ABM employs spatially uniform initial conditions and the motility rate of agents $m$ is large, the net flux of agents entering and leaving each lattice site due to motility events is, on average, zero \cite{fadaiunpackallee}. Therefore, spatial derivatives in the continuum limit will vanish, meaning that the continuum description of the average sub-population densities, $0\le r, b \le 1$, are functions of time alone. For the derivation of the continuum limit of each sub-population, we follow \cite{compart-based, fadaiunpackallee} and consider each recruitment and defection processes individually. For recruitment of bystanders to rioters at rate \(\lambda_{rn}\), we need to consider all the spatial configurations for which a bystander has precisely \(n\) neighbouring sites occupied by rioters, and precisely \(4-n\) sites not occupied by rioters. Similarly, for the defection of rioters to bystanders at rate \(\lambda_{dn}\), a rioter must have exactly \(n\) neighbouring sites occupied by bystanders and the remaining \(4-n\) sites not occupied by bystanders. Accounting for all of these possibilities leads to the following continuum limit descriptions for \(r(t)\) and \(b(t)\): \begin{equation} \frac{\mathrm{d}r}{\mathrm{d}t}=-\frac{\mathrm{d}b}{\mathrm{d}t}=\underbrace{b\sum_{n=0}^{4}{\lambda_{rn}{4\choose{n}}r^{n}(1-r)^{4-n}}}_{\text{recruitment}}-\underbrace{r\sum_{n=0}^{4}{\lambda_{dn}{4\choose{n}}b^{n}(1-b)^{4-n}}}_{\text{defection}}. \label{eq:CL} \end{equation} Furthermore, due to the ABM reflecting boundary conditions and lack of any source or sink terms in the ABM framework, the total number of agents is conserved: \begin{equation} r(t)+b(t)=r_0+b_0:=K\le1. \end{equation} Therefore, we can rearrange \eqref{eq:CL} in terms of $r(t)$ alone: \begin{equation} b(t)=K-r(t), \qquad \frac{\mathrm{d}r}{\mathrm{d}t}=(K-r)\sum_{n=0}^{4}{\lambda_{rn}{4\choose{n}}r^{n}(1-r)^{4-n}}-r\sum_{n=0}^{4}{\lambda_{dn}{4\choose{n}}(K-r)^{n}(1-K+r)^{4-n}}. \label{eq:CL2} \end{equation} \subsubsection{Comparison of ABM agent density and continuum limit} To highlight the similarities between the continuum limit and the average behaviour of ABM simulations, we examine the population density of each sub-population in the parameter regimes described in equations \eqref{eq:Mild}--\eqref{eq:High}. From \eqref{eq:CL2}, the corresponding continuum limit descriptions of the rioter density for each parameter regime become the following: \begin{align} &\text{Mild Unrest:} &\quad&\frac{\mathrm{d}r}{\mathrm{d}t}= (K-r)r^2(3r^2-8r+6)-r, \label{eq:CLmild} \\ &\text{Moderate Unrest:} &\quad&\frac{\mathrm{d}r}{\mathrm{d}t}= (K-r)[1-(1-r)^4]-r[1-(1-K+r)^4],\label{eq:CLmed} \\ &\text{Severe Unrest:} &\quad&\frac{\mathrm{d}r}{\mathrm{d}t}= (K-r)-r(K-r)^2[3(K-r)^2-8(K-r)+6].\label{eq:CLhigh} \end{align} In the Mild Unrest case, the only steady-state for $r,b\in[0,K]$ is $(r,b)=(0,K)$, which is stable. Similarly, the Severe Unrest case only has $(r,b)=(K,0)$ as a steady-state, which is stable. Finally, in the Moderate Unrest case, there are three steady-states: $(r,b)=(0,K),(K/2,K/2),(K,0)$, which are unstable, stable, and unstable, respectively. While only a small representative of the sample parameter space, the continuum limit equations for $r$ and $b$ clearly show the possibility of three steady-state values for $r$: no rioters ($r=0$), all rioters ($r=K$) and an intermediate rioter population density in the interval $(0,K)$. \begin{figure} \centering \includegraphics[width=0.45\textwidth]{Figures/MildAvg} \includegraphics[width=0.45\textwidth]{Figures/MedAvg} \\ (a) \hspace{7cm} (b) \\ \includegraphics[width=0.45\textwidth]{Figures/HighAvg} \\ (c) \caption{Comparison of the average ABM behaviour over 20 identically-prepared simulations, $\langle R(t) \rangle$ and $\langle B(t) \rangle$, with their continuum limit descriptions, $r(t)$ and $b(t)$. All simulations begin with the initial densities \(r_{0}=0.05\) and \(b_{0}=0.25\) and the parameter regimes used are: (a) Mild Unrest; (b) Moderate Unrest; and (c) Severe Unrest.} \label{fig:CLvsABM} \end{figure} In Figure \ref{fig:CLvsABM}, we compare average ABM behaviour over 20 identically-prepared simulations, $\langle R(t) \rangle$ and $\langle B(t) \rangle$ defined in \eqref{eq:ravg} and \eqref{eq:bavg} and $P=20$, with their continuum limit descriptions, $r(t)$ and $b(t)$ defined in \eqref{eq:CL2}. The numerical solutions of \eqref{eq:CL2} are computed using \texttt{ode45} in MATLAB. We observe excellent agreement between the ABM and continuum descriptions of agent densities in the Mild and Severe Unrest regimes. In the Moderate Unrest regime, we note that while the same equilibrium density value is achieved, there is some discrepancy between the two model descriptions for intermediate time. As some continuum limit descriptions of ABM frameworks require additional refinements for accuracy, include agent state space, agent adhesion, and clustering effects (c.f. \cite{compart-based, gapfilling,johnston2020predicting,fadaiunpackallee}), we anticipate that the Moderate Unrest parameter regime will require additional terms in the continuum limit description for more accuracy. \subsection{Determining individual-level mechanisms from global population dynamics: inverse problem} It is important to emphasise at this point that the three parameter regimes considered in this section (Mild, Moderate and Severe Unrest) are by no means an exhaustive list of potential phenomena that can occur as predicted via the continuum limit. Since \eqref{eq:CL2} reduces to a polynomial in $r$ of degree 5, it is possible to have up to 5 equilibria in $[0,K]$. Additionally, it is more likely that we will know the \textit{global} trends in agent and bystander populations rather than their \textit{local}, individual-based mechanisms of rioting or defecting. Consequently, we will now explore the \textit{inverse problem} of obtaining the local recruitment and defection rates, i.e. $\lambda_{rn}$ and $\lambda_{dn}$, from a given continuum description of a particular rioter sub-population. To solve this inverse problem, we follow \cite{fadaiunpackallee} and apply the same methodologies to relate the continuum limit of a particular ABM parameter set to a given global population description of rioters. Firstly, we rewrite the continuum limit system shown in \eqref{eq:CL} in terms of Bernstein basis polynomials of fourth degree \cite{bernstein}: \begin{equation} \frac{\mathrm{d}r}{\mathrm{d}t}=-\frac{\mathrm{d}b}{\mathrm{d}t}=b\sum_{n=0}^{4}\lambda_{rn}B_{n,4}(r) -r\sum_{n=0}^{4}\lambda_{dn}B_{n,4}(b), \end{equation} where \begin{equation} B_{n,4}(x)={4\choose{n}}x^{n}(1-x)^{4-n},\quad n=0,1,2,3,4. \end{equation} We can then convert these Bernstein basis functions to the standard basis of monomials \(\{x^{0},x^{1},x^{2},x^{3},x^{4}\}\), by means of the following transformation \cite{farouki1987}: \begin{equation} x^{m}=\sum_{n=m}^{4}{\frac{{n\choose{m}}}{{4\choose{m}}}B_{n,4}(x)} \iff \mathbf{x}=\mathbf{M}\mathbf{b}, \end{equation} where \begin{align} \mathbf{x}= \begin{bmatrix} x^{0}\\ x^{1}\\ x^{2}\\ x^{3}\\ x^{4}\\ \end{bmatrix}, && \mathbf{M}= \begin{bmatrix} 1 & 1 & 1 & 1 & 1\\ 0 & 1/4 & 1/2 & 3/4 & 1\\ 0 & 0 & 1/6 & 1/2 & 1\\ 0 & 0 & 0 & 1/4 & 1\\ 0 & 0 & 0 & 0 & 1 \end{bmatrix}, && \mathbf{b}= \begin{bmatrix} B_{0,4}(x)\\ B_{1,4}(x)\\ B_{2,4}(x)\\ B_{3,4}(x)\\ B_{4,4}(x)\\ \end{bmatrix}.\label{matrixeq} \end{align} This one-to-one transformation enables us to directly identify population-level parameters with corresponding individual rates. In other words, if we assume that the population-level descriptions of recruitment and defection processes are expressed as \begin{equation} \frac{\mathrm{d}r}{\mathrm{d}t}=-\frac{\mathrm{d}b}{\mathrm{d}t}=b\sum_{n=0}^{4}\alpha_n r^n -r\sum_{n=0}^{4}\delta_n b^n, \end{equation} we are able to identify, by means of the Bernstein basis transformation, that \begin{equation} \sum_{n=0}^{4}\alpha_n r^n = \sum_{n=0}^{4}B_{n,4}(r) \left[\alpha_0 + \frac{\alpha_1 n}{4}+\frac{\alpha_2 n(n-1)}{12} + \frac{\alpha_3 n(n-1)(n-2)}{4!}+\frac{\alpha_4 n(n-1)(n-2)(n-3)}{4!} \right], \end{equation} which immediately implies that \begin{align} \lambda_{r0}&=\alpha_0, \\ \lambda_{r1}&=\alpha_0+\frac{\alpha_1}{4}, \\ \lambda_{r2}&=\alpha_0+\frac{\alpha_1}{2}+\frac{\alpha_2}{6}, \\ \lambda_{r3}&=\alpha_0+\frac{3\alpha_1}{4}+\frac{\alpha_2}{2}+\frac{\alpha_3}{4}, \\ \lambda_{r4}&=\alpha_0+\alpha_1+\alpha_2+\alpha_3+\alpha_4. \end{align} A near-identical calculation can be used to relate the global defection rate parameters, $\delta_n$, with their corresponding individual-level parameters, $\lambda_{dn}$. For ease of computation, it is worth noting that the individual-level rates $\lambda_{rn}$ can also be obtained by multiplying each row of $\mathbf{M}$ in \eqref{matrixeq} by their corresponding $\alpha_m$ values and summing the $n$th column. \subsubsection{A caveat on individual-level parameter identifiability} At this point, we should stress that the identifiability of these individual-level recruitment and defection mechanisms can only be uniquely determined if the global recruitment and defection rates are known separately to one another. Contrastingly, if only the \textit{net} global sub-population growth rate is known, the majority of the individual-level rates cannot be uniquely determined. To demonstrate this claim, suppose that the net sub-population growth of rioters is known to be a polynomial of degree 5 of fewer: \begin{equation} \frac{\mathrm{d}r}{\mathrm{d}t}=G(r) := \sum_{m=0}^{5} \beta_m r^m. \label{eq:Inv0} \end{equation} As the continuum limit shown in \eqref{eq:CL2}, i.e., the rioter sub-population growth rate, is also a polynomial of degree 5 or fewer, we can attempt to determine unique choices of $\lambda_{rn}$ and $\lambda_{dn}$ that will identically match $G(r)$: \begin{equation} \frac{\mathrm{d}r}{\mathrm{d}t}=(K-r)\sum_{n=0}^{4}{\lambda_{rn}{4\choose{n}}r^{n}(1-r)^{4-n}}-r\sum_{n=0}^{4}{\lambda_{dn}{4\choose{n}}(K-r)^{n}(1-K+r)^{4-n}} = \sum_{m=0}^{5} \beta_m r^m. \label{eq:Inv} \end{equation} It immediately follows that, due to 10 unknown parameters on the left hand side of \eqref{eq:Inv} being matched to 6 known parameters on the right hand side of \eqref{eq:Inv}, the associated inverse problem is underdetermined. However, by evaluating \eqref{eq:Inv} at $r=0,K$, we are able to uniquely determine two of the individual-level rates, $\lambda_{r0}$ and $\lambda_{d0}$: \begin{equation} \lambda_{r0}=\frac{\beta_0}{K},\qquad \lambda_{d0}=-\sum_{m=0}^{5} \beta_m K^{m-1}. \end{equation} Since all individual-level rates are assumed to be non-negative, it follows that two key constraints of the global recruitment rate are \begin{equation} \beta_0\ge0,\qquad \sum_{m=0}^{5} \beta_m K^m \le0. \end{equation} In other words, the recruitment rate at $r=0$ must be non-decreasing, while the recruitment rate at $r=K$ must be non-increasing; both of these constraints are expected since the total number of agents must remain constant \cite{fadaiunpackallee}. The remaining eight individual-level recruitment and defection rates can be related by equating powers of $r^m, $ for $m=1,2,...,5$. However, we will still have at least three degrees of freedom in this reduced underdetermined system. As an illustrative example of the non-identifiability of the individual-level rates, let us consider a rioter growth rate that behaves akin to logistic growth (c.f. \cite{fadaipopallee,murray,fadaiunpackallee}): \begin{equation} \frac{\mathrm{d}r}{\mathrm{d}t}=r(K-r). \label{eq:Inv2} \end{equation} It can be shown that there are four freely chosen parameters, $\lbrace A,B,C,D\rbrace$, that emerge when decomposing this rioter growth rate into a difference of recruitment and defection rates: \begin{equation} \frac{\mathrm{d}r}{\mathrm{d}t}=(K-r)[(1+A)r+Br^2+Cr^3+Dr^4]-r [A(K-r)+Br(K-r)+Cr^2(K-r)+Dr^3(K-r)]. \end{equation} Furthermore, by using the aforementioned Bernstein basis transformation shown in \eqref{matrixeq}, we determine that the individual-level recruitment and defection rates are \begin{align*} \lambda_{r0}&=\lambda_{d0}=0, \\ \lambda_{r1}&=\frac{1+A}{4}, \\ \lambda_{r2}&=\frac{1+A}{2}+\frac{ B}{6}, \\ \lambda_{r3}&=\frac{3(1+A)}{4}+\frac{B}{2}+\frac{C}{4}, \\ \lambda_{r4}&=1+A+B+C+D, \\ \lambda_{d1}&=\frac{A+KB+K^2C+K^3D}{4}, \\ \lambda_{d2}&=\frac{A+KB+K^2C+K^3D}{2}-\frac{B+2KC+3K^2D}{6}, \\ \lambda_{d3}&=\frac{3A}{4}+\frac{B(3K-2)}{4}-\frac{C(1-K)(1-3K)}{4}+\frac{3K(1-K)^2D}{4}, \\ \lambda_{d4}&=A-(1-K)B+(1-K)^2C-(1-K)^3D. \end{align*} While we require that all of these individual-level rates are non-negative, there is still a considerable subspace within $\lbrace A,B,C,D\rbrace$-space to pick different individual-level rates that give rise to the same rioter growth rate. To summarise, the key features of the ABM while employing spatially uniform initial conditions give rise to three main qualitative features: complete take-over by rioters, complete take-over of bystanders, or a co-existence equilibrium of both sub-populations. All three qualitative features are faithfully reproduced in the continuum limit of the ABM, which also gives rise to a systematic method of relating individual-level recruitment and defection rates to their analogous population-level counterparts. However, these individual-level rates cannot be uniquely determined if only the \textit{net} growth mechanisms of either sub-population, i.e. the net difference between recruitment and defection rates, is known. Nevertheless, the associated individual-level mechanisms can be obtained with the inclusion of a few freely-determined parameters. \subsection{Spatially-dependent initial conditions}\label{sec:SD} To incorporate spatial dependence within ABM simulations, we can employ spatially-dependent initial conditions in the ABM framework to observe how sub-population densities evolve in both space and time. This is analogous to considering situations whereby supporters of a particular sports team are grouped together and become riotous upon their team losing the game. For this spatial configuration, we consider a `block' of rioters with average population density $r_0$ centred along the street, while blocks of bystanders with average population density $b_0$ are initially on either side of the rioters: \begin{align} r_{i,j}(0)&= \begin{cases} r_0, & 91\leq i\leq 110, 1\leq j\leq 20,\\ 0, & \text{otherwise.} \end{cases}\label{eq:PDE_ICr}\\ b_{i,j}(0)&= \begin{cases} b_0, & 61\leq i\leq 80 \text{ or } 121\leq i\leq 140, 1\leq j\leq 20,\\ 0, & \text{otherwise.} \end{cases}\label{eq:PDE_ICb} \end{align} While the initial population densities $r_0, b_0$ can be set to 1, as is often chosen with spatially-dependent ABM simulations (c.f. \cite{fadaiunpackallee, multi-excl, compart-based}), we will assign the initial population densities $r_0=b_0=0.5$ for simulations shown in Figures \ref{fig:Mild2}--\ref{fig:High2}. This reduced initial population density is to prevent any local clustering from hindering recruitment or defection processes at the individual scale. Furthermore, it is unrealistic that groups of people will be packed as close as physically possible in a block, whereas cells as other populations previously considered in similar ABM simulations can easily achieve maximum population density in a given region (c.f. \cite{multi-excl, compart-based}). \begin{figure} \centering \includegraphics[width=.95\textwidth]{Figures/MildSim_SD} \caption{A single realisation of rioters (red) and bystanders (blue) in the Mild Unrest parameter regime with initial conditions listed in \eqref{eq:PDE_ICr}-- \eqref{eq:PDE_ICb}.} \label{fig:Mild2} \end{figure} \begin{figure} \centering \includegraphics[width=.95\textwidth]{Figures/MedSim_SD} \caption{A single realisation of rioters (red) and bystanders (blue) in the Moderate Unrest parameter regime with initial conditions listed in \eqref{eq:PDE_ICr}-- \eqref{eq:PDE_ICb}.} \label{fig:Med2} \end{figure} \begin{figure} \centering \includegraphics[width=.95\textwidth]{Figures/HighSim_SD} \caption{A single realisation of rioters (red) and bystanders (blue) in the Severe Unrest parameter regime with initial conditions listed in \eqref{eq:PDE_ICr}-- \eqref{eq:PDE_ICb}.} \label{fig:High2} \end{figure} \begin{figure} \centering \includegraphics[width=\textwidth]{Figures/PDEvsCL_Street2.eps} \caption{Comparison of the average ABM behaviour over 20 identically-prepared simulations, $\langle R(x,t) \rangle$ and $\langle B(x,t) \rangle$, with their continuum limit descriptions, $r(x,t)$ and $b(x,t)$. All simulations begin with the initial conditions described in \eqref{eq:PDE_ICr}--\eqref{eq:PDE_ICb} and the three parameter regimes (Mild Unrest, Moderate Unrest, and Severe Unrest) are described in \eqref{eq:Mild}--\eqref{eq:High}.} \label{fig:PDEvsCL} \end{figure} To modify the continuum limit of the ABM to incorporate spatial dependence, we follow \cite{multi-excl} to determine the effects of diffusion and motility within the continuum limit. Combined with the aforementioned recruitment and defection processes stated in Section \ref{sec:CL}, we have that the continuum limit description of the ABM is represented as a coupled PDE system for \(r(x,y,t)\) and \(b(x,y,t)\): \begin{align} \frac{\partial r}{\partial t}&=D\nabla \cdot \left[(1-b)\nabla r+r\nabla b\right]+\rho(r,b), \label{continuumr} \\ \frac{\partial b}{\partial t}&=D\nabla \cdot \left[(1-r)\nabla b+b\nabla r\right]-\rho(r,b), \label{continuumb} \end{align} where \begin{equation} D=\frac{m\Delta^{2}}{4} ~~\text{ and }~~ \rho(r,b) =b\sum_{n=0}^{4}\lambda_{rn}B_{n,4}(r) -r\sum_{n=0}^{4}\lambda_{dn}B_{n,4}(b). \label{Dsum} \end{equation} We note that, due to the reflecting boundary conditions and the initial conditions being independent of $y$, the solutions for $r$ and $b$ will also be independent of $y$ \cite{pathlines}, i.e. $r(x,y,t)=r(x,t)$ and $b(x,y,t)=b(x,t)$. Additionally, the incorporation of linear and cross-diffusion terms in the continuum limit descriptions do not affect the underlying recruitment and defection rates discussed previously. In other words, the Mild, Moderate, and Severe parameter regimes described in \eqref{eq:Mild}--\eqref{eq:High} continue to obey the continuum limit descriptions shown in \eqref{eq:CLmild}--\eqref{eq:CLhigh}. Furthermore, by combining \eqref{continuumr} with \eqref{continuumb}, we note that the total number of agents, $T=r+b$, continue to be a conserved quantity within the domain, while the evolution of agents within the domain follows the standard linear diffusion equation: \begin{equation} \frac{\partial T}{\partial t}=D\nabla^2 T. \end{equation} Finally, as discussed in \cite{multi-excl}, each sub-population density evolves according to standard linear diffusion when a single sub-population is present, whereas cross-diffusion effects play a larger role when both sub-populations are present. Numerical solutions of the PDE system \eqref{continuumr}--\eqref{Dsum}, such as those presented in Figure \ref{fig:CLvsABM}, are computed using \texttt{pdepe} in MATLAB. With reference to Figure \ref{fig:PDEvsCL}, we observe that the continuum limit of the ABM faithfully reproduces the average behaviour of ABM simulations employing spatially-dependent initial conditions. Like in the case where spatially uniform initial conditions are employed, the Mild and Severe Unrest parameter regimes evolve over faster timescales than the Moderate Unrest parameter regime, since agents in the Mild and Severe Unrest parameter regimes can undergo spontaneous defection or recruitment without the requirement of agents from the opposing sub-population to be present. \section{Conclusions} In this work, we propose a new agent-based model (ABM) that can be used to simulate individuals involved in sports riots. Unlike other forms of rioting, which are often escalated and exacerbated due to the presence of law enforcement officials, sports riots are generally initiated from within a sub-population of sports-goers. With a view to limit property damage and contain anti-social behaviour resulting from sports riots, it is essential to understand the temporal and spatial evolution of the aforementioned rioting sub-population. To provide a qualitative understanding of the rioting phenomena that can arise from simulations of sports riots, we consider an ABM with two sub-populations (rioters and bystanders), in which agents can move and change sub-population type by means of recruitment and defection mechanisms. These individual-level mechanisms vary with the local population density of the opposite sub-population and can be shown to be linked in one-to-one correspondence with prescribed \textit{global} recruitment and defection rates. Furthermore, these global continuum descriptions of the underlying individual-level agent-based mechanisms faithfully capture the average behaviour of these agent-based simulations, providing not only more tractable and understandable mathematical models of sports riots, but also the crucial links between individual-level mechanisms and population-level phenomena. There are several avenues for further consideration that stem from the modelling frameworks presented here. For instance, the ABM domain can easily be extended to incorporate additional realistic features of a city layout, including a road and sidewalk network, public transport lines, and buildings. These additional movement augmentations and hindrances will clearly affect the direction and spread of riotous activity within the city structure. Additionally, the incorporation of additional agent sub-populations, such as rival sports fans that are independently rioting, would provide additional insight into the multifaceted nature of sports riots, such as to the relative effects between property damage and violent activity from opposing fans. Another feature that can be included in this ABM framework is the destructive nature of the rioters themselves. In this work, we simply consider the location and population density of the rioter sub-population, rather than what the rioters themselves are \textit{doing}. It would be beneficial to the application of sports riots, both from a mathematical and social sciences perspective, to incorporate `targets' of riotous activity, such as rival sports fans or nearby buildings and businesses. Finally, the expansion of agent-based models into social science applications need not be contained to sports riots alone. For example, the worldwide phenomena of panic-buying amidst the COVID-19 pandemic also crucially hinges on what proportion of shoppers influence the recruitment or defection of panic-buying activity \cite{billore2021panic}. The agent-based modelling framework presented in this work is an ideal starting point in terms of incorporating further aspects characteristic of panic-buying, such as dispersion and aggregation of shoppers \cite{starke2014nonlinear, d2006self}. We leave these ABM extensions for future exploration. \subsection*{Data accessibility} All data and MATLAB algorithms used to generate results are available on Github at \url{https://github.com/nfadai/Clements2021}. \bibliographystyle{vancouver}
{ "attr-fineweb-edu": 2.675781, "attr-cc_en_topic": 0, "domain": "arxiv" }
BkiUdZE5qX_AY5k-y1ge
\section{Introduction}\label{sec:intro} Teams looking to improve their chances of winning will naturally seek to understand their performance, and also that of their opposition. From a data mining perspective, \citet{carpita2013football} and \citet{carpita2015discovering} used cluster and principal component analysis techniques in order to identify the drivers that most affect the probability to win a match. From a different perspective, nowadays, analyzing players' trajectories using spatio-temporal data is becoming cool \footnote{Apart from sports, other fields of study have encountered the need to analyze trajectories in a space-time dimension. This is, for example, the case of animal movement \citep{brillinger2004exploratory,calenge2007exploring,calenge2009concept,schwager2007robust}}. There are a number of IT systems in use that capture these data from team sports during matches. Spatio-temporal data are characterized by a sequence of samples containing the timestamp and location of some phenomena. In the team sports domain, two types of spatio-temporal data are commonly captured: object trajectories capture the movement of players or the ball; event logs record the location and time of match events, such as passes, shots at goal or fouls. The movement of players or the ball around the playing area are sampled as a timestamped sequence of location points in the plane. The trajectories are captured using optical- or device-tracking and processing systems. Optical tracking systems use fixed cameras to capture the player movement, and the images are then processed to compute the trajectories \citep{bradley2007reliability}. In other cases, device tracking systems rely on devices that infer their location, and are attached to the players' clothing or embedded in the ball. It is a common procedure to discretize the playing area into regions and assign the location points contained in the trajectory to a discretized region. A common approach is to subdivide the playing area into rectangles of equal size \citep{cervone2016multiresolution}. Understanding the interaction between players is one of the more important and complex problem in sports science. Trajectories allow to analyze movements of a single player as well as interactions of all players as a synchronized group, in order to asses the importance of such players to the team. Methods used to analyze these movements using trajectories borrow from many disciplines, such as machine learning, network and complex systems, GIS, computational geometry, computer vision and statistics. For example, the central goal of social network analysis is to capture the interactions between individuals \citep{wasserman1994social}. As a consequence, in the last decade numerous papers applied social network analysis to team sports, mainly focusing on passing networks and transition networks. Centrality techniques has been used with the aim of identifying key (or \textit{central}) players, or to estimate the interaction and the cooperation between team members \citep{passos2011networks}. Furthermore, control space is considered a key factor in the team's performance. A player dominates an area if he can reach every point in that area before anyone else. Literature makes use of the tool of the dominant region \citep{taki1996development}, which is equivalent to the \textit{Voronoi} region \citep{fortune1987sweepline}, when acceleration is constant. Another approach regards measuring the average distance of the players in the court, and its evolution over time. Many works are devoted to analyze as the space is occupied by players - when attacking and when defending - or in crucial moments of the match. We can find examples in football \citep{couceiro2014dynamical,moura2012quantitative} or in futsal \citep{fonseca2012spatial,travassos2012spatiotemporal} Another issue regards to model the evolution of football play from the trajectories of the players, which has been researched extensively, particularly in the computer vision community \citep{yue2014learning,wei2014forecasting}. For example \citet{kim2010motion} predicted the location of the ball at a point in the near future. Predefined plays are used in many team sports to achieve some specific objective: teammates who are familiar with each other's playing style may develop ad-hoc productive interactions that are used repeatedly. \citet{brillinger2007potential} addressed the question how to describe analytically the spato-temporal movement of particular sequences of passes (i.e. the last 25 passes before a score). Moreover, segmenting a match into phases is a common task in sports analysis, as it facilitates the retrieval of important phases for further analysis. \citet{perin2013soccerstories} developed a system for visual exploration of phases in football. \vline As described above, a variety of approaches and methods has already been proposed to solve different issues related to the relation between trajectories and performances in team sports. This paper provides a simple and ad-hoc strategy to visualize the spatio-temporal movement of a player as well as the synchronized movements of all players of the team. The aim is to support researchers on preliminar stages of their analysis as well as to facilitate the interpretation of results. To this scope, I propose to use motion charts. A motion chart is a dynamic bubble chart which allows efficient and interactive exploration and visualization of multivariate data. Motion charts map variables into time, 2D coordinate axes, size and colors, and facilitate the interactive display of multidimensional and temporal data. The best known motion chart, popularised by Hans Rosling in his TED talks, is probably the one provided within \texttt{googleVis} package in R, \texttt{gvisMotionChart}. This function allows the user to visualize data stored in R data frames directly in a web browser. \vline In section \ref{sec:motionchart} I introduce the \texttt{gvisMotionChart} function in \texttt{googleVis} package, and I discuss this method in relation to team sports. Section \ref{sec:case} presents a case study based on trajectories data of basketball players. In this section I empirically show how the use of \texttt{gvisMotionChart} could gives us clues for further analysis. Section \ref{sec:concl} concludes and suggests future developments. \section{Motion Charts for team sports' movements using googleVis}\label{sec:motionchart} When talking of a motion chart we can think at a dynamic bubble chart. A bubble chart is a type of chart that displays three dimensions of data. Each entity with is a triplet of associated data, which is plotted expressing two of the three values through the XY-axes and the third through its size. Bubble charts can facilitate the understanding of social, economical, medical, and other scientific relationships. Bubble charts can be considered a variation of the scatter plot, in which the data points are replaced with bubbles. Motion Chart allows efficient and interactive exploration and visualization of space-time multivariate data and provides mechanisms for mapping ordinal, nominal and quantitative variables onto time, 2D coordinate axes, size and colors which facilitate the interactive display of multidimensional and temporal data. Motion charts provide a dynamic data visualization that facilitates the representation and understanding of large and multivariate data. Using the familiar 2D Bubble charts, motion Charts enable the display of large multivariate data with thousands of data points and allow for interactive visualization of the data using additional dimensions like time, size and color, to show different characteristics of the data. The central object of a motion chart is a bubble. Bubbles are characterized by size, position and appearance. Using variable mapping, motion charts allow control over the appearance of the bubble at different time points. This mechanism enhances the dynamic appearance of the data in the motion chart and facilitates the visual inspection of associations, patterns and trends in space-time data. The \texttt{gvisMotionChart} is a function of \texttt{googleVis} package \citep{gesmann2013package} which reads a \textit{data.frame} object and creates text output referring to the Google Visualisation API. It can be included into a web page, or as a stand-alone page. The actual chart is rendered by the web browser in Flash \footnote{It does not work in all the browsers, but require Google.}. The function generates a motion chart, that is a dynamic chart which is traditionally designed to explore several indicators over time. Motion charts are intensively used and publicized by Ans Roslin trought TED. \texttt{gvisMotionChart} is used in a wide range of topics, such as students learning processes. \citet{santos2012goal} used different visualization methods available in \textit{googleVis}; \citet{hilpert2011dynamic} is an example of works where motion charts are adopted as a visual instrument in linguistic and semantic studies of the dynamic of linguistic change over the time. Motion Charts are applied different subfields of economics, for example in finance, to visualize sales data in an insurance context \citep{heinz2014practical} and for the study of inequality and income \citep{saka2015inequality}. In \citet{santori2014application} motion charts was applied to aggregated liver transplantation data. Visualization of Water Quality Sampling-Events in Florida are analyzed by means of motion charts in \citet{bolt2015visualizing}. \vline Analytically, the \texttt{gvisMotionChart} function reads as follow: \vline \texttt{gvisMotionChart(data, idvar = "id", timevar = "time", xvar = " ", yvar = " ", colorvar = " ", sizevar = " ",date.format = "Y/m/d", options = list(), chartid)} \vline where \begin{itemize} \item \texttt{data} is a data.frame object. The data has to have at least four columns with subject name (\texttt{idvar}), time (\texttt{timevar}) and two columns of numeric values. Further columns, numeric and character/factor are optional. The combination of \texttt{idvar} and \texttt{timevar} has to describe a unique row. The column names of the \texttt{idvar} and \texttt{timevar} have to be specified. Further columns, if not specified by the other arguments (\texttt{xvar, yvar, colorvar, sizevar}) will be assumed to be in the order of the arguments. \item \texttt{idvar} is a column name of data with the subject to be analysed. \item \texttt{timevar} is a column name of data which shows the time dimension. The information has to be either numeric, of class date or a character which follows the pattern 'YYYYWww' (e.g. '2010W04' for weekly data) or 'YYYYQq' (e.g. '2010Q1' for quarterly data). \item \texttt{xvar}: column name of a numerical vector in data to be plotted on the x-axis. \item \texttt{yvar}: column name of a numerical vector in data to be plotted on the y-axis. \item \texttt{colorvar}: column name of data that identifies bubbles in the same series. Use the same value to identify all bubbles that belong to the same series; bubbles in the same series will be assigned the same color. Series can be configured using the \texttt{series} option. \item \texttt{sizevar}: values in this column are mapped to actual pixel values using the sizeAxis. \item \texttt{options}: list of configuration options for Google motion chart. The options are documented in detail by Google online, \end{itemize} \vline Now, I contextualize the use of \texttt{gvisMotionChart} for the teams sports' movement. Let suppose having data about a number of players: in our \textit{data.frame} object we should have a variable that uniquely identifies these players. This is the \texttt{idvar} variable. Our \textit{data.frame} should also contains a variable uniquely identifying the time dimension in which players' movements are tracked; this is the \textit{timevar} variable. A record in the \textit{data.frame} should be uniquely identified by the combination of \textit{idvar} and \textit{timevar}. Moreover, our \textit{data.frame} should contains two additional variables containing the input for the x-axis and the y-axis. For the x-axis we have the position of the player in (let say) the court length and for the y-axis the position in (let say) the court width. These are, respectively, \texttt{xvar} and \texttt{yvar}. \begin{figure}[!htb] \centering \includegraphics[width=0.65\textheight]{mchart1.jpg} \caption{Setting the Motion Chart via html - 1} \label{mchart1} \end{figure} \begin{figure}[!htb] \centering \includegraphics[width=0.65\textheight]{mchart2.jpg} \caption{Setting the Motion Chart via html - 2} \label{mchart2} \end{figure} The \texttt{plot} function can be used as to represent the \texttt{googleVis} motion chart using browser. \texttt{Options} command can be used to define the court's dimension. By default, \texttt{gvisMotionChart} displays a squared chart (i.e. same length and width). With \texttt{Options} we can transform the court to be rectangular and with the right proportions. Summarizing, a \textit{data.frame} containing the four variables above described permits to visualize the dynamic of more than one player together: Different players can be reported with different (unique) colors (please see the middle chart in figure \ref{mchart1}). Other variables should be supplied to the function in order to, for example, set the bubbles' dimension (see the bottom chart in figure \ref{mchart1}): when having available the x-axis and y-axis coordinates in successive moments of time, it is easy to compute the speed. A speed variable should be inputed to characterize the bubbles' dimension. It is possible to visualize the movement of one or more players together by ticking them in the appropriate box in the browser (top chart of figure \ref{mchart2}). In the same vein, we can activate players' trails: it will leave a line in the chart as bubbles play over time (middle chart of figure \ref{mchart2}). Finally, it is possible to set the speed by regulating the \textit{playback speed} key (please refer to bottom chart of figure \ref{mchart2}). \section{Case Study} \label{sec:case} Basketball is a sport generally played by two teams of five players each on a rectangular court. The objective is to shoot a ball through a hoop 18 inches (46 cm) in diameter and mounted at a height of 10 feet (3.05 m) to backboards at each end of the court. This sport was invented in Springfield (Canada) in 1891 by Dr. James Naismith. Rules of european basketball from FIBA (www.fiba.com) differs from the rules of the United States first league, National Basketball Association (NBA). The match lasts 40 minutes, divided in four periods of 10 minutes each. There is a 2 minutes break after the first quarter and after the third quarter of the match. After the first half, there are 15 minutes of half-time break. This case study refers to a friendly match played on march 22th, 2016 by a team based in the city of Pavia (near Milano) called \textit{Winterass Omnia Basket Pavia}. This team, in the season 2015-2016, played in the "C gold" league, the fourth league in Italy. This league is organized in 8 divisions in which teams geographically close by play together. Each division is composed by 14 teams that play twice with every other team of the same division (one as a guest and one as a host team) for a total of 26 games in the regular season. At the end of the regular season, the top 8 teams in the final rank play a post season (also called "playoff") that serves as to declare the winning team as well as to determine the team that goes to the upper league in the next season. \subsection{Dataset description} On march 22th, 2016, six \textit{Winterass} players took part of the friendly match. All that players have worn a microchip in their neck. The microchip tracks their movements in the court. The court length measures 28 meters while the court width equals to 15 meters. The system collects the position (in pixels of 1 $m^2$) in both the two axis (respectively x-axis and y-axis), as well as in the z-axis (i.e. how much that player jumps). The positioning of the players has been detected for a huge amount of close instants of time measured in milliseconds. Considering all the six players, the system recorded a total of 133,662 space-time observations. More in detail, a list of collected data follows\footnote{A reduced version of the full dataset is available upon request.}: \begin{itemize} \item \textbf{id}: this is a ID variable that is unique for each record in the dataset. \item Both \textbf{insert\_date} and \textbf{position\_ts} reports the date (dd/mm/aaa) and the time (hh:mm) of the detection. \item The column \textbf{tagid} uniquely identifies the player. In the dataset, 6 different ID are present, associated to each of the 6 players. \item \textbf{timestamp\_ms\_ok} reports the timestamp of the observation, in terms of milliseconds. \item \textbf{smt\_x},\textbf{ smt\_y}, \textbf{smt\_z} reports the non filtered values for the x-axis, y-axis and z-axis. \item \textbf{klm\_x}, \textbf{klm\_y}, \textbf{klm\_z} instead, are the values for the x-axis, y-axis and z-axis, filtered with a Kalman approach. \item \textbf{klv\_x}, \textbf{klv\_y}, \textbf{klv\_z} reports the speed along, respectively, the x-axis, the y-axis and the z-axis, based on filtered data described above. \item \textbf{tagid\_new} reports the same info of \textbf{tagid}, but here players are identified as 1, 2, ... , 6. \item \textbf{time} is a ID variable for the time dimension (i.e. the first record in terms of time is marked with a 1, the second record in terms of time with 2, etc...) \item \textbf{speed.mtr.sec}: It is a \textit{raw} measure of speed (in \textit{m/s}) of each player in each moment of time. \end{itemize} With regards to the same match, a play-by-play dataset is also available. The play by play accounts for the actions of an event. In details, I have recorder the following variables: \begin{itemize} \item \textbf{timestamp}: this variable reports the date (dd/mm/aaa) and the time (hh:mm) of the detection. \item \textbf{action} is a string variable that reports the type of action (for example "two shot made", "rebound", ...). \item \textbf{name}: The first name of the player to which the action is associated. \item \textbf{surname}: The surname of the player to which the action is associated. \item \textbf{x\_coord} and \textbf{y\_coord} report the x-axis and the y-axis (expressed in values from -100 to 100). The coordinate (0,0) is the center of the court. \end{itemize} Both the movements data and the play-by-play were kindly provided by MYagonism (\url{https://www.myagonism.com/}). Unfortunately, the finest disaggregation level of the time (minutes) in the play-by-play does not permit to properly match the play-by-play with the movements data. So, in the following, the most part of the analysis only make use of the informations coming from the movements' dataset. \subsection{Descriptive statistics} Six player took turns in the court. The total number of records equals to 133,662. Using the klm\_y and the klm\_z variables as the x- and y-axis and looking to the players' position in the court, I drop the pre-match, the half-time break and the post match periods from the full dataset (please refer to appendix \ref{AppA}). I end up with a total of 106,096 observations. Having available the timestamp variable, I found that the match lasts for 3,971,180 milliseconds, which equals to about 66 minutes. This also mean that, in average, the system collects positions about 37 times every second (3,971,180 / 106,096). Considering that 6 players are in the court at the same time, the position of each single player is collected, in average, 6.2 times every second (in other words, the position of each player is collected, in average, every 162 milliseconds). \vline Inspecting the data, I found that, out of the total of 106,096 observations, 17379 report the position of \textit{player 1}, 16708 report the position of \textit{player 2}, 15702 belong to \textit{player 3} while 18573 belong to \textit{player 4}. Moreover, \textit{player 5}'s and \textit{player 6}'s positions are collected, respectively, 18668 and 19066 times. This does not mean that the last three players remained in the court more than the others. Table \ref{summary} reports the summary statistics of the variables contained in the \textit{data.frame}. Min/max, mean and relevant quartiles are reported. It emerged that players moves inside the area covering all the 1 m$^2$ cells in which the court has been divided; more in detail, players also stay the cells related to the bench area (when smt\_y and klm\_y reports negative values) as well as outside the court (it happen when the y-axis values are outside the interval [0,15] and when the x-axis values are outside the interval [0,28]) I note that filtered (with kalman) and not filtered coordinates are really close each other. The average x-axis value (length) equal to 12.67 and the average y-axis (width) to 6.29. Roughly the same when considering the filtered values. The z-axis (i.e. the height) values range from 0 to 3, meaning that the maximum height that a player reach is around three meters. Speed (expressed in m/s) reports a mean of 1.89. \begin{table}[htbp] \centering \caption{Summary statistics for the relevant variables in the dataset} \small \begin{tabular}{l|rrrrrrrrrr} & smt\_z & smt\_x & smt\_y & klm\_z & klm\_x & klm\_y & klv\_z & klv\_x & klv\_y & speed(m/s) \\ \hline Min. & 0.00 & -2.00 &-2.00& 0.00 &-2.00& -2.00& 0.00 &-5.00 & -4.00& 0.00 \\ 1st Qu. & 0.00 & 5.00& 4.00& 0.00 &5.00& 4.00& 0.00 &0.00& 0.00 & 0.00 \\ median & 0.00 & 11.00& 7.00& 0.00 &11.00& 7.00 &0.00 &0.00& 0.00 & 0.00 \\ mean & 0.09 & 12.67& 6.29& 0.00 &12.68& 6.30& 0.08& 0.01& 0.00 & 1.89 \\ 3rd Qu. & 0.00& 21.00& 9.00& 0.00& 21.00 &9.00& 0.00 &0.00& 0.00 & 4.29 \\ Max. & 3.00& 29.00& 16.00& 4.00 &30.00& 17.00& 3.00& 6.00& 4.00 &78.57\\ \end{tabular} \label{summary} \end{table} \subsection{Heatmaps} I split the dataset in six smaller datasets each one referring to the location of one of the six players (please refer to appendix \ref{AppB}). I subdivide the playing area into squares of equal size (1 $m^2$) and I count the number of time a player lies in that square. To do this, I create six non-squared matrices of dimension 15 x 28, as in appendix \ref{AppB}. Each cell of each matrix contains the count of times that a certain player was in the related square. Then, using these matrices, I draw the heatmaps using \texttt{heatmap} function within \texttt{stats} package in R. Heatmaps are reported in figure \ref{heat}. In the figures, the length of the court is reported as the x-axis and the court's width is reported as the y-axis. Colors range from white (lowest intensity, i.e. the player rarely locates in that cell) to red (higest intensity, i.e. the player often locates in that cell) while intermediate intensities are marked with a yellow color. By the comparison of the heatmaps is possible to see some differences in the preferred location of each player (figure \ref{heat}). Heatmaps of player 1 and player 2 are similar, in the sense that both players tend to prefer areas close to the basket\footnote{The basket is positioned at the coordinate (1,8).}. Players 4 and 6 show a different locational pattern: their heatmap are less concentrated close to the basket and present an higher level of heterogeneity (i.e. we can also find red cells far away from the basket). Heatmaps of player 3 and player 5 present a red cell close to the bench: it means these two players passed lot of time on the bench. A Kernel approach is also used here (please refer to appendix \ref{AppC} for codes). A Kernel density estimation (KDE) is a non-parametric way to estimate the probability density function of a random variable. In other words, I replace the exact count of times the players lie in a cell with an estimation of it. I change the rectangular area from a collection of marked cells to a continuous space in which every single point in the area have a certain estimated value. Heatmaps generated from the KDE values are reported in figure \ref{heat_kernell}. As for the charts in figure \ref{heat}, the length of the court is reported as the x-axis and the court's width is reported as the y-axis, while colors range from white (lowest density) to red (higest density). These figures provide similar results to previous ones. \begin{figure}[!htb] \centering \includegraphics[width=0.31\textheight]{heat1} \includegraphics[width=0.31\textheight]{heat2} \includegraphics[width=0.31\textheight]{heat3} \includegraphics[width=0.31\textheight]{heat4} \includegraphics[width=0.31\textheight]{heat5} \includegraphics[width=0.31\textheight]{heat6} \caption{Heatmaps for the six players, in comparison.} \label{heat} \end{figure} \begin{figure}[!htb] \centering \includegraphics[width=0.34\textheight]{heat1_kernel} \includegraphics[width=0.34\textheight]{heat2_kernel} \includegraphics[width=0.34\textheight]{heat3_kernel} \includegraphics[width=0.34\textheight]{heat4_kernel} \includegraphics[width=0.34\textheight]{heat5_kernel} \includegraphics[width=0.34\textheight]{heat6_kernel} \caption{Heatmaps for the six players, in comparison.} \label{heat_kernell} \end{figure} \subsection{Motion Charts} Despite the heatmaps give some hints about the pattern of positioning of players, no conclusions can be done about their dynamic over time and about the interaction among them. The heatmaps completely disregard the time dimension: it is not possible to say something about the location of a certain player in a specific moment or to examine their trajectories. Moreover, heatmaps do not shed light on the interactions among players. With motion charts we can account for the time dimension and trace the trajectories of players. This tool allows to analyze the movement of a single player over time as well as the interaction of all the players together (please, refer to Appendix \ref{AppD} to find codes to reproduce the chart). A video showing how motion charts works in our dataset can be found at the link: \url{http://bodai.unibs.it/BDSports/Ricerca Top chart of figure \ref{trail} reports an example of motion chart trail of player 4 during an offensive play. In this example, the player starts from the bottom right (defensive) region of the court and he moves straight to the left part (the offensive region). Subsequently, he moves first to the bottom and then close to its basket. Player 4 ends its play moving a couple of meters far away from the basket. A similar analysis could be done by plotting in the same chart the trails of all the five players together (bottom chart of figure \ref{trail}) in order to highlight the interaction among them. \begin{figure}[!htb] \centering \includegraphics[width=0.65\textheight]{trail} \includegraphics[width=0.65\textheight]{trail_tot} \caption{Motion Chart. Example of trail for the movements of player4 (top) and all the five players together (bottom) during an offensive action.} \label{trail} \end{figure} Motion charts also allow to analyze the interactions in terms of spacing (i.e. the relative position of a players in terms of the position of the others). A correct spacing could affect the performance of the team. Moreover, to different schemes could correspond different spacing structures. Figure \ref{MV_A/D} reports a typical spacing structure during an offensive play (top) and a typical spacing structure during an defensive play (bottom). It immediately emerges that there is much distance among players during an offensive action. This is not the case of a typical defensive play: in this case, players are really close by. This makes sense. In fact, in attack, players must be well spaced to effectively play their schemes, move the ball and let the player free to shoot. Conversely, in defense, players may want to play such that the movement of the ball of the opposing team is prevented. In order to do so, players should be positioned close to each other. \begin{figure}[!htb] \centering \includegraphics[width=0.65\textheight]{attack} \includegraphics[width=0.65\textheight]{defence} \caption{A typical spacing structure for an offensive play (top) and a Typical spacing structure for a defensive play (bottom)} \label{MV_A/D} \end{figure} \subsection{Further evidences} The fact that players are more close each other in defensive actions motivate us to further analyze the interaction of players by mean of distances and Voronoi areas. I compute the average distance for every moment of the match by computing the distance among every pair of players and then by computing the mean of such a distances. Larger is the distance, more spread around the court are the five players. The Voronoi area is computed as the sum of the five Voronoi areas related to each of the five players in the court. Bigger is this value, bigger is the area that players dominate (i.e. that can reach before the players of the opposite team). Then I determine, for each moment, whether it was attack or defense, by computing the xy-axes centroid of the location of the five players in the court. Table \ref{A_D} reports the average distance and the Voronoi area and confirm what the motion chart expressed visually: players are much spread around the court during offensive plays, while they concentrate in closer points when defending. \begin{table}[htbp] \centering \caption{Mean Voronoi area and mean average distance in attack and defense.} \small \begin{tabular}{l|rr} & Voronoi area ($m^2$) & Avg. distance (m) \\ \hline Defense & 28.47 & 5.68 \\ Attack & 42.59 & 7.25 \\ \end{tabular} \label{A_D} \end{table} To answer the question if different quintets plays in a different way, and, in doing so, they are more (less) spread in the court, I compute the Voronoi area and the average distance for each possible quintet. There are six quintets when six players rotate: the first is composed by all the players except the player 1, the second quintet is composed by all the players except the player 2, and so on and so forth. Table \ref{quint} reports the Voronoi area and the average distance for each quintet, splitting by attack and defense. Once more, results show that, when attacking, the team presents a bigger Voronoi area and a smaller average distance among players. When the player 1 is on the bench, both the Voronoi area and the average distance are larger in attacking actions. In defense, the quintets having player 4 or player 5 on the bench play more closer each other; this is confirmed both by the Voronoi area and by the average distance. \begin{table}[htbp] \centering \caption{Mean Voronoi area and mean average distance for different quintets.} \small \begin{tabular}{l|rr|rr} & Attack & & Defense & \\ Quintet & Voronoi area & Avg. distance & Voronoi area & Avg. distance \\ \hline 1 & 47.54 & 7.70 & 30.52 & 5.85 \\ 2 & 41.09 & 7.09 & 28.97 & 5.76 \\ 3 & 42.76 & 7.29 & 29.14 & 5.57 \\ 4 & 39.69& 7.13 & 23.00 & 5.03 \\ 5 & 41.46 & 7.04 & 22.99 & 4.99 \\ 6 & 43.03 & 7.15 & 31.22 & 6.22 \\ \end{tabular} \label{quint} \end{table} Another aspect is to analyze the relation between the distance between players and the team performance. In other words, we want to ask the question whether to play more spread around the court is positively correlated with a good shooting percentage. I match our movements \textit{data.frame} with the play by play and I associate to each minute the two point and three point shoots percentage. Doing this match, I can find the average distance and the Voronoi area in different periods of te match. In detail, I split the match in minutes where the shooting percentage was 0\%, 25\%, 33\%, 50\%, 67\&, 100\%. However, results (Table \ref{perc}) do not show significant differences among groups. \begin{table}[htbp] \centering \caption{Mean Voronoi area and mean average distance in attack, for different moments of the match.} \small \begin{tabular}{l|rr} \% & Voronoi area ($m^2$) & Avg. distance (m) \\ \hline 0 & 40.90 & 7.18 \\ 25 & 48.95 & 7.48 \\ 33 & 46.05 & 7.38 \\ 50 & 46.98 & 7.35 \\ 66 & 46.90 & 7.71 \\ 100 & 40.84 & 6.99 \\ \end{tabular} \label{perc} \end{table} \section{Conclusions} \label{sec:concl} There is a variety of methods and models used in the field of spatio-temporal analysis for team sports that borrow from many research communities, including machine learning, network science, GIS, computational geometry, computer vision, complex systems science and statistics. Analyzing the relation between the team performance and the players' trajectories is a tricky task. To choose the ad-hoc methodology is of vital importance and the availability of a visualization method that guides researchers on this choice it's urgent. In this paper I show that \texttt{MotionChart} function from \texttt{googleVis} package in R is a useful tool for visualizing trajectories. The chart properly shows, in time motion, the synchronized trajectories of more then one player on the same 2-dimensional chart. I recommend the use of the motion chart that could be useful in supporting researcher on preliminar stages of their analysis and to facilitate the interpretation of their related results. With a case study based on basketball player's movements, I show how the tool of the motion charts suggest the presence of interaction among players as well as specific patterns of movements. Guided by these evidences, I have computed Voronoi areas and distances among players for offensive and defensive actions, and for different quintets in the court. Evidences suggested by the motion charts have been confirmed. Future developments aim to adopt spatial statistics and spatial econometrics techniques applied to trajectory analysis \cite{brillinger2008modelling}, such as bivariate K-function method \cite{arbia2008class}. Adapting these techniques to team sports will contribute in this field by better characterizing specific patterns of players' movements. \section*{Acknowledgments} Research carried out in collaboration with the Big\&Open Data Innovation Laboratory (BODaI-Lab), University of Brescia (project nr. 03-2016, title \textit{Big Data Analytics in Sports}, \url{www.bodai.unibs.it/BDSports/}), granted by Fondazione Cariplo and Regione Lombardia. I would like to thanks Paola Zuccolotto and Marica Manisera (University of Brescia) for giving me suggestions during the preparation of this paper. Furthermore, I thanks Tullio Facchinetti and Federico Bianchi (University of Pavia) for having helped me with the data interpretation.
{ "attr-fineweb-edu": 2.376953, "attr-cc_en_topic": 0, "domain": "arxiv" }
BkiUeXnxK6EuM_Ubr5Rv
\section{Introduction} Sports provide a rich laboratory in which to study competitive behavior in a well-defined way. The goals of sports competitions are simple, the rules are well defined, and the results are easily quantifiable. With the recent availability of high-quality data for a broad range of performance metrics in many sports (see, for example, \url{shrpsports.com}), it is now possible to address questions about measurable aspects of sports competitions that were inaccessible only a few years ago. Accompanying this wealth of new data is a rapidly growing body of literature, both for scientific and lay audiences, on quantitative modeling and analysis of sports statistics (for general references, see, e.g., \cite{M97,ABC,KOPR07,AK08,GE09,AM11}). In this spirit, our investigation is motivated by the following simple question: can basketball scoring be described by a random walk? To answer this question we analyze play-by-play data for four seasons of all National Basketball Association (NBA) games. Our analysis indicates that a simple random-walk model successfully captures many features of the observed scoring patterns. We focus on basketball primarily because there are many points scored per game --- roughly 100 scoring events in a 48-minute game --- and also many games in a season. The large number of scoring events allows us to perform a meaningful statistical analysis. Our random walk picture addresses the question of whether sports performance metrics are determined by memory-less stochastic processes or by processes with long-time correlations (\cite{GVT85,MW91,G96,DC00,EG08}). To the untrained eye, streaks or slumps --- namely, sustained periods of superior or inferior performances --- seem so unusual that they ought to have exceptional explanations. This impression is at odds with the data, however. Impartial analysis of individual player data in basketball has discredited the notion of a `hot hand' (\cite{GVT85, AF04}). Rather, a player's shooting percentage is independent of past performance, so that apparent streaks or slumps are simply a consequence of a series of random uncorrelated scoring events. Similarly, in baseball, teams do not get `hot' or `cold' (\cite{V00,SR09}); instead, the functional forms of winning and losing streak distributions arise from random statistical fluctuations. In this work, we focus on the statistical properties of scoring during each basketball game. The scoring data are consistent with the scoring rate being described by a continuous-time Poisson process. Consequently, apparent scoring bursts or scoring droughts arise from the Poisson statistics rather than from a temporally correlated process. Our main hypothesis is that the evolution of the score difference between two competing teams can be accounted by a continuous-time random walk. This idealized picture of random scoring has to be augmented by two features --- one that may be ubiquitous and one idiosyncratic to basketball. The former is the existence of a weak linear restoring force, in which the leading team scores at a slightly lower rate (conversely, the losing team scores at a slightly higher rate). This restoring force seems to be a natural human response to an unbalanced game --- a team with a large lead may be tempted to coast, while a lagging team likely plays with greater urgency. A similar ``rich get poorer'' and ``poor get richer'' phenomenon was found in economic competitions where each interaction has low decisiveness (\cite{DHS98, GS07}). Such a low payoff typifies basketball, where the result of any single play is unlikely to determine the outcome of the game. The second feature, idiosyncratic to basketball, is \emph{anti-persistence}, in which a score by one team is more likely to be followed by a score from the opponent because of the change in ball possession after each score. By incorporating these attributes into a continuous-time random-walk description of scoring, we build a computational model for basketball games that reproduces many statistical features of basketball scoring and team win/loss records. \section{Scoring Rate} Basketball is played between two teams with five players each. Points are scored by making baskets that are each worth 2 points (typically) or 3 points. Additional single-point baskets can occur by foul shots that are awarded after a physical or technical foul. The number of successive foul shots is typically 1 or 2, but more can occur. The duration of a game is $48$ minutes (2880 seconds). Games are divided into four 12-minute quarters, with stoppage of play at the end of each quarter. The flow of the game is ostensibly continuous, but play does stop for fouls, time-outs, and out-of-bounds calls. An important feature that sets the time scale of scoring is the 24-second clock. In the NBA, a team must either attempt a shot that hits the rim or score within 24 seconds of gaining possession of the ball, or else possession is forfeited to the opposing team. At the end of the game, the team with the most points wins. We analyze play-by-play data from 6087 NBA games for the 2006/07-- 2009/10 seasons, including playoff games (see \url{www.basketballvalue.com}); for win/loss records we use a larger dataset for 20 NBA seasons (\url{www.shrpsports.com}). To simplify our analysis, we consider scoring only until the end of regulation time. Thus every game is exactly 48 minutes long and some games end in ties. We omit overtime to avoid the complications of games of different durations and the possibility that scoring patterns during overtime could be different from those during regulation time. We focus on what we term \emph{scoring plays}, rather than individual baskets. A scoring play includes any number of baskets that are made with no time elapsed between them on the game clock. For example, a 2-point play could be a single field goal or two consecutive successful foul shots; a 3-point play could be a normal field goal that is immediately followed by a successful foul shot, or a single successful shot from outside the 3-point line. High-value plays of 5 and 6 points involve multiple technical or flagrant fouls. Since they have negligible probability of occurence (Table~\ref{scoreProb}), we will ignore them in our analysis. Consistent with our focus on scoring plays, we define the scoring rate as the number of scoring plays per second. This quantity is measured for each second of the game. For the 4 seasons of data, the average scoring rate is roughly constant over the course of a game, with mean value of $0.03291$ plays/sec (Fig.~\ref{scoreRate}). Averaging each quarter separately gives a scoring rate of 0.03314, 0.03313, 0.03243, and 0.03261 for first through fourth quarters, respectively. The scoring rate corresponds to 94.78 successful plays per game. Since there is, on average, 2.0894 points scored per play, each team has 99.018 points in an average game (\cite{graph}). Parenthetically, the average scoring rate is constant from season to season, and equals 0.03266, 0.03299, 0.03284, 0.03315 for the 2006--07 to the 2009--10 seasons. \begin{table}[htb] \center{\mbox{ \begin{tabular}{|l|l|} \hline Points per Basket & Percentage \\ \hline 1 pt. & 33.9\% \\ 2 pts. & 54.6\% \\ 3 pts. & 11.5\% \\ \hline \end{tabular} \quad\quad\quad \begin{tabular}{|l|l|} \hline Points per Play & Percentage \\ \hline 1 pt. & 8.70\% \\ 2 pts. & 73.86\% \\ 3 pts. & 17.28\% \\ 4 pts. & 0.14\% \\ 5 pts. & 0.023\% \\ 6 pts. & 0.0012\% \\ \hline \end{tabular} }} \caption{ Point values of each basket (left) and each play (right) and their respective percentages. } \label{scoreProb} \end{table} \begin{figure}[htb] \begin{center} \includegraphics[width=0.46\textwidth]{Rate} \quad \includegraphics[width=0.46\textwidth]{endQuarters} \caption{(a) Average scoring rate as a function of time over all games in our dataset. (b) Rate near the change of each quarter; zero on the abscissa corresponds to the start/end of a quarter.} \label{scoreRate} \end{center} \end{figure} Curiously, significant deviations to the constant scoring rate occur near the start and end of each quarter (Fig.~\ref{scoreRate}(a)). During roughly the first 10 seconds of each quarter, scoring is unlikely because of a natural minimum time to make a basket after the initiation of play. Near the end of each of the first three quarters, the scoring rate first decreases and then sharply increases right at the end of the quarter. This anomaly arises because, within the last 24 seconds of the quarter, teams may intentionally delay their final shot until the last moment, so that the opponent has no chance for another shot before the quarter ends. However, there is only an increase in the scoring rate before the end of the game, possibly because of the urgent effort of a losing team in attempting to mount a last-minute comeback via intentional fouls. While these deviations from a constant scoring rate are visually prominent, they occur over a small time range near the end of each quarter. For the rest of our analysis, we ignore these end-of-quarter anomalies and assume that scoring in basketball is temporally homogeneous. In addition to temporal homogeneity, the data suggest that scoring frequency obeys a Poisson-like process, with little memory between successive scores (see also~\cite{SG11}). To illustrate this property, we study the probability $P(t)$ of time intervals between successive scoring plays. There are two natural such time intervals: (a) the interval $t_{\rm e}$ between successive scores of either team, and (b) the interval $t_{\rm s}$ between successive scores of the same team. The probability $P(t_{\rm e})$ has a peak at roughly 16 seconds, which evidently is determined by the 24-second shot clock. This probability distribution decays exponentially in time over nearly the entire range of data (Fig.~\ref{intervals}). Essentially the same behavior arises for $P(t_{\rm s})$, except that the time scale is larger by an obvious factor of 2. When all the same-team time intervals are divided by 2, the distributions $P(t_{\rm e})$ and $P(t_{\rm s})$ overlap substantially. The long-time tails of both $P(t_{\rm e})$ and $2P(t_{\rm s}/2)$ are proportional to the exponential function $\exp(-\lambda_{\rm tail}t)$, with rate $\lambda_{\rm tail}=0.048$ plays/sec. This value is larger than the actual scoring rate of 0.03291 plays/sec because scoring intervals of less than 10 seconds are common for the exponential distribution but are rare in real basketball games. Amusingly, the longest time interval in the dataset for which neither team scored was 402 seconds, while the longest interval for which a single team did not score was 685 seconds. \begin{figure}[ht] \begin{center} \includegraphics[width=0.46\textwidth]{intervals_either} \includegraphics[width=0.46\textwidth]{intervals_same} \caption{Probability distributions of time intervals between successive scores for either team, $P(t_e)$ vs.\ $t_{\rm e}$ (a), and for the same team, $P(t_{\rm s})$ vs.\ $t_{\rm s}$ (b). The line is the least-squares linear fit of $\ln(P)$ vs.\ $t$ over the range $t_{\rm e}>30$ sec and $t_{\rm s}>60$ sec and corresponds to a decay rate $\lambda_{\rm tail}=0.048$ and 0.024, respectively.} \label{intervals} \end{center} \end{figure} It is instructive to compare the distribution of total score in a single game to that of a Poisson process. Under the assumption that scores occur at the empirically-observed rate of $\lambda=0.03291$ plays/sec, the probability that a game has $k$ scoring plays is given by the Poisson distribution, $\mathrm{Prob}({\rm \#~plays}=k)=\frac{1}{k!}(\lambda T)^ke\,^{-\lambda T}$, where $T=2880$ sec.\ is the game duration. Since the average score of each play is $\overline{s} =2.0894$ points, a game that contains $k$ scoring plays will have a total score of approximately $S=\overline{s}k$. By changing variables from $k$ to $S$ in the above Poisson distribution, the probability that a game has a total score $S$ is \begin{equation} \label{gamma} \mathrm{Prob}({\rm score}=S)= \frac{1}{\overline{s}}\frac{(\lambda T)^{S/\overline{s}}\, e^{-\lambda T}}{(S/\overline{s})!}. \end{equation} This probability agrees reasonably with game data (Fig.~\ref{totalScore}), considering that \eqref{gamma} is derived using only the mean scoring rate and mean points per play. By including the different point values for each play, the resulting score distribution would broaden. Furthermore, if we impose a cutoff in the probability of short scoring intervals (see Fig.~\ref{intervals}) the total score distribution of Fig.~\ref{totalScore} would shift slightly left which would bring the model prediction closer to the data. \begin{figure}[ht] \begin{center} \includegraphics[width=0.6\textwidth]{totalScore} \caption{Probability $\mathrm{Prob}({\rm score}=S)$ for a total score $S$ in a single game. Circles are the data, and the solid curve is the Poisson distribution \eqref{gamma}. } \label{totalScore} \end{center} \end{figure} An important aspect of the time intervals between successive scoring events is that they are weakly correlated. To illustrate this feature, we take the time-ordered list of successive scoring intervals $t_1, t_2, t_3, \ldots$, for all games and compute the n-lag correlation function (\cite{BJ76}) \begin{equation} \label{corr} C(n)\equiv \frac{\sum_k (t_k-\overline{t})(t_{k+n}-\overline{t})}{\sum_k (t_k-\overline{t})^2}~. \end{equation} Thus $n=1$ gives the correlation between the time intervals between successive scores, $n=2$ to second-neighbor score intervals, etc. For both the intervals $t_{\rm e}$ (independent of which team scored) and $t_{\rm s}$ (single team), we find that $C(n)<0.03$ for $n\geq 1$. Thus there is little correlation between scoring events, suggesting that basketball scoring is a nearly memory-less process. Accordingly, scoring bursts or scoring droughts are nothing more than manifestations of the fluctuations inherent in a Poisson process of random and temporally homogeneous scoring events. \section{Random-Walk Description of Scoring} We now turn to the question of \emph{which} team scores in each play to build a random-walk description of scoring dynamics. After a given team scores, possession of the ball reverts to the opponent. This change of possession confers a significant disadvantage for a team to score twice in succession. On average, immediately after a score, the same team scores again with probability $q=0.348$, while the opponent scores with probability $0.652$. This tendency for alternating scores is characteristic of an \emph{anti-persistent\/} random walk (\cite{G07}), in which a step in a given direction is more likely to be followed by a step in the opposite direction. As we now discuss, this anti-persistence is a determining factor in the streak-length distribution. A streak of length $s$ occurs when a team scores a total of $s$ consecutive points before the opposing team scores. We define $Q(s)$ as the probability for a streak to have length $s$. To estimate this streak-length probability, note that since $\overline{s}=2.0894$ points are scored, on average, in a single play, a scoring streak of $s$ points corresponds to $s/\overline{s}$ consecutive scoring plays. In terms of an anti-persistent random walk, the probability $Q(s)$ for a scoring streak of $s$ points is $Q(s)=Aq^{s/\overline{s}}$ where $A=q^{-1/\overline{s}}-1$ is the normalization constant. This simple form reproduces the observed exponentially decaying probability of scoring streaks reasonably accurately (Fig.~\ref{streaks}). \begin{figure}[ht] \begin{center} \includegraphics[width=0.6\textwidth]{streaks} \caption{Probability $Q(s)$ for a consecutive point streak of $s$ points ($\circ$). The dashed line corresponds to $Q(s)=Aq^{s/\overline{s}}$, with $q=0.348$ and $A$ the normalization constant. The solid line corresponds to a refined model that incorporates the different probabilities of 1, 2, 3, and 4-point plays (see Eqs.~\eqref{lowProb} and \eqref{recursive}). } \label{streaks} \end{center} \end{figure} However, we can do better by constructing a refined model that incorporates the different probabilities for 1, 2, 3, and 4 point plays. Let $w_\alpha$ be the probability that a play is worth $\alpha$ points (Table~\ref{scoreProb}) and let $v_m$ be the value of the $m^{\rm th}$ play in a streak. A scoring sequence $\{v_1,\ldots\,v_n\}$ that results in $s$ points must satisfy the constraint $\sum_{k=1}^n v_k=s$, where $n$ is the number of plays in the sequence. The probability for this streak is given by $\prod_{k=1}^n w_{v_k}$. Because a streak of length $s$ points involves a variable number of plays, the total probability for a streak of $s$ points is \begin{equation} Q(s)=\sum_{n=1}^\infty \left[ q^{n-1}(1-q) \sum_{\{v_k\}}\left(\prod_{k=1}^n w_{v_k}\right)\right] \,, \label{generalPs} \end{equation} Here the inner sum is over all allowed sequences $\{v_k\}$ of $n$ consecutive point-scoring events, and the factor $q^{n-1}(1-q)$ gives the probability for a streak of exactly $n$ plays. For example, the probabilities for streaks up to $s=4$ are: \begin{align} \label{lowProb} \begin{split} Q(1) &= (1-q)w_1 \\ Q(2) &= (1-q)[w_2 + qw_1^2] \\ Q(3) &= (1-q)[w_3 + 2qw_2w_1 + q^2w_1^3] \\ Q(4) &= (1-q)[w_4 + q(2w_3w_1+w_2^2) + 3q^2w_2w_1^2 + q^3w_1^4]. \end{split} \end{align} A direct calculation of these probabilities for general $s$ becomes tedious for large $s$, but we can calculate them recursively for $s>4$. To do so, we decompose a streak of $s$ points as a streak of $s-v_n$ points, followed by a single play that of $v_n$ points. The probability of such a play is $qw_{v_n}$. Because the last play can be worth 1, 2, 3, or 4 points, the probability for a streak of length $s$ is given recursively by \begin{equation} \label{recursive} Q(s) = q[w_1Q(s-1) + w_2Q(s-2) + w_3Q(s-3) + w_4Q(s-4)]. \end{equation} Using Eqs.~\eqref{lowProb} and \eqref{recursive}, we may calculate $Q(s)$ numerically for any $s$. The resulting probabilities closely match the empirical data (Fig.~\ref{streaks}), suggesting that streaks arise only from random statistical fluctuations and not from teams or individuals getting hot or cold. Another intriguing feature of basketball games is that the scoring probability at any point in the game is affected by the current score: the probability that the winning team scores decreases systematically with its lead size; conversely, the probability that the losing team scores increases systematically with its deficit size (Fig.~\ref{Pvsd}). This effect is well-fit by a linear dependence of the bias on the lead (or deficit) size. (Such a linear restoring force on a random walk is known in the physics literature as the Ornstein-Uhlenbeck model (\cite{UO30}). For basketball, the magnitude of the effect is small; assuming a linear dependence, a least-squares fit to the data gives a decrease in the scoring rate of 0.0022 per point of lead. Naively, this restoring force originates from the winning team `coasting' or the losing team increasing its level of effort. \begin{figure}[ht] \begin{center} \includegraphics[width=0.6\textwidth]{Pvsd} \caption{Data for the probability $S(L)$ that a team will score next given a lead $L$ ($\circ$). The line is the least-squares linear fit, $S(L)=\frac{1}{2}-0.0022L$.} \label{Pvsd} \end{center} \end{figure} We now build a random-walk picture for the time evolution of the difference in the score $\Delta(t)$ between two teams. Each game starts scoreless and $\Delta(t)$ subsequently increases or decreases after each scoring play until the game ends. The trajectory of $\Delta(t)$ versus $t$ qualitatively resembles the position of a random walk as a function of time. Just as for random walks, the statistically significant quantity is $\sigma^2\equiv {\rm var}( \Delta(t))$, the variance in the score difference, averaged over many games. For a classic random walk, $\sigma^2=2Dt$, where $D$ is the diffusion coefficient. As illustrated in Fig.~\ref{varVSt}, $\sigma^2$ does indeed grow nearly linearly with time for NBA basketball games, except for the last $2.5$ minutes of the game; we will discuss this latter anomaly in more detail below. A least-squares linear fit to all but the last 2.5 minutes of game data gives $\sigma^2=2D_{\rm fit}t$, with $D_{\rm fit}=0.0363$ points$^2$/sec. \begin{figure}[ht] \begin{center} \includegraphics[width=0.6\textwidth]{varVSt} \caption{Variance in the score difference, $\sigma^2$, as a function of time. The line $\sigma^2=2D_{fit}t$ is the least-squares linear fit, excluding the last 2.5 minutes of data. The variance reaches its maximum $2.5$ minutes before the end of the game (dashed line). } \label{varVSt} \end{center} \end{figure} We may also independently derive an effective diffusion constant from the time evolution of the score difference from basic parameters of an anti-persistent random walk. For such a walk, two successive scores by the same team correspond to two random-walk steps in the same direction. As mentioned above, we found that the probability of this outcome is $q=0.348$. Conversely, the probability for a score by one team immediately followed with a score by the opposing team is $1-q$. Let us define $P(\Delta,t)$ as the probability that the score difference equals $\Delta$ at time $t$. Using the approach of \cite{G07} for an anti-persistent random walk, $P(\Delta,t)$ obeys the recursion \begin{subequations} \begin{eqnarray} P(\Delta,t+\tau)=q P(\Delta-\ell,t) + q P(\Delta+\ell,t) + [(1-q)^2-q^2]P(\Delta, t-\tau), \label{Difference} \end{eqnarray} where $\ell$ is the point value of a single score. To understand this equation, we rewrite it as \begin{eqnarray} P(\Delta,t+\tau)= q[P(\Delta-\ell,t)+P(\Delta+\ell,t)-P(\Delta,t-\tau)] +(1-q) P(\Delta,t-\tau). \label{simplify} \end{eqnarray} \end{subequations} The second factor in \eqref{simplify} corresponds to two scores by alternating teams; thus the score difference equals $\Delta$ at time $t-\tau$ and again at time $t+\tau$. This event occurs with probability $1-q$. The terms in the square bracket correspond to two successive scores by one team. Consequently a score difference of $\Delta\pm2\ell$ at time $t-\tau$ evolves to a score difference $\Delta$ at time $t+\tau$. Thus the corresponding walk must be at $\Delta\pm\ell$ at time $t$ but \emph{not} at $\Delta$ at time $t-\tau$. Expanding $P(\Delta,t)$ in Eq.~\eqref{Difference} to first order in $t$ and second order in $\Delta$ yields \begin{equation} \frac{\partial P}{\partial t}=\frac{q}{(1-q)}\,\frac{\ell^2}{2\tau}\,\frac{\partial^2 P}{\partial \Delta^2}\equiv D_{\rm ap}\,\frac{\partial^2 P}{\partial \Delta^2}~. \label{TaylorExpand} \end{equation} where $D_{\rm ap}$ is the effective diffusion coefficient associated with an anti-persistent random walk. Notice that for $q=\frac{1}{2}$ the score evolution reduces to a simple symmetric random walk, for which the diffusion coefficient is $D_{\rm ap}=\ell^2/(2\tau)$. Substituting in the values, from the game data, $q=0.348$ (probability for the same team to score consecutively), $\ell=2.0894$ (the mean number of points per scoring event), and $\tau=30.39$ seconds (the average time between successive scoring events), we obtain \begin{equation} D_{\rm ap} =\frac{q}{1-q}\,\frac{\ell^2}{2\tau}=0.0383\,\,\frac{(\mathrm{points})^2}{\mathrm{sec}}~. \label{EffectiveD} \end{equation} This diffusion coefficient is satisfyingly close to the value $D_{\rm fit}=0.0363$ from the empirical time dependence $\sigma^2$, and suggests that an anti-persistent random-walk accounts for its time dependence. We attribute the small discrepancy in the two estimates of the diffusion coefficient to our neglect of the linear restoring force in the diffusion equation \eqref{TaylorExpand}, Thus far, we have treated all teams as equivalent. However, the influence of team strengths on basketball scoring is not decisive --- weaker teams can (and do) win against better teams. The data show that the winning team in any game has a better season record than the losing opponent with probability 0.6777. Thus within our random-walk picture, the underlying bias that arises from the disparity in the strengths of the two competing teams is masked by random-walk fluctuations. For a biased random walk with bias velocity $v$ and diffusion coefficient $D$, the competition between the bias and fluctuations is quantified by the \emph{P\'eclet} number $Pe\equiv v^2t/2D$ (see, e.g., \cite{Pe,R01}), the ratio of the average displacement squared $(vt)^2$ to the mean-square displacement $2Dt$ caused by random-walk fluctuations. For $Pe\ll1$, bias effects due to disparities in team strengths are negligible, whereas for $Pe\gg1$ the bias is important. For basketball, we estimate a typical bias velocity from the observed average final score difference, $\overline{|\Delta|}\approx 10.7$ points, divided by the game duration of $t=2880$ seconds to give $v\approx 0.0037$ points/sec. Using $D\approx 0.0363$ points$^2$/sec, we obtain $Pe\approx 0.55$, which is small, but not negligible. Consequently, the bias arising from intrinsic differences in team strengths is typically not large enough to predict the outcome of typical NBA basketball games. \begin{figure}[ht] \begin{center} \includegraphics[width=0.8\textwidth]{diffDist} \caption{Probability for a given score difference at the end of the first quarter, after 45.5 minutes, and at the end of the game. The abscissa is rescaled by linear fit of variance, $\sigma^2\approx2D_{fit}t$ (see Fig.~\ref{varVSt}). The dashed curve is the distribution from simulated games with team strength variance, $\sigma^2_X=0.0083$ (see Sec.~4). } \label{diffDist} \end{center} \end{figure} Finally, the scoring anomaly associated with the last 2.5 minutes of the game is striking. If the score evolves as an anti-persistent random walk, the distribution of the score difference should be Gaussian whose width grows with time as $\sqrt{Dt}$. As shown in Fig.~\ref{diffDist}, the distribution of score difference has a Gaussian appearance, with a width that grows slightly more slowly than $\sqrt{Dt}$. We attribute this small deviation to the weak restoring force, which gives a diffusion constant that decreases with time. However, in the final $2.5$ minutes of the game, the score-difference distribution develops a spike at $\Delta=0$ and dips for small $|\Delta|$. Thus close games tend to end in ties much more often than expected from the random-walk picture of the score evolution. This anomaly may stems from the losing team playing urgently to force a tie, a hypothesis that accords with the observed increase in scoring rate near the end of the game (Fig.~\ref{scoreRate}). \section{Computational Model} From all of the empirical observations about scoring, we now construct a computational random-walk model that broadly accounts for point-scoring statistical phenomena, as well as the win/loss record of all teams at the end of the season. In our model, games are viewed as a series of temporally homogeneous and uncorrelated scoring plays. The time between plays is drawn from a Poisson distribution whose mean is the observed value of $30.39$ seconds. We ignore the short-lived spikes and dips in the scoring rate at the end of each quarter (Fig.~\ref{scoreRate}) and also the very rare plays of 5 or 6 points. Thus plays can be worth 1, 2, 3, or 4 points, with corresponding probabilities drawn from the observed distribution in Table~\ref{scoreProb}. Simulations of scoring events continue until the final game time of $48$ minutes is reached. There are three factors that determine \emph{which} team scores. First, the better team has a greater intrinsic chance of scoring. The second factor is the anti-persistence of successive scoring events that arises from the change of possession after a score. The last is the linear restoring force, in which the scoring probability of a team decreases as its lead increases (and vice versa for a team in deficit). We therefore write the probabilities $P_A$ and $P_B$ that team A or team B scores next, immediately after a scoring event, as: \begin{eqnarray} \label{modelProb} \begin{split} P_A&=I_{A} - 0.152r -0.0022 \Delta, \\ P_B&=I_{B} + 0.152r +0.0022 \Delta. \end{split} \end{eqnarray} Here $I_{A}$ and $I_{B}$ are the intrinsic scoring probabilities (which must satisfy $I_{A}+I_{B}=1$; and the term $\pm 0.152r$ accounts for the anti-persistence. Here $r$ is defined as \begin{equation} r= \begin{cases} +1 & \text{team A scored previously},\\ -1 & \text{team B scored previously},\\ 0 & \text{first play of the game}, \end{cases} \label{rDef} \end{equation} and ensures that the average probability for the same team to score twice in succession equals the observed value of 0.348. Finally, the term $0.0022\Delta$ (with $\Delta$ the score difference) accounts for the restoring force with the empirically measured restoring coefficient (Fig.~\ref{Pvsd}). In our minimalist model, the only distinguishing characteristic of team $\alpha$ is its intrinsic strength $X_\alpha$. We estimate team strengths by fitting simulated team win/loss records to that predicted by the classic Bradley-Terry competition model (\cite{BT52}), in which the intrinsic scoring probabilities are given by \begin{equation} I_{A} = \frac{X_A}{X_A + X_B}~, \quad\quad\quad I_{B}=\frac{X_B}{X_A+X_B}~. \label{pStrengths} \end{equation} To simulate a season, we first assign a strength parameter to each team that is fixed for the season. We assume that the distribution of strengths is drawn from a Gaussian distribution with average $\mu_X$ and variance $\sigma^2_X$ (\cite{JAS93}). Nearly identical results arise for other team strength distributions. Since the intrinsic probabilities, $I_A$ and $I_B$, depend only on the strength ratio $X_A/X_B$, we may choose $\mu_X=1$ without loss of generality, so the only free parameter is $\sigma^2_X$. We determine $\sigma^2_X$ by simulating many NBA seasons for a league of 30 teams for a range of $\sigma^2_X$ values and comparing the simulated probability distributions for various fundamental game observables with corresponding empirical data. Specifically, we examined: (i) The distribution of a given final score difference (already shown in Fig.~\ref{diffDist}). (ii) The season team winning percentage as a function of its normalized rank (Fig.~\ref{ranks} (a)); here, normalized rank is defined so that the team with the best winning percentage has rank 1, while the team with worst record has rank 0. (iii) The probability for a team to lead for a given fraction of the total game time (Fig.~\ref{ranks} (b)). (iv) The distribution of the number of lead changes during a game (Fig.~\ref{ranks} (c)). Our motivation for focusing on these measures is that they provide useful statistical characterizations of how basketball games evolve. The score difference is the most basic information about the outcome of a basketball game. Similarly, the relation between rank and winning percentage provides a clean overall test of our model. The probability for a given lead time is motivated by the well-known, but mysterious arcsine law (\cite{F68}). According to this law, the trajectory of a one-dimensional random walk is likely to always be on one side of the origin rather than the walk spending equal amounts of time to the left and to the right of the origin. The ramification of the arcsine law for basketball is that a single team is likely to lead for the most of the game rather than both teams to equally sharing the time in the lead. As a corollary to the arcsine law, there are typically $\sqrt{N}$ crossings of the origin for a one-dimensional random walk of $N$ steps, and the distribution in the number of lead changes is Gaussian. These origin crossings correspond to lead changes in basketball games. \begin{figure}[ht] \begin{center} \includegraphics[width=0.45\textwidth]{ranks} \quad \includegraphics[width=0.45\textwidth]{arcsine} \includegraphics[width=0.45\textwidth]{leadChanges} \caption{(a) Winning percentage as a function of team rank. The data (circles) correspond to the 1991--2010 NBA seasons. The solid curve is the simulated win/loss record when the team strength variance $\sigma^2_X=0.0083$. The dashed curve is the simulated win/loss record if all teams have equal strength, $\sigma^2_X=0$. (b) Probability that a randomly-selected team leads for a given total time. (c) Probability for the number of lead changes per game: data ($\circ$) and simulation (curve). Simulations were run for $10^4$ seasons with $\sigma^2_X=0.0083$.} \label{ranks} \end{center} \end{figure} For each of the four empirical observables listed above, we compare game data with the corresponding simulation results for a given value of the team strength variance $\sigma^2_X$. We quantify the quality of fit between the game data and the simulation results by the value $\chi^2$ defined by \begin{equation} \chi^2 = \sum_x (F_E(x) - F_S(x))^2~. \label{chi} \end{equation} Here $F_E(x)$ is one of the four above-mentioned empirical observables, $F_S(x)$ is the corresponding simulated observable, and $x$ is the underlying variable. For example, $F_E(x)$ and $F_S(x)$ could be the empirical and simulated probabilities of the final score difference and $x$ would be the final score difference. \begin{figure}[htb] \begin{center} \includegraphics[width=0.6\textwidth]{fitting} \caption{ $\chi^2$ as a function of $\sigma^2_X$ for: the score difference distribution at 45.5 minutes ($\circ$), number of lead changes per game ($\bigtriangledown$), distribution of time that a team is leading ($\triangleright$), and winning percentage as a function of rank ($\bigtriangleup$). Each point is based on simulation of $10^3$ seasons.} \label{fitting} \end{center} \end{figure} Figure~\ref{fitting} shows the values of $\chi^2$ as a function of $\sigma^2_X$ for the four observables. The best fit between the data and the simulations all occur when $\sigma^2_X$ is in the range $[0.00665,\,0.00895]$. To extract a single optimum value for $\sigma^2_X$, we combine the four $\chi^2$ measurements into a single function. Two simple and natural choices are the additive and multiplicative forms \begin{eqnarray} f_{\rm add}=\sum_{i=1}^{4} \frac{\chi^2_i}{\min(\chi^2_i)}\,, \qquad\qquad f_{\rm mult}=\prod_{i=1}^{4} \frac{\chi^2_i}{\min(\chi^2_i)}\,, \label{combineChi} \end{eqnarray} where the sum and product are over the four observables, $\chi^2_i$ is associated with the $i^{\rm th}$ observable, and $\min(\chi^2_i)$ is its minimum over all $\sigma^2_X$ values. The denominator allows one to compare the quality of fit for disparate functions. In the absence of any prior knowledge about which statistical measure about basketball scoring is most important, we have chosen to weight them equally. With this choice, both $f_{\rm add}$ and $f_{\rm mult}$ have minima at $\sigma^2_X=0.0083$. Moreover, for this value of $\sigma^2_X$, the value of $\chi^2_i$ for each observable exceeds its minimum value by no more than $1.095$. These results suggest that the best fit between our model and empirical data arises when we choose $\sigma^2_X=0.0083$. Thus roughly 2/3 of the NBA teams have their intrinsic strength in the range $1\pm\sqrt{\sigma_x^2}\approx 1\pm 0.09$. \section{Outlook} From all the play-by-play data of every NBA basketball game over four seasons, we uncovered several basic features of scoring statistics. First, the rate of scoring is nearly constant during a basketball game, with small correlations between successive scoring events. Consequently, the distribution of time intervals between scoring events has an exponential tail (Fig.~\ref{intervals}). There is also a scoring anti-persistence, in which a score by one team, is likely to be followed by a score by the opponent because of the possession change after each basket. Finally, there is a small restoring force that tends to reduce the score difference between competitors, perhaps because a winning team coasts as its lead grows or a losing team plays more urgently as it falls behind. Based on the empirical data, we argued that basketball scoring data is well described by a nearly unbiased continuous-time random walk, with the additional features of anti-persistence and a small restoring force. Even though there are differences in the intrinsic strengths of teams, these play a small role in the random-walk picture of scoring. Specifically, the dimensionless measure of the effect of disparities in team strength relative to stochasticity, the P\'eclet number, is small. The smallness of the P\'eclet number means that it is difficult to determine the superior team by observing a typical game, and essentially impossible by observing a short game segment. We simulated our random-walk model of scoring and found that it satisfyingly reproduces many statistical features about basketball scoring in NBA games. This study raises several open issues. First, is the exponential distribution of time intervals between scoring events a ubiquitous feature of sports competitions? We speculate that perhaps other free-flowing games, such as lacrosse (\cite{EG08}), soccer (\cite{DC00}), or hockey (\cite{T07,BWP}), will have the same scoring pattern as basketball when the time intervals between scores are rescaled by the average scoring rate for each sport. It also seems plausible that other tactical metrics, such as the times intervals between successive crossings of mid-field by the game ball (or puck) may also be described by Poisson statistics. If borne out, perhaps there is a universal rule that governs the scoring time distribution in sports. Seen through the lens of coaches, fans, and commentators, basketball is a complex sport that requires considerable analysis to understand and respond to its many nuances. A considerable industry has thus built up to quantify every aspect of basketball and thereby attempt to improve a team's competitive standing. However, this competitive rat race largely eliminates systematic advantages between teams, so that all that remains, from a competitive standpoint, are small surges and ebbs in performance that arise from the underlying stochasticity of the game. Thus seen through the lens of the theoretical physicist, basketball is merely a random walk (albeit in continuous time and with some additional subtleties) and many of the observable consequences of the game follow from this random-walk description. \medskip We thank Guoan Hu for assistance with downloading and processing the data and Ravi Heugel for initial collaborations on this project. We also thank Aaron Clauset for helpful comments on an earlier version of the manuscript. This work was supported in part by NSF grant DMR0906504. \bibliographystyle{bepress}
{ "attr-fineweb-edu": 2.386719, "attr-cc_en_topic": 0, "domain": "arxiv" }
BkiUdfk4eIXh4qTE5jpL
\section{Introduction} Imagine waking up on a crisp fall morning and deciding to use the day for a hike. You drive to the trailhead and begin to chart a route. Since you must return to your car, your elevation will be the same at the beginning and the end of your walk. Are there other elevations through which you will pass twice? Clearly there are. If you begin on an ascent, you must descend to return to the trailhead, and if you begin with a decent, you must eventually ascend. If the trail is perfectly flat, then at every moment your elevation is shared by every other moment. This intuition is often given in an introductory calculus course to illustrate the intermediate value theorem. A natural follow up question: Can anything be said about the time elapsed between two points of equal elevation? For instance, if your hike lasts an hour, we know that there are two instants, separated by an hour, of equal elevation, namely the start and the finish. Need there be two such instances separated by a half hour? The answer, it turns out, is yes. Separated by 25 minutes? No, it's possible to design a hike with no 25 minute time interval leaving you at the same elevation that you started. So what is special about 30 minutes? Can we characterize all such durations? In this paper, we answer this and related questions.\footnote{The contents of this paper are motivated by Exercise 5.4.6. in \cite{abbott}.} \section{Notation} Given a closed interval $[a,b] \subset \mathbb{R}$ and a real number $\lambda$, we will use $C_\lambda([a,b])$ to represent the set of continuous functions on $[a,b]$ mapping both endpoints to $\lambda$. More precisely, $$C_\lambda([a,b]) = \set{f:[a,b]\to \mathbb{R} \;|\; f \text{ is continuous and } f(a)=f(b)=\lambda}$$ \noindent We will use $C_\mathbb{R}([a,b])$ to refer to functions in any $C_\lambda([a,b])$: $$C_\mathbb{R}([a,b]) = \bigcup_{\lambda\in\mathbb{R}} C_\lambda([a,b]) = \set{f:[a,b]\to \mathbb{R} \;|\; f \text{ is continuous and } f(a)=f(b)}$$ \noindent Given a function $f\in C_\mathbb{R}([a,b])$, and a subset $X\subseteq [a,b]$, let $$D_f(X) = \set{d>0: \abs{x-y}=d \text{ and } f(x)=f(y) \text{ for some } x, y \in X }$$ \noindent If $X$ is absent as in $D_f$, assume $X=[a,b]$. $A^\mathrm{o}$, $\overline{A}$, and $\partial A$ will be used to denote the topological interior, closure, and boundary of $A$ respectively, $\mu(A)$ will be used for the Lebesgue measure of $A$. \section{Main Results} \begin{theorem} \label{thm1} Let $f$ be a real-valued, continuous function on the closed interval $[a,b]$ such that $f(a)=f(b)$. Given any $n \in \mathbb{N}$, there exist $x$ and $y$ in $[a,b]$ such that $|x-y| = \frac{b-a}{n}$ and $f(x) = f(y)$. \end{theorem} \begin{proof} We may assume without loss of generality that $[a,b] = [0,1]$. If not, just apply the result to $f(a + (b-a)x)$. Define $g(x) = f(x + \frac1n) - f(x)$ and consider the sum \begin{align} & g(0) + g\left(\frac1n\right) + g\left(\frac2n\right) + \dots + g\left(\frac{n-1}{n}\right) \label{thm1line1}\\ & = f\left(\frac1n\right) - f(0) + f\left(\frac2n\right) - f\left(\frac1n\right) + \dots + f(1) - f\left(\frac{n-1}{n}\right)\label{thm1line2} \\ & = f(1) - f(0) = 0 \label{thm1line3} \end{align} where (\ref{thm1line3}) follows from (\ref{thm1line2}) due to cancellation.\\ If every term in (\ref{thm1line1}) is 0, then the result follows immediately because $f(\frac{k+1}{n}) = f(\frac{k}{n})$ for $k = 0, 1, \dots, n-1$. If (\ref{thm1line1}) contains one or more nonzero terms, then there must be at least one positive and one negative term in order for the sum to be zero. That is, $g\left(\frac{k_1}{n}\right)<0$ and $g\left(\frac{k_2}{n}\right) > 0$ for some integers $k_1$ and $k_2$ between 0 and $n-1$. Thus, by the intermediate value theorem, $g(c) = 0$ for some $c$ between $\frac{k_1}{n}$ and $\frac{k_2}{n}$ (the continuity of $g$ follows from the continuity of $f$). Therefore, we have $f(c + \frac1n) - f(c) = 0$. \end{proof} Thereom \ref{thm1} provides a partial answer to the question posed in the introduction. If we hike for an hour, there will be two instants, 30 minutes apart, of equal elevation because 30 minutes is half of an hour. The same is true for 20 minutes, 15 minutes, etc. We are not done, however, because we haven't ruled out other durations. \begin{theorem} \label{thm2} Given a closed interval $[a,b]$, let $0<d<b-a$. If $d$ is not of the form $\frac{b-a}{n}$, then there exists a continuous function $f:[a,b] \to \mathbb{R}$ with $f(a) = f(b)$ such that $d\notin D_f$.\footnote{The definition of $D_f$ is given in Section 2: Notation.} \end{theorem} \begin{proof} Once again, we can assume without loss of generality that $[a,b] = [0,1]$. First, let $p(x)$ be any continuous $d$-periodic function with $p(0)\neq p(1)$. Note that the existence of such functions hinges on the fact that $d\neq\frac1n$. Next, let $m(x)$ be any strictly monotone continuous function such that $m(0)=0$ and $m(1)=p(0)-p(1)$. We can insist on strict monotonicity since $m(0)=0\neq p(0)-p(1)=m(1)$. Then $p+m$ is continuous as the sum of continuous functions. Furthermore, $(p+m)(0) = p(0) = p(1) + p(0) - p(1) = (p+m)(1)$. To finish, we must show that $d \notin D_{p+m}$. Indeed, for all $x\in [0,1-d]$, we have \begin{align*} (p+m)(x+d) - (p+m)(x) &= p(x+d) - p(x) + m(x+d) -m(x)\\ &= 0 + m(x+d) - m(x) \neq 0 \end{align*} using the monotonicity of $m$ and the periodicity of $p$. \end{proof} Taken together, Theorem \ref{thm1} and Theorem \ref{thm2} tell us that, on a hike that begins and ends at the same point, the only durations we know, a priori, will separate times of equal elevation, must evenly divide that total time of the hike. This is expressed formally in the following corollary:\footnote{The definition of $C_\mathbb{R}([a,b])$ is given in Section 2: Notation.} \begin{corollary} \label{cor1} \[ \bigcap_{f\in C_\mathbb{R}([a,b])} D_f = \set{\frac{b-a}{n}: n\in\mathbb{N}}. \] \end{corollary} \begin{proof} Theorem \ref{thm1} gives one inclusion and Theorem \ref{thm2} gives the other. \end{proof} Corollary \ref{cor1} characterizes the distances which are common to all functions in $C_\mathbb{R}([a,b])$. One then might wonder whether this represents a small intersection of large overlapping sets or there is a particular $f\in C_\mathbb{R}([a,b])$ such that $D_f = \set{\frac{b-a}{n}: n\in\mathbb{N}}$. It turns out to be the former. Each $D_f$ is considerably larger than the set of divisors of $b-a$. In fact, each $D_f$ contains at least a third of the numbers between 0 and $b-a$. Before we prove it, we need to develop a series of lemmas about $D_f$. \begin{lemma} \label{inclusion} If $A \subseteq B$, then $D_f(A) \subseteq D_f(B)$. \end{lemma} \begin{proof} Assume $d\in D_f(A)$. Then there are points $x,y\in A$ such that $|x-y|=d$ and $f(x)=f(y)$. But $A \subseteq B$, so $x$ and $y$ are also in $B$. Thus, $d\in D_f(B)$. \end{proof} \begin{lemma} \label{constant} Let $f$ be a constant function on a bounded set $A\subset\mathbb{R}$. Assume $A$ has a maximum value $m$. Then $\mu(D_f(A)) \geq \mu(A)$. \end{lemma} \begin{proof} Notice that $D_f(A)$ contains the set $m-A = \set{m-a \;|\; a\in A}$. Therefore $\mu(D_f(A)) \geq \mu(m-A) = \mu(A)$. \end{proof} \begin{lemma} \label{1 interval} Let $f\in C_\lambda([a,b])$ and suppose either $f(x) > \lambda$ for all $a<x<b$ or $f(x) < \lambda$ for all $a<x<b$. Then $D_f([a,b]) = (0,b-a]$. \end{lemma} \begin{proof} We may assume without loss of generality that $f(x) > \lambda$ for all $a<x<b$ since $D_f([a,b]) = D_{-f+2\lambda}([a,b])$. In other words, $D_f([a,b])$ does not change when the graph of $f$ is reflected over the line $y=\lambda.$ It is clear that $b-a\in D_f([a,b])$ since $f(a)=f(b)$, so we will let $d\in(0,b-a)$ and show that $d\in D_f([a,b])$. Define $g(x) = f(x+d)-f(x)$. Note that $g(a)=f(a+d)-f(a) = f(a+d) - \lambda > 0$ because $f(a+d) > \lambda$. Also, $g(b-d)=f(b)-f(b-d) = \lambda -f(b-d) < 0$ because $f(b-d) > \lambda$. The intermediate value theorem guarantees the existence of a $c\in(a, b-d)$ such that $g(c) = f(c+d)-f(c) = 0$, i.e., $f(c+d)=f(c)$. Therefore $d\in D_f([a, b])$. \end{proof} \begin{lemma} \label{2 intervals} Given any $a_1 < a_2 \leq a_3 < a_4$, define $A=[a_1,a_2]\cup[a_3,a_4]$, and let $f:A\to\mathbb{R}$ be a continuous function such that $f(a_k) = \lambda$ for $1\leq k \leq 4$. Suppose either $f(x) > \lambda$ for all $x\in A^\mathrm{o}$ or $f(x) < \lambda$ for all $x\in A^\mathrm{o}$. If $\max_{[a_1,a_2]}(f) \geq \max_{[a_3, a_4]}(f)$, then $D_f(A) \supseteq [a_3-a_1, a_4-a_1]$. \end{lemma} \begin{proof} As before, we can assume without loss of generality that $f(x) > \lambda$ for all $x\in A^\mathrm{o}$. It is clear that $a_3-a_1, a_4-a_1\in D_f(A)$ since $f(a_1)=f(a_3)=f(a_4)$, so we will let $d\in(a_3-a_1, a_4-a_1)$ and show that $d\in D_f(A)$. We will do this in three cases, depending on whether $d$ is greater than, less than, or equal to $a_4-a_2$. Define $g(x) = f(x+d)-f(x)$.\\ \noindent \textit{Case 1: $d > a_4-a_2$}. In this case, we compute $g(a_1) = f(a_1+d)-f(a_1) = f(a_1+d) -\lambda > 0$ and $g(a_4-d) = f(a_4) - f(a_4-d) = \lambda - f(a_4-d) < 0$. Here, we've used that $a_1+d \in (a_3,a_4)$ and $a_4-d \in (a_1, a_2)$ and $f>\lambda$ on these two open intervals. The intermediate value theorem then guarantees a $c \in (a_1,a_4-d)$ such that $g(c) = f(c+d)-f(c) = 0$. Hence $d\in D_f(A)$.\\ \noindent \textit{Case 2: $d < a_4-a_2$}. In this case, once again we compute $g(a_1) = f(a_1+d)-f(a_1) = f(a_1+d) - \lambda > 0$. This time, however, we observe that $g(t) \leq 0$ for some $t\in (a_1, a_2)$. Otherwise, we would have $f(t+d) > f(t)$ for all $t\in (a_1, a_2)$, contradicting the assumption $\max_{[a_1,a_2]}(f) \geq \max_{[a_3, a_4]}(f)$.\smallskip If $g(t) = 0$, we have $f(t+d) = f(t)$. If $g(t) < 0$, then the intermediate value theorem gives a $c \in (t,a_2)$ such that $g(c) = f(c+d)-f(c) = 0$. In either case, $d\in D_f(A)$.\\ \noindent \textit{Case 3: $d = a_4-a_2$}. This case is trivial as $f(a_2) = \lambda = f(a_4) = f(a_2 + d)$. \end{proof} \begin{lemma} \label{n intervals} Given any $a_1<b_1\leq a_2<b_2 \leq \dots \leq a_n < b_n$, define $A=\cup_{k=1}^n [a_k, b_k]$ and let $f:A\to\mathbb{R}$ be a continuous function such that $f(a_k) = f(b_k) = \lambda$ for $1\leq k \leq n$. Suppose either $f(x) > \lambda$ for all $x\in A^\mathrm{o}$ or $f(x) < \lambda$ for all $x\in A^\mathrm{o}$. Then $\mu(D_f(A)) \geq \mu(A)$. \end{lemma} \begin{proof} We will use proof by induction on $n$, the number of intervals.\\ \noindent \textit{Base case (n=1):} \\ The base case is covered by Lemma \ref{1 interval}, which gives us $D_f([a_1, b_1]) = (0, b_1 - a_1]$. Therefore, $\mu(D_f([a_1, b_1])) = \mu([a_1, b_1]) = b_1 - a_1$.\\ \noindent \textit{Induction Step:} \\ Our goal is to prove that $\mu(D_f(\cup_{k=1}^{n+1}[a_k, b_k])) \geq \mu(\cup_{k=1}^{n+1}[a_k, b_k])$. Assume, without loss of generality, that $\max_{[a_1,b_1]}(f) \geq \max_{[a_{n+1}, b_{n+1}]}(f)$. We do not lose generality because $D_f$ is invariant with respect to horizontal reflections, i.e., $D_{f(x)} = D_{f(-x)}$. Then, by Lemma \ref{2 intervals}, $D_f([a_1,b_1] \cup [a_{n+1},b_{n+1}]) \supseteq (a_{n+1}-a_1, b_{n+1}-a_1)$. Combining this fact with Lemma \ref{inclusion} gives \begin{align*} D_f(\cup_{k=1}^{n+1}[a_k, b_k]) &\supseteq D_f(\cup_{k=1}^{n}[a_k, b_k]) \cup D_f([a_1, b_2] \cup [a_{n+1}, b_{n+1}])\\ &\supseteq D_f(\cup_{k=1}^{n}[a_k, b_k]) \cup (a_{n+1}-a_1, b_{n+1}-a_1). \end{align*} Next, observe that $(a_{n+1}-a_1, b_{n+1}-a_1)$ and $D_f(\cup_{k=1}^{n}[a_k, b_k])$ are disjoint. Indeed, if $d\in D_f(\cup_{k=1}^{n}[a_k, b_k])$, then $d \leq b_n - a_1 \leq a_{n+1}-a_1$. \noindent Computing the length of both sides and applying the induction hypothesis, we get \begin{align*} \mu(D_f(\cup_{k=1}^{n+1}[a_k, b_k])) &\geq \mu(D_f(\cup_{k=1}^{n}[a_k, b_k]) \cup (a_{n+1}-a_1, b_{n+1}-a_1))\\ &= \mu(D_f(\cup_{k=1}^{n}[a_k, b_k])) + \mu((a_{n+1}-a_1, b_{n+1}-a_1))\\ &\geq \mu(\cup_{k=1}^{n}[a_k, b_k]) + \mu((a_{n+1}-a_1, b_{n+1}-a_1))\\ &= \mu(\cup_{k=1}^{n+1}[a_k, b_k]) \end{align*} \end{proof} \begin{lemma} \label{count intervals} Let $\set{I_n}$ be a countable collection of closed intervals and define $A=\cup_{n=1}^\infty I_n$. Assume that $A$ is bounded and $\set{I_n}$ have disjoint interiors. Let $f$ be a continuous function on $A$ such that $f(x) = \lambda$ on the endpoints of each $I_n$ and either $f(x) > \lambda$ for all $x\in A^\mathrm{o}$ or $f(x) < \lambda$ for all $x\in A^\mathrm{o}$. Then $\mu(D_f(A)) \geq \mu(A)$. \end{lemma} \begin{proof} Fix $\epsilon>0$. Since $A$ is bounded and $\set{I_n}$ have disjoint interiors, we know that $\lim\limits_{n\to\infty}\mu\left(\cup_{k=n}^\infty I_k\right) = 0$. Thus there exists some $N\in\mathbb{N}$ such that $\mu\left(\cup_{k=N}^\infty I_k\right) < \epsilon$. Applying Lemma \ref{n intervals} and Lemma \ref{inclusion} yields \begin{align*} \mu(D_f(A)) &\geq \mu\left(D_f\left(\cup_{k=1}^N I_k\right)\right)\\ &\geq \mu\left(\cup_{k=1}^N I_k\right)\\ &= \mu(A) - \mu\left(\cup_{k=N}^\infty I_k\right)\\ &> \mu(A) - \epsilon \end{align*} Therefore, $\mu(D_f(A)) \geq \mu(A)$ because $\epsilon$ was arbitrary. \end{proof} With access to these lemmas, we are now prepared to prove that $D_f([a,b])$ must contain at least a third of the distances in $(0,b-a]$. \begin{theorem} \label{main thm} If $f \in C_\lambda([a,b])$ then $\mu(D_f) \geq \frac{b-a}{3}$. \end{theorem} \begin{proof} Let $A_>$, $A_<$, and $A_=$ be the subsets of $[a,b]$ on which $f$ is greater than, less than, and equal to $\lambda$ respectively. $A_>$ and $A_<$ are the preimages of open sets under a continuous function and are thus open. Therefore, each is a countable union of open intervals. Applying Lemma \ref{count intervals} to the closure of each tells us that $\mu(D_f(\overline{A_>})) \geq \mu(\overline{A_>}) = \mu(A_>)$ and $\mu(D_f(\overline{A_<})) \geq \mu(\overline{A_<}) = \mu(A_<)$.\footnote{Dropping the closure doesn't change the length because the union of countably many intervals has a countable boundary.} Applying Lemma \ref{constant} to $A_=$ gives $\mu(D_f(A_=)) \geq \mu(A_=)$. Combining these three inequalities with Lemma \ref{inclusion}, we have \begin{align*} \mu(D_f([a,b])) &\geq \max\left(\mu(D_f(\overline{A_>})), \mu(D_f(\overline{A_<})), \mu(D_f(A_=))\right)\\ &\geq \max\left(\mu(A_>), \mu(A_<), \mu(A_=)\right)\\ &\geq \frac{b-a}{3} \end{align*} where the last line follows from $\mu(A_>) + \mu(A_<) + \mu(A_=) = b-a$. \end{proof} \section{Future Work} \subsection{Is $\frac{b-a}{3}$ a minimum?} \label{alternativeDf} Theorem \ref{main thm} establishes a lower bound on $D_f$ for functions in $C_\lambda([a,b])$. The key was to restrict our attention to $A_> = \set{x : f(x)>\lambda}$ because if $f(x)=f(y)$, then either both $x$ and $y$ are in $A_>$ or neither are. The same holds for $A_<$ and $A_=$. In other words, points in $D_f$ cannot arise due to "interactions" among $A_>$, $A_<$, and $A_=$. With this in mind, the bound in Theorem \ref{main thm} seems tight: simply define a function which is positive on the first third of $[a,b]$, negative on the second third, and zero on the last third. Then each of $A_>$, $A_<$, and $A_=$ should contribute $(0,\frac{b-a}{3}]$ to $D_f$. For example, let \[ f(x) = \begin{cases} \sin x &\mbox{if } 0 \leq x \leq 2\pi \\ 0 &\mbox{if } 2\pi \leq x \leq 3\pi \end{cases}. \] The reason this strategy doesn't work is $A_=$. Indeed, $D_f(A_>)=D_f(A_<)=(0,\pi]$. However, $A_= = \set{0,\pi}\cup [2\pi, 3\pi]$ and $D_f(A_=) = (0,3\pi]$. This makes $D_f$ as large as possible due to interactions between the points $0$ and $\pi$ and the interval $[2\pi, 3\pi].$ The trouble with the previous example is the presence of isolated points 0 and $\pi$ in $A_=$. The former is unavoidable, but we can eliminate the latter by making $f$ zero \textit{between} the intervals on which it is positive and negative. Let \[ f(x) = \begin{cases} \sin x &\mbox{if } 0 \leq x \leq \pi \\ 0 &\mbox{if } \pi \leq x \leq 2\pi \\ -\sin x &\mbox{if } 2\pi \leq x \leq 3\pi \end{cases}. \] Now $A_= = \set{0,3\pi}\cup [\pi, 2\pi]$ and $D_f(A_=) = (0,2\pi]$, but $D_f$ is still strictly greater than the bound established in Theorem \ref{main thm}. Had we defined $D_f$ slightly differently to ignore the endpoints of the domain of $f$, the previous example would prove Theorem \ref{main thm} is sharp. More precisely, if we instead define $D_f = \set{d>0: \abs{x-y}=d \text{ and } f(x)=f(y) \text{ for some } a<x<y<b}$, then $D_f = (0,\pi]$ in the previous example. However, if we stick to our original definition, is there an $f\in C_\lambda([a,b])$ with $\mu(D_f)=\frac{b-a}{3}$. If not, what is the infimum of $D_f$ over all such $f$?\\ \subsection{Generalizations} What does $D_f(X)$ look like when $X$ is not a closed interval? We could broaden the class of functions we look at by defining \[ C_\lambda(X) = \set{f:\overline{X}\to \mathbb{R} \;|\; f \text{ is continuous and } f(x) = \lambda \text{ for all } x\in\partial X}. \] What is $\bigcap D_f$ over all such $f$ and what is the infimum of $\mu(D_f)$? We could also explore functions with an $n$-dimesional domain and/or $m$-dimesional codomain. What does $D_f(X)$ look like when $X$ is $n$-dimesional? The more general definition of $C_\lambda(X)$ proposed above works just fine in this case. For simplicity, we might want to start with cubes or spheres, and slowly relax the constraints on $X$. Additionally, as in Section \ref{alternativeDf} we should amend the definition $D_f$ to ignore the boundary of $X$. Otherwise $D_f = (0, \diam(X)]$ always (unless $X$ is disconnected). What does $D_f([a,b])$ look like when the codomain of $f$ is $m$-dimesional? If $m>1$ the minimum $D_f([a,b])$ becomes $\set{b-a}$. Consider, for example, $f:[0,2\pi] \to \mathbb{R}^2$ defined by $f:x \mapsto (\cos x, \sin x)$. The only pair of points in $[0,2\pi]$ which get mapped to the same output are $0$ and $2\pi$, so $D_f = \set{2\pi}$. To construct an interesting generalization, we must then restrict our attention to functions mapping a closed interval to some subset $A \subset \mathbb{R}^m$. If $A$ contains any "loops," the minimum $D_f([a,b])$ becomes $\set{b-a}$, so $A$ should be a one-dimensional "loop-free" set. Lastly, if the previous questions are settled, perhaps we could define \[ C_\lambda(X,Y) = \set{f:\overline{X}\to Y \;|\; f \text{ is continuous and } f(x) = \lambda \text{ for all } x\in\partial X} \] and classify $D_f$ in terms of $X\subset \mathbb{R}^n$ and $Y\subset \mathbb{R}^m$.
{ "attr-fineweb-edu": 2.791016, "attr-cc_en_topic": 0, "domain": "arxiv" }
BkiUdr04eIZijkbdOOpJ
\section*{??? \end{document} \section{Introduction} Sports and gaming organizations, athletes, and fans often wish to estimate how good athletes are at their sports. This problem of athlete rating can affect planning and preparation for games, at both organizational and individual levels. For example, ratings may influence how sports organizations allocate resources across their athletes prior to an event to maximize their chances of winning. Ratings may also be used for designing tournaments and league play. This includes pairing athletes with similar estimated abilities in head-to-head competitions, dividing a large set of competitors into smaller tournaments according to skill level, or selecting top athletes to compete in elite ``by invitation only'' events. Approaches that rely on statistical modeling enable researchers to infer athletes' abilities from their game outcomes in a principled manner. A typical method for constructing athlete ratings treats each athlete's strength as a latent parameter (or vector of parameters) within a probability model. The models then use observed game outcomes to estimate the latent ability parameters, which may be thought of as fixed or varying over time. A variety of methods for estimating dynamic (i.e., time-varying) athlete ratings have been proposed for games with binary (i.e., win/loss) outcomes, or with rank-ordered outcomes. For multi-competitor games, approaches to rating competitors typically extend the Plackett-Luce model \citep{plackett1975analysis} through the evolution of the latent ability parameters. For example, \citet{glickman2015stochastic} models the evolution as a discrete stochastic process, \citet{caron2012bayesian} uses a nonparametric stochastic process, \citet{baker2015golf} interpolates abilities between discrete time points, and \citet{mckeough2020tale} considers parametric growth curves over time. Dynamic models have also been proposed for head-to-head games with win/loss outcomes, in both team \citep{herbrich2006trueskill} and individual \citep{glickman1999parameter, glickman2001dynamic, cattelan2013dynamic, baker2015time} game settings. Score outcomes are more granular than rank-order outcomes; as a result, models that effectively use score data may outperform models that only use ranking data. Observed rank outcomes can be viewed as partially censored score outcomes. For example, in a race, each runner's performance can be recorded as a race time, which can then be mapped into a race placement. In this instance, the race placement is the ranking and the race time is the score. Score data can potentially provide information, particularly about the gaps in performances between athletes, which may be useful to a rating model. While various dynamic models that use score or score-related information have been proposed for head-to-head games \citep{harville1977use, glickman1998state, lopez2018often, ingram2019point, kovalchik2020extension}, we are unaware of similar work for multi-competitor games. In this paper, we extend the normal dynamic linear model (DLM) proposed by \citet{harville1977use} and \citet{glickman1998state} to rate athletes who compete in multi-competitor games. DLMs provide a simple, natural framework for rating athletes in multi-competitor games with scored outcomes. They assume that an athlete's latent ability varies across time periods as a discrete stochastic process, and that an athlete's scores are normally distributed around their latent ability parameter within each time period. Athletes' game scores, however, can often be heavy-tailed or skewed, which suggests that the normal assumption in DLMs may not hold. Furthermore, games with blowouts or close wins may produce scores that do not accurately reflect athletes' skills. To directly account for blowouts, \citet{harville2003selection} considers simple strategies such as capping margins of victory and adjusting extreme scores using hazard functions. In a non-sports setting, \citet{lenk1990transformations} proposes a DLM that uses a naive grid search to learn an appropriate Box-Cox transformation \citep{box1964analysis} of the outcomes. While these methods are straightforward and intuitive, they lack the data-driven flexibility required for many real-world settings. \citet{Xia2000a} considers the more flexible monotone spline transformation \citep{Ramsay1988}, albeit in the simpler setting of univariate autoregression with static parameters for an ecological time-series dataset. In this paper, we propose a Bayesian DLM with a monotone spline outcome transformation to rate athletes who compete in multi-competitor games with scored outcomes. Our model uses a Bayesian approach to learn the best transformation from the data in a principled manner while still preserving the efficiency and transparency of a standard DLM framework. The paper proceeds as follows. Section \ref{sec:model} describes the general DLM framework and demonstrates the incorporation of monotone transformations into the model. Section \ref{sec:fitting} describes an efficient model-fitting algorithm for the DLM with transformations. Finally, Section \ref{sec:results} compares our DLM to other candidate models for rating athletes in multi-competitor games, using data provided by the US Olympic and Paralympic Committee. \section{Model} \label{sec:model} We introduce the model proposed in this paper in two stages. First, we present a standard DLM framework for rating athletes. We then modify it to account for game effects and non-normal outcomes, and address the special case of head-to-head games. \subsection{Standard DLM for athlete rating} A standard DLM for rating athletes models each athlete's observed scores as normally distributed around their latent ability, which evolves over time. The model likelihood for the score observed from a single athlete competing during time $t$ takes the form: \begin{equation} p(y_t \mid \theta_t, \sigma^2) = N(\theta_t, \sigma^2), \label{eq:lik1} \end{equation} where $y_t$ is the observed score, $\theta_t$ is the athlete's latent ability parameter at time $t$, and $\sigma^2$ is the observation variance. In this paper, we allow the latent ability parameter to evolve between time points as a normal random walk: \begin{equation} p(\theta_{t+1} \mid \theta_t, \sigma^2, w) = N(\theta_t, \sigma^2w), \label{eq:innov1} \end{equation} where $w$ is an additional parameter that controls how much latent abilities may vary over time relative to the observation variance. Other stochastic processes may also be considered for the evolution of latent ability parameters, such as a mean-preserving random walk \citep{glickman2015stochastic} or an autoregressive process \citep{glickman1998state}. Unlike these other stochastic processes, the normal random walk diverges over time; in practice, however, data are typically modelled over relatively few time points in practice, so this phenomenon does not pose practical problems. In a general sporting setting, we divide time into discrete \textit{rating periods}, e.g., six-month sporting seasons, with multiple games within each rating period and multiple athletes within each game. Let $p$ denote the total number of athletes in the data. Let $T$ denote the total number of discrete rating periods, indexed by $t = 1, \dots, T$. Athletes may compete in any number of games within each rating period. Finally, let $n_t$ denote the total number of observed scores within rating period $t$. The data model (Equation \ref{eq:lik1}) and innovation model (Equation \ref{eq:innov1}) may now be rewritten in multivariate form as: \begin{align} p(\mathbf{y}_{t} \mid {\bm{\theta}}_t, \sigma^2) &= N(X_{t} {\bm{\theta}}_t, \sigma^2 I_{n_t}) \nonumber \\ p({\bm{\theta}}_{t+1} \mid {\bm{\theta}}_{t}, \sigma^2, w) &= N({\bm{\theta}}_{t}, \sigma^2 w I_p). \nonumber \end{align} Here, $\mathbf{y}_t$ is the $n_t \times 1$ column vector of observed scores, ${\bm{\theta}}_t$ is the $p \times 1$ column vector of athlete latent abilities for rating period $t$, and $I_k$ denotes the $k \times k$ identity matrix. The $n_t \times p$ model matrix $X_t$ simply matches each athlete's observed score(s) to their latent ability.\footnote{Temporarily suppressing the time subscript $t$, the matrix $X$ has a single nonzero entry per row. Per row $r$, if entry $r$ of the column vector $\mathbf{y}$ corresponds to a score earned by athlete $a$, then $X_{ra}$ is set to equal 1.} Note that athletes' latent abilities are assumed to be constant within each rating period $t$, but may vary between rating periods $t$ and $t+1$. \subsection{Addressing game effects} In practice, athlete-rating DLMs need to account for conditions that affect game scores in a manner unrelated to latent athlete abilities. For example, a hot day might make all athletes in a race run more slowly, but the increased race times do not indicate weaker athletes. One approach to incorporate game-specific variation is to assume game-specific intercepts as part of the outcome model. This approach has been used, for example, by \cite{glickman1998state}. Instead of assuming game-specific intercepts, we pre-process the data by subtracting game-specific means from the observed scores. This approach avoids concerns relating to the arbitrary specification of priors for the intercepts, which could affect downstream results. Each game-centered score $\tilde{y}$ may then be modeled either by directly using Equation \ref{eq:lik1}, or by using a game-centered latent ability, as: \begin{equation} p(\tilde{y} \mid {\bm{\theta}}_{t}, \sigma^2) = N(\theta_t - \bar{\theta}_{tg}, \sigma^2). \nonumber \end{equation} The value $\bar{\theta}_{tg}$ denotes the average latent ability across all of the players in game $g$ within rating period $t$. Subtracting $\bar{\theta}_{tg}$ from each athlete's latent ability adjusts for the fact that competing against a stronger pool of opponents in a game naturally results in worse scores relative to the competition. To simplify notation for vector of game-centered scores $\tilde{\mathbf{y}}_t$, we write: \begin{equation} p(\tilde{\mathbf{y}}_{t} \mid {\bm{\theta}}_{t}, \sigma^2) = N(\bar{X}_t {\bm{\theta}}_{t}, \sigma^2 I_{n_t}), \label{eq:lik4} \end{equation} where $ \tilde{\mathbf{y}}_t \equiv \begin{bmatrix} \tilde{\mathbf{y}}_{t1} \\ \vdots \\ \tilde{\mathbf{y}}_{tq} \end{bmatrix} $ and $\bar{X}_t \equiv \begin{bmatrix} H_{n_{t1}} X_{t1} \\ \vdots \\ H_{n_{tg_t}} X_{tg_t} \end{bmatrix}$ for centering matrices $H_k \equiv I_{k} - \mathbf{1}_{k} \mathbf{1}_{k}^T$. For games $g = 1, \dots, g_t$ within rating period $t$, the $n_g \times 1$ vector of scores and $n_g \times p$ model matrix for game $g$ are written as $\tilde{\mathbf{y}}_{tg}$ and $X_{tg}$, respectively. \subsection{Addressing non-normal outcomes} In many games, we might also suspect that athletes' game-centered scores are not normally distributed around their game-centered latent abilities as assumed by Equation \ref{eq:lik4}. Instead, we may assume that some transformation of the athletes' scores is normally distributed around their latent abilities, so that the resulting model for transformed outcomes is: \begin{equation} p(\tau_{\bm{\lambda}}(\tilde{\mathbf{y}}_t) \mid {\bm{\theta}}_{t}, \sigma^2, {\bm{\lambda}}) = N(\bar{X}_t {\bm{\theta}}_{t}, \sigma^2 I_{n_t}). \nonumber \end{equation} The transformation $\tau_{\bm{\lambda}}(\cdot)$ is a function parameterized by a vector-valued parameter ${\bm{\lambda}}$. In this paper, we use the monotone spline transformation from \cite{Ramsay1988}, but any monotone transformation with a computable Jacobian would work as well. The monotone spline transformation is a polynomial spline built from an I-spline basis (see \cite{Ramsay1988} for more detail). For a given polynomial order $d$ and knot sequence $\mathbf{k}$, an I-spline basis consists of $B$ fixed, monotonically increasing basis functions $I_b(y \mid d, \mathbf{k})$, $b=1,\ldots,B$. The monotone spline transformation is then constructed as a linear combination of these basis functions: \begin{equation} \tau_{{\bm{\lambda}}}^{MS}(y) = \lambda_0 + \sum_{b=1}^B \lambda_b I_b(y \mid d, \mathbf{k}), \nonumber \end{equation} where $\lambda_0$ represents an intercept term and $\lambda_1, \dots, \lambda_B$ determine the shape of the transformation. We treat $\lambda_0$ as fixed, and define the transformation parameter ${\bm{\lambda}}$ as ${\bm{\lambda}} \equiv \begin{bmatrix} \lambda_1 \ \dots \ \lambda_B \end{bmatrix}^T$. Note that because each basis function $I_b(\cdot)$ is monotone increasing, constraining the parameters $\lambda_1, \dots, \lambda_B$ to be non-negative ensures that the resulting spline transformation is also monotone increasing. Also, the sum $\sum_{b=1}^B \lambda_b$ determines the range of the monotone spline transformation function, so we constrain it to equal a constant $c$ for identifiability. \subsection{Full Bayesian model} The full DLM with transformations is therefore as follows: \begin{align} p({\bm{\psi}}_t \mid {\bm{\theta}}_{t}, \sigma^2, {\bm{\lambda}}) &= N(\bar{X}_t {\bm{\theta}}_{t}, \sigma^2 I_{n_t}) \label{eq:likfin} \\ p({\bm{\theta}}_{t+1} \mid {\bm{\theta}}_{t}, \sigma^2, w) &= N({\bm{\theta}}_{t}, \sigma^2 w I_p), \label{eq:innovfin} \end{align} where we define ${\bm{\psi}}_t \equiv \tau_{\bm{\lambda}}(\tilde{\mathbf{y}}_t)$, the transformed, game-centered observations, suppressing dependence on ${\bm{\lambda}}$ to simplify notation. To complete the model specification, we specify prior distributions for ${\bm{\theta}}_1$, $\sigma^2$, $w$, and ${\bm{\lambda}}$: \begin{align} p({\bm{\theta}}_1 \mid \sigma^2, v_0) &= N(0, \sigma^2 v_0 I_p) \nonumber \\ p(\sigma^2) &= \text{Inv-Gamma}(a_0, b_0) \nonumber \\ p(w) &= \text{Half-Normal}(s_w^2) \nonumber\\ p({\bm{\lambda}}) &= c \cdot \text{Dirichlet}(\bm{\alpha}). \nonumbe \end{align} The hyperparameters $v_0$, $a_0$, $b_0$, $s_w$, $c$, and $\bm{\alpha}$ may be set to reflect prior beliefs about ${\bm{\theta}}_1$, $\sigma^2$, $w$, and ${\bm{\lambda}}$. In the absence of available prior information, we set $v_0 = 10$, $a_0 = b_0 = 0.1$, and $s_w = 1$ to keep the priors fairly uninformative about ${\bm{\theta}}_1$, $\sigma^2$, and $w$ \citep{gelman1995bayesian}. We set $\bm{\alpha}$ to be proportional to the $\lambda$ parameters corresponding to the identity transformation, but keep $\sum_{b=1}^B \alpha_b = 1$ to keep the prior diffuse. If more shrinkage toward the identity transformation is desired, $\sum_{b=1}^B \alpha_b$ can be set to a higher value. Finally, we set $\lambda_0$ to equal the lowest score in the data and $c$ to equal the range of the scores in the data so that the learned transformation roughly preserves the scale of the original data. The Dirichlet prior on ${\bm{\lambda}}$ constrains the transformation parameter ${\bm{\lambda}}$ to have nonnegative components that sum to $c$. In practice, we find that using a weakly regularizing unconstrained prior for ${\bm{\lambda}}$ often results in better performance: \begin{equation} p(\lambda_b) = N^{+}(\alpha_b, s^2_{\bm{\lambda}}) \text{ for } b = 1, \dots, B, \nonumber \end{equation} where $N^+$ indicates a normal distribution truncated below at 0. Theoretically, the shape of the monotone spline transformation is unidentifiable without a constraint on $\sum_{b=1}^B \lambda_b$; empirically, the mild regularization induced by the truncated normal priors effectively addresses these concerns. For this unconstrained model, we set $\bm{\alpha}$ equal to the $\lambda$ parameters corresponding to the identity transformation and control shrinkage toward the identity transformation using $s^2_{\bm{\lambda}}$. To allow maximum flexibility, we typically set $s^2_{\bm{\lambda}}$ to a large value. Figure \ref{fig:graph} displays the graphical model representing the relationships between the model parameters $\{{\bm{\theta}}_t\}_{t = 1, \dots, T}$, $\sigma^2$, $w$, and ${\bm{\lambda}}$ and the transformed data $\{{\bm{\psi}}_t\}_{t=1, \dots, T}$. \begin{figure}[ht] \centering \begin{tikzcd}[cells={nodes={draw=black, circle}}] & & & w \arrow[d] \arrow[rrd] \arrow[rrrrd] & & & & \\ & {\bm{\theta}}_1 \arrow[rr] \arrow[dd] & & {\bm{\theta}}_2 \arrow[rr] \arrow[dd] & & {\bm{\theta}}_3 \arrow[rr] \arrow[dd] & & \dots \arrow[dd] \\ \sigma^2 \arrow[ru] \arrow[rrru] \arrow[rrrrru] \arrow[rrrrrrru] \arrow[rd] \arrow[rrrd] \arrow[rrrrrd] \arrow[rrrrrrrd] & & & & & & & \\ & {\bm{\psi}}_1 & & {\bm{\psi}}_2 & & {\bm{\psi}}_3 & & \dots \\ & & & {\bm{\lambda}} \arrow[rrrru] \arrow[rru] \arrow[u] \arrow[llu] & & & & \end{tikzcd} \caption{Graphical model for DLM with transformations} \label{fig:graph} \end{figure} Figure \ref{fig:graph} displays the conditional independences implied by Equations \ref{eq:likfin} and \ref{eq:innovfin}, which allow us to factor the joint distribution of our untransformed data and model parameters as: \begin{align} &p(\mathbf{y}_{1:T}, {\bm{\theta}}_{1:T}, \sigma^2, w, {\bm{\lambda}}) = \nonumber \\ &J({\bm{\psi}}_{1:T} \to \mathbf{y}_{1:T}) \times p(\sigma^2) p(w) p({\bm{\lambda}}) \times \prod_{t=1}^{T} p({\bm{\psi}}_t \mid {\bm{\theta}}_t, \sigma^2, {\bm{\lambda}}) \times \prod_{t=1}^{T} p({\bm{\theta}}_t \mid {\bm{\theta}}_{t-1}, \sigma^2, w), \nonumber \end{align} where $J({\bm{\psi}}_{1:T} \to \mathbf{y}_{1:T})$ is the Jacobian of the inverse transformation, $\tau_{\bm{\lambda}}^{-1}(\cdot)$. The Jacobian term is vital to appropriately account for how the transformation rescales the data. For example, the Jacobian of the inverse monotone spline transformation is: \begin{equation} J^{MS}(\psi \to y) = \sum_{b=1}^B {\bm{\lambda}}_b M_b(y \mid d, \mathbf{k}). \nonumber \end{equation} The I-spline basis functions used for the monotone spline are constructed by integrating M-spline basis functions; here, $M_b(\cdot)$ are the M-spline basis functions corresponding to their respective I-splines. Figure \ref{fig:splines} displays the seven spline basis functions constructed for the biathlon relay training dataset we study in Section \ref{sec:results}. Figure \ref{fig:splines} shows how each I-spline basis function is constructed as the integral of an M-spline basis function. \begin{figure}[ht] \centering \includegraphics[width=\textwidth]{images/splines.png} \caption{Spline basis functions for biathlon relay training set} \label{fig:splines} \end{figure} \subsection{Head-to-head games} While the general multi-competitor setup introduced above can be used for games with any number of players, we introduce a slightly simpler setup for the special case of head-to-head games. In head-to-head games, it is generally more natural to consider monotone transformations of score differences, which have no direct analogues in the multi-competitor setting. Instead of modeling the $2g_t \times 1$ vector of athlete scores $\mathbf{y}_t$, we can model the $g_t \times 1$ vector of score differences $\mathbf{z}_t$, where we subtract the second athlete's score from the first athlete's score. The resulting model likelihood is: \begin{equation} p(\tau_{\bm{\lambda}}(\mathbf{z}_t) \mid {\bm{\theta}}_{t}, \sigma^2, {\bm{\lambda}}) = N(Z_t {\bm{\theta}}_{t}, \sigma^2 I_{n_t}). \nonumber \end{equation} Note that we have simply replaced the model matrix $\bar{X}_t$ with the model matrix $Z_t$, which is defined as a matrix with two nonzero entries per row: 1 in the column corresponding to the first athlete and -1 in the column corresponding to the second athlete. The observed score difference is taken as the first athlete's score minus the second athlete's score. \section{Model fitting} \label{sec:fitting} \subsection{Model-fitting procedure} We estimate the model parameters $({\bm{\theta}}_{1:T}, \sigma^2, w, {\bm{\lambda}})$ via a two-step procedure. In our application of interest, we would like to be able to quickly update athlete ratings (i.e., latent ability estimates) shortly after the results from a game. While we could estimate the full posterior distribution of all of the model parameters after each game, this may be a slow and computationally expensive process. Instead, we fit the model using the following procedure: \begin{enumerate} \item Estimate $w$ and ${\bm{\lambda}}$: using a training subset consisting of the first $T_{train}$ rating periods in the dataset, obtain estimates $\hat{w}$ and $\hat{{\bm{\lambda}}}$. \item Estimate ${\bm{\theta}}_{1:T}$ and $\sigma^2$: given $\hat{w}$ and $\hat{{\bm{\lambda}}}$, use the full dataset to obtain estimates $\hat{{\bm{\theta}}}_{1:T}$ and $\hat{\sigma}^2$. \end{enumerate} While step 1 is computationally expensive, step 2 can be accomplished quickly using standard Kalman filter updates, as will be described later in this section. When the results from a new game become available, we can just run step 2 using the previously learned values of $\hat{w}$ and $\hat{{\bm{\lambda}}}$ to quickly update ratings. The marginal posterior density of $w$ and ${\bm{\lambda}}$ can be expressed in closed form (up to a normalizing constant) as: \begin{align} \label{eq:objective} p(w, {\bm{\lambda}} &\mid \mathbf{y}_{1:T_{\text{train}}}) \nonumber \\ &\propto J({\bm{\psi}}_{1:T_{\text{train}}} \to \mathbf{y}_{1:T_{\text{train}}}) p(w) p({\bm{\lambda}}) \prod_{t=1}^{T_{\text{train}}} p({\bm{\psi}}_t \mid {\bm{\psi}}_{1:t-1}, w, {\bm{\lambda}}) \nonumber \end{align} for the priors on $w$ and ${\bm{\lambda}}$ and posterior predictive densities: \begin{equation} p({\bm{\psi}}_{t} \mid {\bm{\psi}}_{1:t-1}, w, {\bm{\lambda}}) = t_{2a_{t-1}}(\bar{X}_t \mathbf{m}_{t-1}, \frac{b_{t-1}}{a_{t-1}} [I_{n_t} + \bar{X}_t (V_{t-1} + w I_p) \bar{X}_t^T]), \nonumber \end{equation} where $\mathbf{m}_t$, $V_t$ $a_t$, and $b_t$ are computed using standard Kalman filter equations on the transformed data \citep{sarkka2013bayesian}: \begin{align} V_t &= ( (V_{t-1} + wI_p)^{-1} + \bar{X}_t^T \bar{X}_t)^{-1} \nonumber\\ \mathbf{m}_t &= V_t ((V_{t-1} + wI_p)^{-1} \mathbf{m}_{t-1} + \bar{X}_t^T {\bm{\psi}}_t) \nonumber\\ a_t &= a_{t-1} + \frac{1}{2} n_t \nonumber\\ b_t &= b_{t-1} + \frac{1}{2}[\mathbf{m}_{t-1}^T (V_{t-1} + wI_p)^{-1} \mathbf{m}_{t-1} + {\bm{\psi}}_t^T {\bm{\psi}}_t - \mathbf{m}_t^T V_t^{-1} \mathbf{m}_t] \nonumber \\ &= b_{t-1} + \frac{1}{2}({\bm{\psi}}_t - \bar{X}_t \mathbf{m}_{t-1})^T (I_{n_t} + \bar{X}_t (V_{t-1} + wI_p) \bar{X}_t^T)^{-1} ({\bm{\psi}}_t - \bar{X}_t \mathbf{m}_{t-1}). \nonumber \end{align} We can obtain samples of $w$ and ${\bm{\lambda}}$ from $p(w, {\bm{\lambda}} \mid \mathbf{y}_{1:T_{\text{train}}})$ using standard Markov Chain Monte Carlo (MCMC) methods. We implement our model in Stan, which uses Hamiltonian Monte Carlo. In practice, we can instead take a maximum a posteriori (MAP) approach using standard optimization routines to greatly reduce the computational burden. For most applications, we are only interested in estimating reasonable values for $w$ and ${\bm{\lambda}}$, not in conducting full Bayesian inference on them; as such, it often makes sense to simply find the posterior mode of $p(w, {\bm{\lambda}} \mid \mathbf{y}_{1:T_{\text{train}}})$. Importantly, integrating ${\bm{\theta}}_{1:T_\textbf{train}}$ and $\sigma^2$ out of the posterior instead of maximizing $p(w, {\bm{\lambda}}, {\bm{\theta}}_{1:T_\text{train}}, \sigma^2 \mid \mathbf{y}_{1:T_{\text{train}}})$ with respect to each parameter produces a better-informed posterior mode of $w$ and ${\bm{\lambda}}$. Any nonlinear optimization algorithm can be used to obtain MAP estimates. For the constrained optimization (with a Dirichlet prior on ${\bm{\lambda}}$), we choose to use the Augmented Lagrangian Adaptive Barrier Minimization Algorithm \citep{alabama}. For the unconstrained optimization (with normal priors on the $\lambda_b$ parameters), we find that the Nelder-Mead algorithm \citep{nelder1965simplex} generally gives the most stable results, though the L-BFGS algorithm \citep{liu1989limited} is much faster in practice. While these optimization-based approaches may theoretically get stuck in local modes, we find that in practice they produce reasonable and effective results. Given the MAP estimate $(\hat{w}, \hat{{\bm{\lambda}}})$, we can finally estimate the posterior distributions of $\{{\bm{\theta}}\}_{t=1, \dots, T}$ and $\sigma^2$ by transforming the full dataset using $t_{\hat{{\bm{\lambda}}}}(\cdot)$ and running the Kalman filter equations using $\hat{w}$. Full implementation details can be found in Appendix \ref{AppA}. \subsection{Smoothing} The Kalman filter equations produce estimates of the filtered latent ability parameter distributions $p({\bm{\theta}}_t \mid {\bm{\psi}}_{1:t}, \sigma^2, {\bm{\lambda}})$. In the second stage of the model-fitting procedure, we may also wish to calculate the smoothed latent ability parameter distributions $p({\bm{\theta}}_t \mid {\bm{\psi}}_{1:T}, \sigma^2, {\bm{\lambda}})$, where the full dataset informs each latent ability parameter estimate. The Rauch-Tung-Striebel smoother \citep{sarkka2013bayesian} provides a simple algorithm for doing so. We can compute the smoother updates as: \begin{align} p({\bm{\theta}}_t \mid {\bm{\psi}}_{1:T}, \sigma^2, {\bm{\lambda}}) &= N(\mathbf{m}_t^s, \sigma^2 V_t^s) \nonumber \end{align} for: \begin{align} \mathbf{m}_t^s &= \mathbf{m}_{t} + S_t (\mathbf{m}_{t+1}^s - \mathbf{m}_t) \nonumber \\ V_t^s &= V_t + S_t (V_{t+1}^s - V_t - wI) S_t^T \nonumber \end{align} for scaling matrix $S_t = V_t (V_t + wI)^{-1}$, where the $\mathbf{m}_t$ and $V_t$ values are the original $\mathbf{m}_t$ and $V_t$ values computed using the Kalman filter equations. The smoother updates are computed starting from rating period $t=T$ backwards to rating period $t=1$, starting from $\mathbf{m}_T^s = \mathbf{m}_T$ and $V_T^s = V_T$. \section{Empirical results} \label{sec:results} \subsection{USOPC athletic data} We illustrate our model on a variety of Olympic sport datasets provided to us by the US Olympic and Paralympic Committee. The data roughly span from 2004 to 2019 and include the score outcomes from selected national and international competitions. We briefly describe each dataset below. \paragraph*{Biathlon}{ The biathlon data come from the men's 20km individual biathlon and the men's $4 \times 7.5$ km relay. In the 20km biathlon, athletes ski a cross-country track between four rifle-shooting rounds. In each shooting round, they shoot at five targets. Each miss incurs a penalty, which may be extra time added or a penalty skiing lap, depending on the particular race's rules. The biathlon relay is similar, with two shooting rounds per relay leg. In both competitions, athletes compete to finish the race as quickly as possible, so we use each athlete's total time (in seconds) as their score. } \paragraph*{Diving}{ The diving data come from women's 3m springboard. In diving competitions, athletes receive scores for each of their dives in a round. After each round of diving, only the top-scoring athletes may qualify for the next round. Our dataset includes the total cumulative scores for athletes who compete in the final round, but only includes the relative rankings for athletes who are eliminated in earlier rounds. To try to make use of the full dataset, we naively impute scores for the eliminated athletes by assigning scores evenly spaced between zero and the minimum score in the final round, based the relative rankings. For example, if five athletes miss the final round, and the minimum score in the final round is 100, we would assign scores of 0, 20, 40, 60, and 80 to the five athletes, in order of their relative rankings.} \paragraph*{Fencing}{ The fencing data come from women's sabre fencing. In each bout, the first athlete to score fifteen points wins. We record the score difference between the two athletes as the outcome of each game. } \paragraph*{Rugby}{ The rugby data come from men's rugby sevens. We record the score difference between the two teams as the outcome of each game. } \subsection{Model fitting and validation} We fit our unconstrained model to the biathlon, biathlon relay, diving, fencing, and rugby datasets. For the multicompetitor sports (biathlon, biathlon relay, diving), we divide the sixteen years of data into six-month-long rating periods. The head-to-head sports (fencing, rugby) naturally have more games, so we divide them into roughly three-month-long rating periods. Choosing a shorter rating period allows more flexibility for athlete abilities to change, but reduces the number of games that can be used to infer abilities within the rating period. We conduct full MCMC (four chains, 1000 burn-in iterations, 1000 samples) as well as MAP estimation using the Nelder-Mead and L-BFGS optimization algorithms. To assess the convergence of our MCMC estimates of the posterior distributions of $w$ and ${\bm{\lambda}}$, we check trace plots and the $\hat{R}$ diagnostic for Hamiltonian Monte Carlo. Visual inspection of the trace plots does not suggest any evidence of non-convergence, and $\hat{R}$ is nearly equal to one for all model parameters. The Nelder-Mead and L-BFGS algorithms converge under default convergence tolerances. Before examining the DLM results, we first confirm that the MAP estimates produce similar results as full MCMC. Figure \ref{fig:transformations_comp} compares the posterior mean of the transformations learned using full MCMC to the transformations learned using MAP with the Nelder-Mead and L-BFGS algorithms. \begin{figure}[ht] \centering \includegraphics[width=\textwidth]{images/transformation_comparison.png} \caption{Comparison of algorithms for estimating ${\bm{\lambda}}$. Dotted line represents the identity transformation, $y=x$.} \label{fig:transformations_comp} \end{figure} We see that the learned transformations are essentially the same across all three algorithms for all five sports. The learned $w$ parameters (not shown) are also very similar. As a result, using MAP methods rather than full MCMC does not significantly impact model performance, while it can lead to significant speedups. For example, for the biathlon dataset (\textasciitilde 6000 observations from \textasciitilde 700 athletes over \textasciitilde 60 events), full MCMC took roughly 8 hours, but the Nelder-Mead optimization took only 30 minutes, and L-BFGS converged in less than one minute (all on a laptop with an Intel Core i7-8550U CPU). To simplify assessment, visualization, and discussion of our results, we will focus on the transformations estimated using full MCMC moving forward, though results are similar for the transformations estimated using MAP. Next, we assess the fit of the DLM on our transformed datasets. To do so, we use the first two-third rating periods as a training set to learn an appropriate transformation and leave the last one-third rating periods as a test set. We then visualize $({\bm{\psi}}_t - \bar{X}_t \mathbf{m}_{t-1})$, i.e., the one-step prediction residuals, on the test set. If the learned transformation is effective, the residuals should be approximately normally distributed. Figure \ref{fig:residuals} shows Q-Q plots of standardized test-set residuals against standard normal quantiles. \begin{figure}[ht] \centering \includegraphics[width=\textwidth]{images/residuals_mcmc_unc.png} \caption{Q-Q plots of test-set residuals compared to standard normal quantiles} \label{fig:residuals} \end{figure} While the residuals show some slight outliers at the extremes, they are largely normally distributed. The learned transformations thus appear to produce reasonable fits of the standard DLM to these datasets. \subsection{Case studies: biathlon and rugby} We use the biathlon and rugby datasets to illustrate the results from our model. For both datasets, we learn an unconstrained monotone spline transformation using MCMC, transform the dataset, and run the Kalman filter on the transformed data. Figure \ref{fig:ole} shows the smoothed latent ability point estimates for the 25 biathletes with the most biathlons entered. In the biathlon, low race times are better, so negative ability parameters indicate strong athletes. \begin{figure}[ht] \centering \includegraphics[width=\textwidth]{images/ole_vs_martin90.png} \caption{Smoothed estimated means of ability parameters of 25 biathletes with the most races entered, over time. 90\% central posterior interval shown for Ole Einar Bjørndalen and Martin Fourcade.} \label{fig:ole} \end{figure} Of the visualized latent ability trajectories, two stand out, and Figure \ref{fig:ole} additionally shows their 90\% confidence intervals. The trajectory in blue belongs to the ``King of Biathlon,'' Ole Einar Bjørndalen, the winningest biathlete of all time at the Olympics, Biathlon World Championships, and the Biathlon World Cup tour. Though the scope of our dataset does not include the start of his career in the 1990s, the model clearly notes his dominance in the early 2000s. The trajectory in red belongs to Martin Fourcade, who began serious international competition in 2008 and proceeded to put together a record-breaking string of seven overall World Cup titles in a row from 2011 to 2018. Bjørndalen and Fourcade are the two names considered in discussions of the greatest male biathlete of all time. Our model notes Fourcade's quick rise to prominence, projecting that he would begin to outperform Bjørndalen as early as 2009, though interestingly it never projects Fourcade's latent ability to exceed Bjørndalen's peak latent ability in 2005. Figure \ref{fig:rugby} shows the smoothed latent ability point estimates and corresponding 50\% posterior intervals for the national rugby teams of Fiji, New Zealand, and South Africa, three countries popularly known for their strong rugby teams. Here, more positive latent ability estimates represent stronger teams. \begin{figure}[ht] \centering \includegraphics[width=\textwidth]{images/rugby_comp.png} \caption{Smoothed estimated means of ability parameters and 50\% central posterior intervals shown for Fiji, New Zealand, and South Africa national teams.} \label{fig:rugby} \end{figure} While the three teams have similar estimated strengths during the time span of the data, we see that New Zealand enjoyed a brief stretch of relative dominance from roughly 2007-2013. \subsection{Model results} Another result of interest is the monotone spline transformation learned by the DLM with transformations. Figure \ref{fig:transformations} displays 100 posterior transformation samples for each of the five sports, under the unconstrained optimization. \begin{figure}[ht] \centering \includegraphics[width=\textwidth]{images/transformations_mcmc.png} \caption{Transformations learned by DLM with transformations} \label{fig:transformations} \end{figure} The transformations generally reflect conventional wisdom about scores from these sports. For example, we sometimes see very slow race times in the biathlon and biathlon relay, which occur when an athlete makes a few shooting mistakes and takes penalties. We expect to see a negative feedback loop in the biathlon where taking penalties causes athletes to get more frustrated or tired from penalty laps, which causes more penalties. This means that extremely slow race times may not reflect extremely poor skill. The learned transformation shrinks the very slow race times to be less extreme, which intuitively helps to make them better reflect athlete skill in the normal DLM. On the other hand, the transformation learned for fencing magnifies extreme score differences. In a fencing bout, points are scored one-at-a-time; after each touch (i.e., point scored), the fencers reset to their starting positions. This makes very one-sided matches relatively rare, since even outmatched fencers can typically score some number of lucky points in a match to fifteen points. The transformation notes this and indicates that when a fencer wins by many points, they are much stronger than their opponent, even more so than the large score gap may suggest. Table \ref{tab:wsig} shows the posterior means of the $\sqrt{w}$ and $\sigma$ parameters for each of the five sports. Recall that $\sigma^2$ represents the observation variance and $w$ represents the ratio of the innovation variance to the observation variance, so $\sigma$ and $\sigma \cdot \sqrt{w}$ would represent the observation and innovation standard deviations, respectively. For example, in the biathlon data, the model estimates that the innovation standard deviation is approximately 28\% as large as the observation standard deviation of 114.7. Note that the $\sigma$ values are on the scale of the transformed data, rather than the original scale of the data. \begin{table}[ht] \centering \begin{tabular}{|l|l|l|} \hline \rowcolor[HTML]{C0C0C0} Sport & $\sqrt{w}$ & $\sigma$ \\ \hline Biathlon & 0.28 & 114.7 \\ \hline Biathlon relay & 0.26 & 79.0 \\ \hline Diving & 0.24 & 58.5 \\ \hline Fencing & 0.07 & 3.2 \\ \hline Rugby & 0.18 & 14.7 \\ \hline \end{tabular} \label{tab:wsig} \end{table} \subsection{Comparing rating methods} To evaluate the DLM with transformations, we compared the accuracy of its predictions to predictions made by other models for multi-competitor and head-to-head athlete rating. For multi-competitor sports, we compared the DLM with transformations (LM-T) to the DLM without transformations (LM) and the dynamic rank-order logit model (ROL) from \citet{glickman2015stochastic}. For head-to-head sports, we compare to the Glicko rating system \citep[GLO;][]{glickman1999parameter}. We use the first two-third rating periods in each dataset as a training set to tune the model hyperparameters $w$ and ${\bm{\lambda}}$, fit the model on the full dataset, and finally evaluate its predictions for the test set (i.e., the last one-third rating periods). For multi-competitor games, we evaluate predictions using the Spearman correlations between the observed and predicted athlete rankings in each game. This approach was taken in \citet{glickman2015stochastic} to evaluate predictability on a test set. We summarize these game correlations $\rho_{tg}$ over the test set using a game-size-weighted average \citep{glickman2015stochastic}: \begin{equation} \rho = \frac{\sum_{t=\lceil\frac{2}{3}T\rceil}^{T} \sum_{g=1}^{g_t} (n_{tg} - 1) \rho_{gt}}{\sum_{t=\lceil\frac{2}{3}T\rceil}^{T} \sum_{g=1}^{g_t} (n_{tg} - 1)}. \label{eq:wtcor} \end{equation} Note that we limit ourselves to using rank-based metrics to facilitate comparison with the ROL model, which predicts athlete ranking probabilities rather than scores. For head-to-head games, we evaluate predictions using the average accuracy of winner predictions in the test set. Tables \ref{tab:multi} and \ref{tab:h2h} show the weighted Spearman correlation (Equation \ref{eq:wtcor}) of the ranking predictions for the multi-competitor sports and the accuracy of winner/loser predictions for the head-to-head sports. \begin{table}[ht] \centering \begin{tabular}{|c|ccc|} \hline \multicolumn{1}{|l|}{} & \multicolumn{3}{c|}{\cellcolor[HTML]{EC8F9C}Model} \\ \hline \rowcolor[HTML]{E9E5DC} \cellcolor[HTML]{EC8F9C}Sport & \multicolumn{1}{c|}{\cellcolor[HTML]{E9E5DC}LM-T} & \multicolumn{1}{c|}{\cellcolor[HTML]{E9E5DC}LM} & ROL \\ \hline \cellcolor[HTML]{E9E5DC}Biathlon & \multicolumn{1}{c|}{.64} & \multicolumn{1}{c|}{.61} & .61 \\ \hline \cellcolor[HTML]{E9E5DC}Biathlon Relay & \multicolumn{1}{c|}{.77} & \multicolumn{1}{c|}{.75} & .75 \\ \hline \cellcolor[HTML]{E9E5DC}Diving & \multicolumn{1}{c|}{.64} & \multicolumn{1}{c|}{.62} & .61 \\ \hline \end{tabular} \caption{Weighted spearman correlations of predictions \label{tab:multi}} \end{table} \begin{table}[ht] \centering \begin{tabular}{|c|ccc|} \hline \multicolumn{1}{|l|}{} & \multicolumn{3}{c|}{\cellcolor[HTML]{EC8F9C}Model} \\ \hline \rowcolor[HTML]{E9E5DC} \cellcolor[HTML]{EC8F9C}Sport & \multicolumn{1}{c|}{\cellcolor[HTML]{E9E5DC}LM-T} & \multicolumn{1}{c|}{\cellcolor[HTML]{E9E5DC}LM} & GLO \\ \hline \cellcolor[HTML]{E9E5DC}Fencing & \multicolumn{1}{c|}{.70} & \multicolumn{1}{c|}{.67} & .68 \\ \hline \cellcolor[HTML]{E9E5DC}Rugby & \multicolumn{1}{c|}{.72} & \multicolumn{1}{c|}{.72} & .70 \\ \hline \end{tabular} \caption{Accuracy of winner predictions \label{tab:h2h}} \end{table} Across the five datasets, we see some evidence that the LM-T model outperforms the other models in terms of predictive performance. While it is difficult to generally evaluate the relative empirical performance of the LM-T model with only five datasets, we see that it improves predictive accuracy across nearly all of the datasets. The only exception is the rugby dataset, where the LM-T model essentially learns an identity transformation, so we would not expect it to outperform the LM model. The results of athletic competitions are generally challenging to predict; small improvements in predictive performance can therefore be fairly valuable for generating a competitive edge. \section{Discussion} In this paper, we introduce a novel model to rate athletes who compete in head-to-head and multi-competitor sports with score outcomes. Using observed scores rather than rankings to rate athletes provides additional information, which generally improves predictions in the settings we consider. We can fit the model either using MCMC or an MAP approach, both which utilize the computational efficiency of the Kalman filter to learn an appropriate transformation to apply to the score outcomes. The simple normal DLM at the core of our model makes a variety of extensions possible. For example, we choose to use a normal random walk as the innovation process for athletes' latent abilities, which may easily be replaced by alternative innovation processes, such as a mean-preserving random walk \citep{glickman2015stochastic} or an autoregressive process \citep{glickman1998state}. Also, external covariates related to athletes (e.g., height, age, experience), events (e.g., weather conditions), and/or other factors may be assigned fixed or time-varying coefficients and straightforwardly incorporated into the normal likelihood and innovation equations. If transforming outcomes to normality is infeasible in a particular setting, the normal likelihood could be also extended to the likelihood of a generalized linear model \citep{west1985dynamic}. While we focus on athlete rating, our model may be used for a wide range of different problems. Dynamic linear models are very popular for analyzing time series data in fields ranging from economics and finance to health and ecology. In many of these applications, the normal likelihood in a standard normal DLM may be misspecified, which can be addressed by learning an order-preserving monotone transformation using the model introduced in this paper. The problem of athlete rating has interested organizations and individuals alike for many years. Appropriately using the information contained in game scores is an important but challenging task, due to the unusual features of score information in different games. This paper provides a general method to address these challenges, which can be applied to a wide range of multi-competitor and head-to-head games. \section*{Acknowledgements} We thank Dan Webb at the U.S. Olympic and Paralympic Committee for providing the data for this work. This research was supported in part by a research contract from the U.S. Olympic and Paralympic Committee, and by the National Science Foundation Graduate Research Fellowship Program under Grant No. DGE1745303. \bibliographystyle{imsart-nameyear}
{ "attr-fineweb-edu": 2.302734, "attr-cc_en_topic": 0, "domain": "arxiv" }
BkiUb6bxK02iP1lCXPS0
\section{Introduction} At sports tournaments, players and teams are usually expected to give their best in each game they play. However, they might withhold effort for strategic reasons: for example, a team might let weaker team members play a match they would probably lose anyway, so their strong team members are well rested when playing a more important match. A considerably more obscure scenario happened at the Olympic Games 2012: two badminton teams were both trying to lose a group-stage match, because the loser would meet a weaker team at the upcoming elimination match \cite{greene2012olympic}. \subsection{Gaming the Swiss System}\label{sec:swiss_gambit} The \emph{Swiss-system} tournament format is widely used in competitive games like most e-sports, badminton, and chess, the last of which this paper focuses on. In such tournaments, the number of rounds is predefined, while the pairing of players in each of these rounds depends on the results of previous rounds. In each round, players of similar score are paired against each other. Therefore, a weaker performance generally leads to weaker opponents. These rules provide an incentive to intentionally draw or even lose a match early in a tournament to subsequently play against weaker opponents and finish the tournament at a better rank than if that early match would have been won. This strategic move is colloquially called a \textit{Swiss Gambit}, referring to a gambit in chess, which means sacrificing a piece in order to gain an advantage~\cite{wikipedia2019gambit}. However, even though some observations based on intuition or minimalistic simulations, such as ``this strategy [\ldots] is just as likely to backfire as succeed''~\cite{Ead16} has been made, to the best of our knowledge, no research study has been conducted on the Swiss Gambit, despite the fact that it is well-known among competitive chess players~\cite{Ead16,Gen21,Gua21}, and the lack of ``serious research'' has been pointed out in their online discussions~\cite{Quo21,Red21,Sta21}. Intentionally losing a match to gain an advantage in a tournament is a highly controversial strategy, which is generally considered unethical or even cheating. In 2019, the five-time ex-world champion Vishy Anand has been tagged by punters, pundits, and commentators alike as having ``played the gambit'', when staging a remarkable recovery following his shock opening-round upset result at the Grand Swiss tournament on the Isle of Man~\cite{Hen21}. \subsection{Related Literature} The Swiss system regularly evokes interest in the AI and Economics communities~\citep{Van13,HAH16,BXH+18, LG22,SBC22,FCL22}. The works of \citet{Csato13,Csato17,Csato21} study the ranking quality of real-world Swiss-system tournaments, in particular, whether a fairer ranking could have been obtained by different scoring rules. The importance of winning or losing compared to drawing a lot of games was highlighted by~\citet{Bil06}. Computing player pairings at Swiss-system chess tournaments is also a popular topic. Automated matching approaches are proposed by \citet{glickman2005adaptive}, while \citet{kujansuu1999stable} use the stable roommates problem, see \cite{irving1985efficient}, to model a Swiss-system tournament pairing decision. \citet{olafsson1990weighted} and \citet{BFP17} attempt to implement the official FIDE criteria as accurately as possible. \citet{FCL22} propose a new approach to derive fair pairings at tournaments and they analyze the obtained ranking quality. Sports tournaments are by far not the only application area of the Swiss system. Self-organizing systems~\cite{FW09}, person identification using AI methods~\cite{WTW+15}, and choosing the best-fitting head-related transfer functions for a natural auditory perception in virtual reality~\cite{OVF19} all rely on the Swiss system as a solution concept. \subsection{Structure of the Paper and Our Contribution} In Section~\ref{sec:background}, we describe the Swiss system used at official chess tournaments organized by the International Chess Federation (FIDE). Then, in Section~\ref{sec:models} we introduce our two models to capture a tournament. Gambit heuristics to exploit these models are then discussed in Section~\ref{sec:identifying_gambit_possibilities}. We introduce how we measure the impact of gambits in Section~\ref{sec:measuring_the_impact} and describe our simulation settings in Section~\ref{sec:gambit_simulation_settings}. Finally, the results of our simulations are presented in Section~\ref{sec:gambit_simulation_results}. Our key insights on the impact of gambits are: \begin{enumerate} \item Gambits are possible even in very small tournaments (Section~\ref{sec:background}). \item There is an effective gambit heuristic even if match results can only be approximated (Section~\ref{sec:gambit_heuristics_in_the_probabilistic_model}). \item The gambit player must be able to estimate match results very accurately in order to identify a gambit possibility (Section~\ref{sec:number_of_gambit_possibilities}). \item Even with a successful gambit, the expected rank improvement is small. (Section~\ref{sec:mean_rank_difference}). \item Gambits are more likely to succeed if the players' strengths span a large range or if the tournament is long (Section~\ref{sec:total_rank_difference}). \item The impact of gambits on the ranking quality of all players is low in general (Section~\ref{sec:impact_of_gambits_on_ranking_quality}). \end{enumerate} All in all, our research establishes that even though the Swiss Gambit might lead to a higher rank in tournaments, under realistic conditions it cannot be used to reliably improve a player's rank or to derive a a largely false final ranking. \section{The Swiss System in Chess}\label{sec:background} \textit{Players} are entities participating in a Swiss-system tournament. Each player has an Elo rating, which is a measure designed to capture her current playing \emph{strength} from the outcome of her earlier matches \cite{elo1978rating}. In a \textit{match} two players, $a$ and $b$, play against each other. The three possible \textit{match results} are: $a$ wins and $b$ loses, $a$ and $b$ draw, $a$ loses and $b$ wins. The winner receives 1 point, the loser 0 points, while a draw is worth 0.5 points. A Swiss-system tournament consists of multiple \textit{rounds}, each of which is defined by a \textit{pairing}: a set of disjoint pairs of players, where each pair plays a match. At the end of the tournament, a strict ranking of the players is derived from the match results. \paragraph*{Computing the Pairing in Each Round} \label{sec:bbp_engine} A \textit{pairing engine} calculates the pairing of players for each round, based on the results of previous rounds. The pairing must adhere to specific rules, such as no two players play against each other more than once in the tournament, the number of games played with black and white pieces is balanced for each player, and opponents playing a match should have equal or similar score. The open-source and state-of-the-art pairing engine \emph{Dutch BBP}, developed by \citet{bierema2017bbp}, is endorsed by the FIDE~\cite[C.04.A.10. Annex-3]{fide2020handbook}. It implements the voluminous FIDE pairing criteria strictly \cite[C.04.3 and C.04.4.2]{fide2020handbook} for the so-called Dutch pairing system and outputs the unique pairing adhering to them. In all our experiments we use Dutch BBP for pairing the players in each round. Readers interested in the details of the Dutch pairing system can consult Appendix~\ref{app:dutch}. See Figure~\ref{fig:no_gambit} for an illustration of an example Swiss system chess tournament with its respective pairings for each round. This example is completed by the example of a successful Swiss Gambit depicted in Figure~\ref{fig:gambit}. \begin{figure}[ht] \centering \includegraphics[width=0.6\linewidth]{example_no_gambit} \caption{Example of a 3-round Swiss-system chess tournament paired with the Dutch pairing system. The players $A,B,C,D,E,F,G,H$ are labeled according to their strength, with player $A$ being the strongest. Arcs indicate the matches of the respective rounds. Arrows point from winner to loser while undirected arcs indicate a draw. The initial score and the color distribution ($\bullet$ for black and $\circ$ for white) in a round are shown in each column. The final ranking and final scores are shown on the right. All players play truthfully.} \label{fig:no_gambit} \end{figure} \paragraph*{Computing the Final Ranking} The major organizing principle for the final ranking of players is obviously the final score. Players with the same final score are sorted by tiebreakers. The FIDE \cite[Chapter C.02.13]{fide2020handbook} defines 14 types of tiebreakers, and the tournament organizer lists some of them to be used at the specific tournament. If all tiebreaks fail, the tie is required to be broken by drawing of lots. The tiebreakers we use for obtaining the final tournament ranking are based on the current FIDE recommendation \cite[C.02.13.16.5]{fide2020handbook}. \begin{figure}[ht] \centering \includegraphics[width=0.6\linewidth]{example_gambit} \caption{The tournament from Figure~\ref{fig:no_gambit} with player $D$ intentionally losing her first match. This is a successful Swiss Gambit since player $D$ gains one rank and one point compared to playing truthfully as in Figure~\ref{fig:no_gambit}.} \label{fig:gambit} \end{figure} \section{Our Models} \label{sec:models} We work on two models, a deterministic and a probabilistic one. In the latter more sophisticated model, the results of the individual matches are computed via a probabilistic calculation that is designed to be as realistic as possible. Due to the probabilistic setting, starting with the same set of players, the final ranking might be different in different runs. This is not possible in the deterministic model, where match results can be reliably calculated from the players' Elo rating and their color assignment. \subsection{Probabilistic Model}\label{sec:probabilistic_model} In the probabilistic model, match results are drawn at random from a suitably chosen probability distribution based on the players' Elo rating and on the assigned colors for the respective matches. In order to be able to derive the most realistic results, we use the probability distribution proposed by \citet{milvang2016prob}, which was featured in a recent news article of the FIDE commission System of Pairings and Programs~\cite{fide2020news}. Milvang's probability distribution was engineered via a Data Science approach that used real-world data from almost 4 million real chess matches from 50\,000 tournaments. According to Milvang's approach, the probability of a certain match outcome depends on the Elo rating of the involved players. Draw probability increases with mean Elo rating of the players. The probabilities also depend on colors, as the player playing with white pieces has an advantage. See Table~\ref{tab:example_probabilities} for some example values drawn from Milvang's distribution. \begin{table}[ht] \centering \begin{tabular}{rccc} Player Elo ratings & Win White & Win Black & Draw\\ 1200 (w) vs 1400 (b) & 26 \%& 57 \%& 17 \%\\ 2200 (w) vs 2400 (b) & 14 \%& 55 \%&31 \%\\ 2400 (w) vs 2200 (b) & 63 \%& 11 \%& 26 \% \end{tabular} \caption{Example match outcome probabilities drawn from Milvang's probability distribution \cite{milvang2016prob}. } \label{tab:example_probabilities} \end{table} \subsection{Deterministic Model}\label{sec:deterministic_model} The deterministic model is a variant of the probabilistic model: the same probabilities are calculated, but instead of drawing match results randomly, the match ends in a draw if the probability for a draw is at least 20\%, otherwise the player with higher Elo rating wins. The threshold of 20\% was chosen so that for a strength range size of 800 the relative number of draws in the probabilistic and deterministic model is equal. \section{Identifying Gambit Possibilities}\label{sec:identifying_gambit_possibilities} As usual in game theory, we assume that all players but one, who is called the \textit{gambit player}, are trying their best to win each game they play. Furthermore, the gambit player performs exactly one gambit. We assume that the decision whether to perform a gambit or not, is made after all other matches of that round are finished. In a real-world tournament, the gambit player could deliberately prolong her match until all other matches are finished, so she can make the gambit decision based on more information. A \textit{course of a tournament}, just called a \textit{course} in the following, includes the pairings and match results of the rounds that have already been paired/played. For a \textit{prefix} course, some rounds are still left to be paired/played, while a \textit{complete} course includes the pairings and match results of all rounds. A player's \textit{expected final rank} is the expected value of her final rank, given the prefix course of the tournament that was already played. The expected final rank of player $p$ is calculated as follows. \[ \mathbb{E}\left(\text{rank}(p, c')\right) = \sum_{c \in C(c')} \text{probability}(c) \cdot \text{rank}(p, c)\] In the formula, $c'$ is a prefix course which already includes player $p$'s match result for the current round, $C(c')$ is the set of all possible completed courses given the prefix course $c'$, and rank$(p, c)$ is the rank of player $p$ at the end of the completed course~$c$. In words, the expected final rank after a decision to gamble or not is calculated as the probability of each completed course times the final rank of $p$ at the end of this completed course, summed up over all completed courses that are still possible after the decision. Naturally, the gambit player is required to have multiple so-called \textit{match result options} to choose from: if she would normally win a match, then she can choose to win, draw or lose the match. If a match would normally end in a draw, we assume that the gambit player can choose between draw and loss. The match result without gambit is called the \textit{actual match result}, while other match result options are called \textit{gambit match result options}. The goal of a \textit{gambit heuristic} is to select the match result option that has the best expected final rank. If this differs from the actual match result option, then we say that the gambit is \textit{beneficial}. A prefix course is a \textit{gambit possibility} for a player, if she could improve her expected final rank by performing a Swiss Gambit in her match in the current round according to a gambit heuristic. \subsection{Gambit Heuristic in the Deterministic Model}\label{sec:gambit_heuristic_in_the_deterministic_model} In the deterministic model, see Section~\ref{sec:deterministic_model}, a gambit possibility can be identified as follows: for each match result option, simulate the rest of the tournament once. Choose the match result option which led to the best rank. Since the model is deterministic, the result from the simulation will be the same as the actual final result. For each match result option, only one tournament has to be simulated, so the computational cost is very low. This gambit heuristic is optimal, because it always identifies the match result option with the best expected final rank correctly. \subsection{Gambit Heuristic in the Probabilistic Model}\label{sec:gambit_heuristics_in_the_probabilistic_model} Identifying gambit possibilities in the probabilistic model, see Section~\ref{sec:probabilistic_model}, is far more difficult compared to the deterministic model. The na\"ive approach would be to calculate the expected final rank for each match result option. However, the number of possible courses grows exponentially with the number of players and rounds, which makes it infeasible to calculate the probabilities and ranks for all possible courses. Instead, we sample the set of all possible courses of the given tournament by simulating the tournament several times. Consider a player $p$ winning a match in round~$x$. In this situation $p$ could intentionally lose instead of winning. We first simulate the rest of the tournament $n$ times assuming that $p$ wins in round $x$ and we record $p$'s final rank each time. This array of final ranks is the sample where ``win'' is the actual match result. Then we generate the sample where ``lose'' is the gambit match result in the exact same way, except that $p$ now loses in round~$x$. Based on these two samples, a gambit decision can be made in a variety of ways. We call these ways gambit heuristics. One could e.g.\ compare the means or medians of the samples. In our simulations, the \textit{$p$-value heuristic} proved to be the most successful. This heuristic only suggests to do the gambit if the final ranks in the gambit sample are significantly better than in the actual match result sample. Using the samples for the gambit match result option and the actual match result, we calculate a $p$-value using Welch's one-tailed $t$-test and reject the null hypothesis ``a gambit does not improve the expected final rank of the respective player'' if $p < 0.05$. Our samples do not have equal variances, so we use Welch's $t$-test~\cite{MS92}, which is an adaptation of Student's $t$-test~\cite{Stu08} intended for two samples having possibly unequal variances. Both tests measure the mean equality for two groups. \paragraph{Further Gambit Heuristics in the Probabilistic Model} The expected final rank in the probabilistic model can be approximated from the computed samples, e.g., by calculating the mean or the median of the final ranks of the samples. A larger set of samples allows more precise approximation, but it also requires more computational power. The corresponding gambit heuristic is called \textit{mean heuristic} or \textit{median heuristic}, depending on the aggregation. Another gambit heuristic is based on expected values as match results: for each match result, simulate the rest of the tournament once and record the gambit player's final rank. However, instead of randomly drawing match results from a probability distribution based on the players' Elo rating, both players get fractional points according to their expected values, which are calculated from their Elo rating. Assume player $a$ plays a match against player $b$, and $a$ is slightly stronger than $b$. Then $a$ might have a score of 0.52 while $b$ might have a score of 0.48 after the match. This gambit heuristic is called \textit{expected value heuristic} and it returns the match result option which led to the best final rank. In contrast to the previous approach, this approach requires very little computational power, since only one tournament simulation is needed for each match result option. We implemented and simulated both of the above described gambit heuristics for the probabilistic model, but results were not convincing. In contrast to the $p$-value heuristic from Section~\ref{sec:gambit_heuristics_in_the_probabilistic_model}, gambit players actually lost ranks on average when using one of the other heuristics. \section{Measuring the Impact of Gambits} \label{sec:measuring_the_impact} To quantify the impact of gambits on a specific tournament, we consider four measures: the number of gambit possibilities, the mean rank difference, the total rank difference, and the ranking quality. In this section we define and illustrate these measures using the example from Figures~\ref{fig:no_gambit} and~\ref{fig:gambit}. \subsection{Number of Gambit Possibilities} The \textit{number of gambit possibilities} is the total number of times the employed gambit heuristic came to the conclusion that a gambit is beneficial. Naturally, a high number of gambit possibilities means that the system can often be gamed. One gambit possibility is shown in Figure~\ref{fig:gambit}. In the deterministic model, where match results can be predicted, the tournament in Figure~\ref{fig:no_gambit} offers no further gambit possibility. \subsection{Rank Difference} The \textit{rank difference} of a gambit possibility is the gambit player's final rank in the simulation with a gambit minus her rank in the simulation without a gambit. A rank difference of -2 means that the gambit player improves by two ranks, e.g., she finishes at rank 4 without a gambit and ends up at rank 2 with a gambit. A tournament's \textit{mean rank difference} is the mean of all the rank differences of all gambit possibilities. A mean rank difference of -1.4 means that in an average gambit possibility, the gambit player will improve by 1.4 ranks. However, since the number of gambit possibilities is not taken into account, this measure can be misleading: in a tournament with a single gambit possibility with rank difference -10, the mean rank difference will also be -10 for the whole tournament, even though gambits are virtually never beneficial. As the only gambit possibility leads to the gambit player improving her rank by 1 in the example in Figure~\ref{fig:gambit}, the mean rank difference is~-1. A tournament's \textit{total rank difference} is the sum of all the rank differences of all gambit possibilities. Strongly negative values indicate that players are strongly incentivized to perform a gambit. The total rank difference is equal to the number of gambit possibilities multiplied by the mean rank difference, so it provides the best big-picture indicator for how much players are incentivized to gambit in a given tournament. As the only gambit possibility leads to the gambit player improving her rank by 1 in the example in Figure~\ref{fig:gambit}, the total rank difference is~-1. The mean rank difference can be misleading at times, for example when very few gambits are possible. However, on other classes of instances also the total rank difference can be misleading, for example if many insignificant improvements can be made. Hence, we analyze both measures. As we will see, the obtained insights are indeed similar, which proves that even though both metrics can be misleading, our results are not due to those misleading instances. \subsection{Ranking Quality} The ranking quality measures how similar the tournament's final ranking is to the \emph{ground-truth ranking}, which sorts the players by their Elo rating. The most popular measure for the similarity of two rankings is presumably the Kendall $\tau$ distance \cite{kendall1945treatment}. It counts the number of discordant pairs: these are pairs of elements $x$ and $y$, where $x < y$ in one ranking, but $y < x$ in the other ranking. We use its normalized variant, where $\tau \in [-1, 1]$, and $\tau = 1$ means that the rankings are identical, while $\tau = -1$ means that one ranking is the inverse of the other ranking. A higher Kendall $\tau$ distance is better, because it indicates a larger degree of similarity between the ground-truth and the output ranking. A gambit possibility's \textit{Kendall $\tau$ difference} is the difference of two Kendall $\tau$ distances from the ground-truth ranking. It is calculated by subtracting the Kendall~$\tau$ distance of the final ranking without the gambit and the ground-truth ranking from the Kendall~$\tau$ distance of the final ranking with the gambit and the ground-truth ranking. A positive Kendall~$\tau$ difference means that the ranking with gambit is closer to the ground-truth ranking than the obtained ranking without gambit. The \textit{mean Kendall~$\tau$ difference} is the mean of the Kendall~$\tau$ differences of all gambit possibilities. It indicates how much the ranking quality is changed by gambits. A mean Kendall~$\tau$ difference of zero means that gambits have no effect on the ranking quality. A positive mean Kendall~$\tau$ difference means that gambits improve ranking quality, which indicates a poor ranking quality of the tournament format in general. A negative mean Kendall~$\tau$ difference would be expected, because the gambit player misuses her actual Elo rating. However, the rank of the gambit player has only a small effect on ranking quality, so a strongly negative mean Kendall~$\tau$ difference mostly indicates that the non-gambit players are ranked poorly due to the gambit. The final ranking in Figure~\ref{fig:no_gambit} contains only one discordant pair, and therefore is of normalized Kendall~$\tau$ distance $1 -\frac{2\cdot 1}{21}$ from the ground-truth ranking. The final ranking with the gambit, depicted in Figure~\ref{fig:gambit}, contains four discordant pairs, and is of normalized Kendall~$\tau$ distance of $1 - \frac{2\cdot 4}{21}$ from the ground-truth ranking. Therefore, the Kendall~$\tau$ difference of the gambit possibility is $\left(1 -\frac{2\cdot 4}{21} \right) - \left(1 -\frac{2\cdot 1}{21} \right) = -\frac{2}{7}$. As the tournament only admits one gambit possibility in total, it is also the mean Kendall~$\tau$ difference. \section{Experimental Setup}\label{sec:gambit_simulation_settings} We present the details of our agent-based simulations. \subsection{Simulation Parameters} We ran our simulations with the optimal gambit heuristic described in Section~\ref{sec:gambit_heuristic_in_the_deterministic_model} for the deterministic model, and with the $p$-value heuristic from Section~\ref{sec:gambit_heuristics_in_the_probabilistic_model} for the probabilistic model. The following parameters were used, unless stated otherwise: \begin{itemize} \setlength\itemsep{0mm} \item number of players: 32 \item number of rounds: 5 \item number of tournaments: 1000 \item probabilistic model sample size: 200 \item strength range: 1000--2600 \end{itemize} Most non-professional tournaments take place on one day only and the pool of players is rather diverse. The above values were chosen to be as realistic as possible for such an event, based on parameters of more than 320\,000 real-world tournaments uploaded to the website \url{chess-results.com}.\footnote{The data was kindly provided by Heinz Herzog, author of the FIDE-endorsed tournament manager \url{Swiss-Manager} \citep{herzog2020swiss} and \url{chess-results.com} \citep{herzog2020chess}.} To draw a complete picture, we also tested our models on tournaments with more rounds and a smaller strength range for comparison---see the plots in Section~\ref{sec:gambit_simulation_results}. Players were sampled uniformly at random from the strength range. For each given tournament, we first used the Dutch BBP engine to pair the players as the rules of the FIDE prescribe. The results of each match were then calculated according to the rules of the deterministic or the probabilistic model. The lead us to the final ranking. We remind the reader that the final ranking might be different for different runs of the same tournament in the probabilistic model. \subsection{Computational Load} In order to identify all gambit possibilities and measure their effect, we reran the tournament several times, always assuming that a gambit is performed in a chosen match. If the match has a winner, then this player has three match result options: win, draw, and lose, which results in 600 tournament simulations in total. For a match that ends in a draw, 800 tournament simulations are required, as both players can decide between draw and loss. The prefix course consists of all other matches in the current round and all matches in previous rounds. We forwent calculating gambit heuristics in the last two rounds, because gambits are never beneficial in the last round and almost never in the second to last round. Therefore, a tournament with 32 players and 5 rounds consists of a total of $\frac{32}{2} \cdot (5-2) = 48$ matches in which the gambit heuristic is calculated. This means that for a single complete tournament simulation with all gambit possibilities, at least $48 \cdot 600 = 28\,800$ prefix tournament simulations are needed. However, these are not full tournament simulations, since after a gambit, only the remainder of the tournament needs to be simulated. In the probabilistic model, each match result option (with a prefix tournament) was completed to a full tournament 200 times. The choice of sample size 200 for the probabilistic model is based on initial experiments with sample size 50 and sample size 100. For these values, we observed very similar results that are in line with the results we present here. We chose sample size 200 since this was the maximum computational load our compute server could handle. Given our observations with different sample sizes, we expect that larger sample sizes yield very similar results but at an extraordinarily high computational cost. Thus, our choice seems to be sufficient for observing what can be observed. Hence, a complete simulation including gambit possibilities is very expensive in terms of the required computing power, especially in the probabilistic model. This is why we only simulate 1000 tournaments with 5 rounds each. The experiments were run on a computer server using version 20.04.1 of the Ubuntu operating system. It is powered by 48 Intel Xeon Gold 5118 CPUs running at 2.3 GHz and 62.4 GiB of RAM. With this server, simulating 1000 tournaments with 5 rounds each took approximately 100 seconds in the deterministic and 250 minutes in the probabilistic model. \subsection{Presentation of the Results} We ran experiments in the deterministic and the probabilistic models. We discuss the effect of gambits in Swiss-system chess tournaments consisting of few or many rounds, and having a narrow or wide player strength range. Data is presented in the form of \textit{violin plots} \cite{hintze1998violin} and via \textit{boxen plots}, which were invented by Heike et al.~\cite{heike2017letter}. For violin plots, kernel density estimation is used to show a smoothed probability density function of the underlying distribution. Additionally, similar to box plots, quartiles are shown by dashed lines. Boxen plots are enhanced box plots that show more quantiles. Unlike violin plots, they are suitable for discrete values, because all shown values are actual observations and there is no smoothing. \section{Simulation Results}\label{sec:gambit_simulation_results} In this section, we consider our four measures of gambit impact and elaborate on the obtained simulation results. \subsection{Number of Gambit Possibilities} \label{sec:number_of_gambit_possibilities} First we ran experiments in our standard setting, changing only the number of rounds and letting the 32 players engage in more than 5 games each. \begin{figure}[htb] \centering \includegraphics[width=0.8\linewidth]{n_gambits_rounds_det_n32_elo1600_sam100.pdf} \caption{Number of gambit possibilities as a function of the number of rounds. Results for the deterministic model are shown in orange, while results for the probabilistic model are shown in blue.} \label{fig:gambits_vs_rounds} \end{figure} In the deterministic model, there are 12 gambit possibilities on average in our standard setting, as Figure~\ref{fig:gambits_vs_rounds} shows. This number increases to 54 if there are 11 rounds. We can thus observe that the length of the tournament is a decisive factor in the number of possible gambits. This is to be expected, as the gambit player has more matches to capitalize on her gambit in longer tournaments. \begin{figure}[htb] \centering \includegraphics[width=0.75\linewidth]{n_gambits_strength_range_det_n32_r5_sam100.pdf} \caption{Number of gambit possibilities when player strengths are drawn from strength ranges of different size. Results for the deterministic model are shown in orange, results for the probabilistic model are shown in blue.} \label{fig:gambits_vs_strength_range_size} \end{figure} A less significant increase in the number of gambit possibilities is shown in Figure~\ref{fig:gambits_vs_strength_range_size}. As the strength range grows from 800 to 1600, the number of gambit possibilities also doubles. The center Elo was set to 1800 in all three settings, e.g., strength range size 800 means the interval [1400, 2200]. The reason for this correlation is not as straightforward as in the previous case. The increase in the number of gambit possibilities is driven by the fact that with a larger strength range size and given that we sample the player strengths uniformly in the strength range interval, the number of matches that result in a draw decreases. Thus, stronger players win more often which gives them two gambit options (choosing to draw or to lose) instead of only one (choosing to lose). Also, the lower number of draws also means that after a gambit, a strong player can earn more points and thus gambits have a higher chance to succeed. In the probabilistic model, almost no gambit is possible, however, their sporadic occurrence becomes somewhat more frequent if the tournament consists of many rounds (see Figure~\ref{fig:gambits_vs_rounds}) or the strength range is large (see Figure~ \ref{fig:gambits_vs_strength_range_size}). \begin{quote} \textbf{Main Take-Away:} Under realistic conditions, i.e., in the probabilistic setting with a rather small strength range size, there are very few occasions when a gambit can improve a player's final rank. Moreover, the gambit player must be able to estimate match results very accurately in order to identify a gambit possibility. \end{quote} \subsection{Mean Rank Difference} \label{sec:mean_rank_difference} We measured the mean rank difference achieved by the gambit player. For example, a mean rank difference of -2 means that in an average gambit possibility, the gambit player will improve her final rank by two places. In the deterministic model, as the number of rounds increases, we observe that gambits have a larger effect, but even for tournaments consisting of 11 rounds, an improvement of three places is to be expected, as shown by Figure~\ref{fig:mean_rank_diff_det_vs_number_of_rounds}. \begin{figure}[ht] \centering \includegraphics[width=0.6\linewidth]{mean_rank_diff_rounds_det_n32_elo1600_sam100.pdf} \caption{Mean rank difference for different numbers of rounds in the deterministic model.} \label{fig:mean_rank_diff_det_vs_number_of_rounds} \end{figure} Moreover, the larger the strength range size is, the more is to be gained through a gambit, as Figure~\ref{fig:mean_rank_diff_det_vs_strength_range_size} demonstrates. \begin{figure}[ht] \centering \includegraphics[width=0.6\linewidth]{mean_rank_diff_strength_range_det_n32_r5_sam100.pdf} \caption{Mean rank difference for different strength range sizes in the deterministic model.} \label{fig:mean_rank_diff_det_vs_strength_range_size} \end{figure} The difference is rather small: In Swiss-system chess tournaments with a strength range of 800 Elo points, a bit less than half of the gambits lead to an improvement of at most two places in the final ranking, while it is a little less than a quarter of the gambits for the strength range of 1600 Elo points. To summarize: in the deterministic model, performing a gambit is more beneficial if the tournament is longer or the players' strength level is very diverse. \paragraph{Mean Rank Difference in the Probabilistic Model} Mean rank differences in the probabilistic model are much closer to zero, as Figures~\ref{fig:mean_rank_diff_prob_vs_number_of_rounds} and \ref{fig:mean_rank_diff_prob_vs_strength_range_size} show. Note that due to the probabilistic nature of the model, gambits that work in expectation still frequently lead to a positive rank difference. Ending up at rank difference close to 0 on average means that our chosen heuristic performs well, considering that the gambit player gives up a safe (half) point when performing a gambit. Figure~\ref{fig:mean_rank_diff_prob_vs_number_of_rounds} shows that the number of rounds does not seem to influence the mean rank difference much in the probabilistic model: the variance grows only very slightly with the number of rounds. \begin{figure}[ht] \centering \includegraphics[width=0.6\linewidth]{mean_rank_diff_rounds_prob_n32_elo1600_sam100.pdf} \caption{Mean rank difference for different numbers of rounds in the probabilistic model.} \label{fig:mean_rank_diff_prob_vs_number_of_rounds} \end{figure} \begin{figure}[ht] \centering \includegraphics[width=0.6\linewidth]{mean_rank_diff_strength_range_prob_n32_r7_sam100.pdf} \caption{Mean rank difference for different strength range sizes and 7 rounds in the probabilistic model.} \label{fig:mean_rank_diff_prob_vs_strength_range_size} \end{figure} The strength range size slightly influences the mean rank difference, as Figure~\ref{fig:mean_rank_diff_prob_vs_strength_range_size} demonstrates. As the strength range size grows, the variance grows as well, suggesting that performing a gambit is more risky for a diverse player set. Notice that giving up points without using any heuristic would lead to a strong positive rank difference: players would lose several ranks. The fact that the mean rank difference is approximately zero in the probabilistic model shows the main message of this subsection, namely that the $p$-value heuristic can identify gambit possibilities with reasonable accuracy. These gambits are slightly riskier if the tournament has many rounds (Figure~\ref{fig:mean_rank_diff_prob_vs_number_of_rounds}) or if the strength range is large (Figure~\ref{fig:mean_rank_diff_prob_vs_strength_range_size}). \begin{quote} \textbf{Main Take-Away:} If all match results can be accurately predicted, i.e., in the deterministic model, gambits are slightly more beneficial the more rounds are played and the larger the strength range of the players is. This situation drastically changes in the probabilistic model. There, gambits detected by our $p$-value heuristic yield a mean rank difference of zero in expectation, i.e., they are not profitable. Note that other natural gambit detection heuristics fare much worse in this regard. \end{quote} \subsection{Total Rank Difference}\label{sec:total_rank_difference} The total rank difference incorporates the number of gambit possibilities and the mean rank difference, thus it provides a general indicator for how much gambits are incentivized. A total rank difference close to zero indicates that gambit possibilities are mostly prevented. Figure~\ref{fig:total_rank_diff_det_vs_number_of_rounds} shows that in the deterministic model and standard setting the total rank difference sharply increases with the number of rounds. For example, for 11 rounds we obtain on average a total rank difference of 170, compared to 28 for 5 rounds. We also see in Figure~\ref{fig:total_rank_diff_det_vs_strength_range_size} that the total rank difference increases with the strength range size. Both observations are in line with our results from Sections~\ref{sec:number_of_gambit_possibilities} and~\ref{sec:mean_rank_difference}. \begin{figure}[ht] \centering \includegraphics[width=0.6\linewidth]{total_rank_diff_rounds_det_n32_elo1600_sam100.pdf} \caption{Total rank difference for different number of rounds in the deterministic model.} \label{fig:total_rank_diff_det_vs_number_of_rounds} \end{figure} \begin{figure}[ht] \centering \includegraphics[width=0.6\linewidth]{total_rank_diff_strength_range_det_n32_r5_sam100.pdf} \caption{Total rank difference for different strength range sizes in the deterministic model.} \label{fig:total_rank_diff_det_vs_strength_range_size} \end{figure} \paragraph{Total Rank Difference in the Probabilistic Model} As we have shown in Section~\ref{sec:number_of_gambit_possibilities}, gambits occur very rarely in the probabilistic model. Therefore, drawing consequences from the few data we could collect on them is somewhat hard. In the probabilistic model, players can only improve their rank marginally by performing a gambit. Figure~\ref{fig:total_rank_diff_prob_vs_number_of_rounds} displays an unexpected correlation between the total rank difference and the number of rounds in the probabilistic model. As the number of rounds grow, there is very slightly less to win. The reason for this unexpected behavior might be that with an increasing number of rounds, it becomes less likely to correctly predict the remaining tournament rounds. This then increases the risk of losing ranks by performing a gambit, yielding a positive total rank difference. \begin{figure}[ht] \centering \includegraphics[width=0.6\linewidth]{total_rank_diff_rounds_prob_n32_elo1600_sam100.pdf} \caption{Total rank difference for different number of rounds in the probabilistic model.} \label{fig:total_rank_diff_prob_vs_number_of_rounds} \end{figure} In order to compare different strength range sizes for the probabilistic model, we had to deviate from our standard setting and set the number of rounds to 11, because for shorter tournaments, the strength range size did not make a noticeable difference. Figure~\ref{fig:total_rank_diff_prob_vs_strength_range_size} shows that for 11 rounds, a more diverse player set leads to a somewhat larger variance in the total rank difference. The same is shown in Figure~\ref{fig:total_rank_diff_det_vs_strength_range_size}, which is the corresponding plot in the deterministic model. \begin{figure}[ht] \centering \includegraphics[width=0.6\linewidth]{total_rank_diff_strength_range_prob_n32_r11_sam100.pdf} \caption{Total rank difference for 11 rounds and different strength range sizes in the probabilistic model.} \label{fig:total_rank_diff_prob_vs_strength_range_size} \end{figure} \begin{quote} \textbf{Main Take-Away:} The results regarding the total rank difference are in line with the results regarding the mean rank difference. In the deterministic model, gambits become slightly more beneficial with increasing number of rounds or with larger player strength range. In the probabilistic model, gambits are not beneficial in expectation. \end{quote} \subsection{Ranking Quality}\label{sec:impact_of_gambits_on_ranking_quality} We investigate the ranking quality difference between tournaments with and without gambits. In each of the 1000 simulated tournaments, the value without gambit is the Kendall $\tau$ of the simulation without gambits, while the value with gambit is the mean of the Kendall $\tau$ values of all gambit possibility simulations. \begin{figure}[h!] \centering \includegraphics[width=0.8\linewidth]{ranking_quality_gambits_strength_range_det_n32_r5_sam100.pdf} \caption{Ranking quality with and without gambits for different strength ranges in the deterministic model.} \label{fig:ranking_quality_with_vs_without_gambit_det_strength} \end{figure} Figures~\ref{fig:ranking_quality_with_vs_without_gambit_det_strength} and \ref{fig:ranking_quality_with_vs_without_gambit_det_rounds} show that the impact of gambits on the ranking quality is low in general in the deterministic model. However, gambits spoil the ranking quality at some level, especially if the strength range is large or the tournament is longer. Also, note that in our plots the results for the variant with gambits seems stronger concentrated around the mean Kendall $\tau$ values than the results without gambit. The reason for this is that without gambit the Kendall $\tau$ value of a single simulation is shown while with gambit, the mean Kendall $\tau$ values of all gambit possibility simulations is plotted. This naturally concentrates the values around the mean. \begin{figure}[htb] \centering \includegraphics[width=0.8\linewidth]{ranking_quality_gambits_rounds_det_n32_elo1600_sam100.pdf} \caption{Obtained ranking quality for different numbers of rounds in the deterministic model. } \label{fig:ranking_quality_with_vs_without_gambit_det_rounds} \end{figure} \paragraph{Ranking Quality in the Probabilistic Model} Figures~\ref{fig:ranking_quality_with_vs_without_gambit_prob_strength} and~\ref{fig:ranking_quality_with_vs_without_gambit_prob_rounds} show that the impact of gambits on ranking quality is low in general in the probabilistic model as well. \begin{figure}[htb] \centering \includegraphics[width=0.8\linewidth]{ranking_quality_gambits_strength_range_prob_n32_r11_sam100.pdf} \caption{Obtained ranking quality for different strength range sizes in the probabilistic model.} \label{fig:ranking_quality_with_vs_without_gambit_prob_strength} \end{figure} \begin{figure}[htb] \centering \includegraphics[width=0.8\linewidth]{ranking_quality_gambits_rounds_prob_n32_elo1600_sam100.pdf} \caption{Obtained ranking quality for different number of rounds in the probabilistic model after 11 rounds. } \label{fig:ranking_quality_with_vs_without_gambit_prob_rounds} \end{figure} Gambits spoil the ranking quality in most, but not all cases. Just as in Section~\ref{sec:total_rank_difference}, we had to deviate from our standard setting and set the number of rounds to 11 for the Figure~\ref{fig:ranking_quality_with_vs_without_gambit_prob_strength}, because no noticeable difference was to be seen for shorter tournaments. \begin{quote} \textbf{Main Take-Away:} The impact of gambits on the ranking quality is low in both the deterministic and the probabilistic model. \end{quote} \section{Conclusion} We have shown that even though the Swiss Gambit is possible in theory, identifying the match in which to perform it is extremely challenging and even a beneficial gambit comes with a relatively low rank improvement. As we are the first ones to study the Swiss Gambit from a scientific point of view, our work raises various open questions. \begin{itemize} \item Our simulations can be run on other types of tournaments, such as longer events for professional players, which would require more rounds and a smaller strength range. Clearly, gambling the Swiss-system in other sports or games could also be analyzed, if sufficient data is available. \item We identified an effective gambit heuristic in the probabilistic model, but there might be even smarter heuristics---possibly depending on the tournament type. \item Based on the players' Elo scores, one might attempt to identify Swiss Gambits---or very fortunate unexpectedly poor match results---performed at real tournaments; these are unexpected losses or draws from strong players who then reached a higher rank than they would have otherwise. \item With our tournament simulation we estimate the importance of a chosen match result. What is the expected final rank difference of the upcoming match is lost versus if it is won? \item Our investigation can be tailored to answer questions about cheating during a match~\citep{Kee22} or bribing the opponent~\citep{Win22}, both being actively discussed in the chess community, especially since computers significantly changed the way chess is played~\citep{Won22}. How much is there to win in expectation if a chosen match result is turned positive? \item Finally, a strategyproof modification of the Swiss system that prevents Swiss Gambits would be highly valuable. Although our results indicate that gambits might not be a problem in practice, the online discussions and newspaper articles mentioned in the introduction demonstrate that the sheer (theoretical) possibility of a Swiss Gambit already irks the community. \end{itemize} \section*{Acknowledgments} \'{A}gnes Cseh's work was supported by OTKA grant K128611, the J\'anos Bolyai Research Fellowship, and \'UNKP-22-5-ELTE-1166 grant. \bibliographystyle{ACM-Reference-Format}
{ "attr-fineweb-edu": 2.503906, "attr-cc_en_topic": 0, "domain": "arxiv" }
BkiUdOY4uzqh_RadJLOb
\section{Introduction} American Football is the most popular sport in the United States. National Football League (NFL) is a professional American football league that consists of 32 teams, divided in two conferences and many divisions. Each team plays a regular part of the season and then, if successful, proceeds to the playoffs. On the field, there are 11 players in every team and the goal of the game is to win as many yards as possible and ultimately score more points than the other team. An American football field is 100 yards (91.44 m) long and 160 feet (48.8 m) wide. Points are awarded on touchdown (6 points), field goal (3 points), safety (2 points) and try after touchdown (1-2 points)~\footnote{\url{https://operations.nfl.com/the-rules/2020-nfl-rulebook/} (last accessed on 26. May 2021)}. One of the biggest game-changers in NFL is the defensive pass interference (DPI) penalty which applies from the time the ball is thrown until the ball is touched. The penalty for DPI is an automatic first down at the spot of the foul. The official rule for pass interference is any act performed by a player more than one yard beyond the line of scrimmage which significantly hinders an eligible player's opportunity to catch the ball. Pass interference can only occur when a forward pass is thrown from behind the line of scrimmage, regardless of whether the pass is legal or illegal, or whether it crosses the line~\footnote{\url{https://operations.nfl.com/the-rules/nfl-video-rulebook/defensive-pass-interference/} (last accessed on 26. May 2021)}. NFL Data Bowl 2019 provided data of predicted catch probabilities. This also opened up an opportunity to analyse $\approx$6,000 catch plays from a sample of 91 games. Peter Wu and Brendon Gu found the bimodal shape of the defensive pass interference (DPI) distributions, which could be a result of differing standards for what a DPI entails \cite{DIRECT}. In their paper "DIRECT: A Two-Level System for Defensive Pass Interference Rooted in Repeatability, Enforceability, Clarity, and Transparency", they have presented a new approach for lowering the influence of DPI on the game~\cite{DIRECT}. In the proposed paper, only the obvious fouls, where the defender had no intention to play the ball, could be penalised in a spot foul. In recent years, data analytics in NFL has become quite important, and more and more data is made publicly available for enthusiasts and scientists to explore. This year's Data Bowl (2021), among other questions, asked if there is any way player tracking data can be used to predict whether or not specific penalties - for example, defensive pass interference - will be called? This paper is, to the best of our knowledge, the first attempt in predicting DPI using tracking (GPS) data. \section{Materials and Methods} \subsection{Data Acquisition} Data was acquired from "NFL Big Data Bowl 2021" which was hosted on \textit{Kaggle}~\footnote{\url{https://www.kaggle.com/c/nfl-big-data-bowl-2021/}}. The data was provided for competition, non-commercial and academic usage. The 2021 Big Data Bowl data contains player tracking, play, game, and player level information for all possible passing plays during the 2018 regular season. This means that 17 weeks of a typical NFL season contain only 259 actions that resulted in a DPI, which is 1.46\% of all actions (17,703). Such a data set is highly imbalanced, which can be compared with the problem of card fraud detection classification~\cite{card_frauds}. \subsection{Data Processing}\label{sec:data_processing} The raw data set contains the information concerning the majority of players involved in that particular action play. Every play is divided into timestamp segments. Every timestamp contains tracking data of all defenders, attackers, and the ball. This can give a maximum of 23 records in a single timestamp. Keeping in mind the amount of data, that is simply too much information for a model to generalise on. A model would have to learn the change of patterns for each variable (distance, speed, acceleration, etc.) for every player and additionally, a relationship of these variables between the players. DPI occurs between players competing for the ball. Therefore, the focus in processing should be on extracting the most relevant players and ball information from every given play. An example of DPI play is shown on image \ref{fig:dpi_play}. Blue and red circles represent the home and the away team respectively. The ball is depicted with a green circle, and the blue team has possession. The width of the pitch is represented on the y-axis. Height is subtracted from the original 120 yards to 50 yards, to provide better visualisation of this particular play. DPI can occur from the moment the ball is thrown forward by a quarterback (QB) and that moment is represented in image~\ref{subfig:qb_pass}. In the image~\ref{subfig:ball_air}, the ball is in the air and on its way to the target wide receiver (WR). Image~\ref{subfig:pass_arrived} shows the last moments of the play, where cornerback (CB) and WR are fighting for the ball and CB is committing a DPI. \begin{figure}[!tb] \begin{subfigure}[b]{1\columnwidth} \centering \includegraphics[width=1\columnwidth]{images/dpi_play_example_qb_pass.png} \subcaption{QB throws the ball to the WR.} \label{subfig:qb_pass} \end{subfigure} \begin{subfigure}[b]{1\columnwidth} \centering \includegraphics[width=1\columnwidth]{images/dpi_play_example_middle_pass.png} \subcaption{WR and CB fighting for the ball which is in the air.} \label{subfig:ball_air} \end{subfigure} \begin{subfigure}[b]{1\columnwidth} \centering \includegraphics[width=1\columnwidth]{images/dpi_play_example_end_pass.png} \subcaption{Ball arrives and DPI is committed.} \label{subfig:pass_arrived} \end{subfigure} \caption{Example of play which resulted in DPI} \label{fig:dpi_play} \end{figure} Processing starts with merging plays, games and 17 weeks of tracking data. As DPI can only occur when a pass is played forward, every play that does not contain the "pass\_forward" event is discarded. Tracking data contains some duplicate timestamp values, therefore these had to be cleaned. All records were grouped according to game id, timestamp and frame id. Iterating through grouped data, records that had the same timestamp but different frame id were split by adding 10 ms on the previous timestamp. The minimal difference between timestamps is 100 ms, so this way, we have ensured that every timestamp within the play corresponds to the unique moment in the game. Next, all data which occurred before the "pass\_forward" event is discarded. Every play starts with the "pass\_forward" (image~\ref{subfig:qb_pass}) event and afterwards is a "None" event (image~\ref{subfig:ball_air}) which indicates that nothing important has happened. Next event, following these two, can be any: "tackle", "pass\_outcome\_caught", "pass\_unsuccessful", etc. That event, which follows, is interpreted as end-of-play, and that is the moment in which we choose the most important players responsible for the DPI. This situation is shown on image~\ref{subfig:pass_arrived}. The assumption is that, in a majority of cases, opposing players which are closest to the ball are the ones to watch for a DPI event. That is why Euclidean distance is calculated between the position of the ball and all the players. From each team, players having the smallest distance to the ball, in that particular moment, are picked (along with the ball), and a new data set is created. Additionally, data were normalised according to the play direction. That way, all the attacks lead in the same direction -- right. Also, it was important to distinguish the attacker from the defender, so that information was created from the available information concerning the team in possession. Attacker, defender and ball information were all merged in a single row, and that included acceleration, speed, orientation and direction information. Data that could not be merged easily was a current x and y position on the field. This was solved by calculating the Euclidean distance between all three parties: defender and attacker, attacker and the ball, defender and the ball. In addition to this data, most meaningful events were added as binary static variables. These events included: "pass\_arrived", "pass\_outcome\_caught", "tackle", "first\_contact", "pass\_outcome\_incomplete" and "out\_of\_bounds". All processing steps are shown in Fig.~\ref{fig:preprocessing}. \begin{figure}[!tb] \centering \includegraphics{images/Preprocessing.png} \caption{Preprocessing steps} \label{fig:preprocessing} \end{figure} Players whose distance from each other (attacker and defender) is high, are unlikely to commit a foul as they are not close enough. In order to determine the threshold value for acceptance of a given play, DPI records were further examined. By examining the whole dataset, it proved that 90\% of DPI classified plays were below the value of 5.56 for the maximal distance between the defender and the attacker in a particular play. This threshold was then used on the whole dataset which resulted in the final distribution where the number of non-DPI plays was 9,529 (97.68\%) and the number of DPI plays was 231 (2.32\%). This filter eliminated possible outliers and enabled the focus to be shifted on the relevant samples. Everything was split into training (56\%), validation (14\%) and test (30\%) set with distributions shown in Table~\ref{table:data_distribution}. \begin{table}[!tb] \caption{Data set class distribution.} \begin{center} \begin{tabular}{|c|c|c|c|c|} \hline \textbf{} & \textbf{Training} & \textbf{Validation}& \textbf{Test} & \textbf{\%}\\ \hline \textbf{non-DPI} & 5336 & 1334 & 2859 & 97.6 \\ \hline \textbf{DPI} & 130 & 32 & 69 & 2.4 \\ \hline \end{tabular} \end{center} \label{table:data_distribution} \end{table} \subsection{Prediction Models} A sequence of events is important when considering DPI. That is why we opted for using time-series models for binary classification. The following include: Long-Short Term Memory (LSTM), Gated Recurrent Unit (GRU), Attention Neural Network (ANN) and, finally, multivariate LSTM Fully Convolutional Network (MLSTM-FCN)\cite{mlstm}. LSTM is a recurrent neural network (RNN), commonly used for tasks such as handwriting recognition, speech recognition and anomaly detection. An LSTM unit is composed of a cell, an input gate, an output gate and a forget gate. They are well suited for making predictions based on time series data and are able to cope with the vanishing gradient problem which can occur when using regular RNNs~\cite{lstm_time_series}. The GRU is somewhat similar to an LSTM unit -- it has a forget gate, however, it contains fewer parameters, and has no output gate. It is used in polyphonic music modelling, speech signal modelling and natural language processing (NLP). The GRU was picked because it has been shown to exhibit better performance on smaller datasets~\cite{gru_performance}. The attention mechanism in an ANN model focuses on the important parts of the data and fades out the rest. ANNs are mainly used in NLP and computer vision. It is an upgrade on the LSTM approach and it was extensively used in transformer networks~\cite{attention_transformer}. MLSTM-FCN is an approach using a combination of convolutional layers and LSTM units \cite{mlstm}. It showed very good results on various, even unbalanced datasets. The authors have also provided their code implementation of MLSTM-FCN~\footnote{\url{https://github.com/titu1994/LSTM-FCN} (last accessed on 26. May 2021)}. LSTM and GRU models were implemented using Keras deep learning API and Tensorflow 2.0 library. ANN model was constructed using Keras Attention Mechanism library~\footnote{\url{https://github.com/philipperemy/keras-attention-mechanism} (last accessed on 26. May 2021)} built on top of an LSTM model. All models consisted of only one hidden layer, having either 8, 64 or 128 neurons. Experimental testing proved that increasing the number of layers did not improve model performance. Furthermore, it significantly increased model training time, thus in this paper we focus only on models having only one layer and a variable number of hidden cells. \subsection{Handling imbalanced data} Dataset for predicting DPI is highly imbalanced which needs to be taken into consideration when training a model and interpreting the results. This issue is usually solved by using techniques such as oversampling the minority class, undersampling the majority class, creating new artificial data using algorithms such as SMOTE and by manipulating class weights~\cite{handling_imbalanced}. The process of undersampling was tested but it did not result in a satisfying performance outside the training set. Generating new artificial data is difficult, considering multiple variables changing over time, so this approach was not considered. Oversampling is very similar to changing class weights so we have focused on the latter. Several weight factors were tested and, in the end, class weight formula~\ref{eq:weight_factor} proved to work the best with class 0 (non-DPI) weights being 0.51 and class 1 (DPI) being 20.52. It is calculated using the following expression: \begin{equation} \label{eq:weight_factor} w_{class}=\frac{n\_inst}{n\_classes * n\_inst_{class}}, \end{equation} where $w_{class}$ represents the calculated weight for a given class, $n\_inst$ denotes the number of all instances in a dataset, $n\_classes$ denotes the number of distinct classes, and $n\_inst_{class}$ denotes the number of instances of a given class. \section{Results}\label{sec:results} Having in mind the imbalanced nature of the problem, classification accuracy was not considered as an appropriate model evaluation metric. Instead, we used precision, recall, area under the curve (AUC) and F1 score. From the mentioned metrics, we marked recall to be the most important one because we want as few missed DPI classifications as possible. All 12 model combinations have been trained 5 times each, and from these, only those exhibiting the best performance on the validation set were picked. Models tried to reproduce the best precision at recall threshold being 0.8. A recall is the most important metric as it would be better to predict a false DPI and then check it manually (using video replay of the action) than to miss it. Model performance on the test set can be seen in Table~\ref{table:model_results}. \begin{table}[!tb] \caption{Model performance on the test set.} \begin{center} \begin{tabular}{|c|c|c|c|c|c|} \hline \textbf{Model} & \textbf{Cells} & \textbf{Recall} & \textbf{Precision}& \textbf{F1} & \textbf{AUC}\\ \hline LSTM & 8 & 0.826 & 0.08 & 0.147 & 0.796 \\ \hline LSTM & 64 & 0.855 & \textbf{0.091} & \textbf{0.164} & \textbf{0.821} \\ \hline \rowcolor{lightgray} LSTM & 128 & \textbf{0.884} & 0.0748 & 0.138 & 0.807 \\ \hline ANN & 8 & 0.841 & 0.072 & 0.133 & 0.787 \\ \hline ANN & 64 & \textbf{0.884} & 0.073 & 0.135 & 0.803 \\ \hline ANN & 128 & 0.855 & 0.076 & 0.139 & 0.798 \\ \hline GRU & 8 & 0.855 & 0.023 & 0.046 & 0.487 \\ \hline GRU & 64 & 0.87 & 0.074 & 0.137 & 0.801 \\ \hline GRU & 128 & \textbf{0.884} & 0.072 & 0.133 & 0.801 \\ \hline LSTM-FCN & 8 & 0.551 & 0.079 & 0.137 & 0.695 \\ \hline LSTM-FCN & 64 & 0.609 & 0.068 & 0.122 & 0.701 \\ \hline LSTM-FCN & 128 & 0.855 & 0.05 & 0.095 & 0.727 \\ \hline \end{tabular} \end{center} \label{table:model_results} \end{table} Best performing scores are emphasised. Three models achieved the best recall of 0.884 on the test set. In order to provide additional information about generalisation quality of models, both validation (V) and test (T) set performance is presented in Table~\ref{table:model_results_compare}. The best model according to all other metrics (precision, F1 and AUC) was the LSTM model having 64 hidden-layer neurons. Out of the models exhibiting the highest recall, the best performing model considering precision is the LSTM model having 128 hidden-layer neurons. Classification confusion matrix for this model is shown in Fig.~\ref{fig:cm_lstm_128}. Model performance is discussed in more detail in section~\ref{sec:discussion}. In addition to the presented results, we provide full code for preprocessing, training and evaluating models that will simplify future research in this area and enable easier replication of results~\footnote{The code is available at \url{https://github.com/askoki/nfl_dpi_prediction}}. \begin{table}[!tb] \caption{Model performance on test (T) versus validation (V) set} \begin{center} \begin{tabular}{|c|c|c|c|c|} \hline \textbf{Model} & \textbf{Recall(T)} & \textbf{Recall(V)} & \textbf{Precision(T)} & \textbf{Precision(V)}\\ \hline LSTM (64) & \textbf{0.855} & 0.844 & \textbf{0.091} & 0.085 \\ \hline LSTM (128) & 0.884 & \textbf{0.938} & 0.075 & \textbf{0.078} \\ \hline ANN (64) & 0.884 & \textbf{0.969} & 0.073 & \textbf{0.079} \\ \hline GRU (128) & 0.884 & \textbf{0.938} & 0.072 & \textbf{0.077} \\ \hline \end{tabular} \end{center} \label{table:model_results_compare} \end{table} \begin{figure}[!tb] \centering \includegraphics[width=0.45\textwidth]{images/LSTM_128_cm_test.png} \caption{Classification confusion matrix for the LSTM (128) model on the test set.} \label{fig:cm_lstm_128} \end{figure} \section{Discussion}\label{sec:discussion} The models presented in this work fail to solve the problem of detecting seldom occurring events such as DPI. Changing weight factors, increasing model complexity or changing the sequence model algorithm all fail to deliver better results, and an F1 score significantly above 0.15. By looking at the source of the data -- GPS sensors -- this comes as no surprise. When players are close to each other, there is no information from which one can determine if a DPI was made. Our initial thought was that players' trajectories might obfuscate the information concerning a possible DPI, but this was just not enough to detect this event with high prediction confidence. Initially, all players (on the pitch) were involved as part of the information from which the models were trained. This, however, did not give any meaningful results -- everything was classified as non-DPI (the dominant class). We then switched our focus to include only those players having the highest probability to commit a DPI, which was described in section~\ref{sec:data_processing}. Changes in data processing resulted in the improvement of model performance but all of them failed to deliver a recall greater than 0.88 as presented in section \ref{sec:results}. An additional challenge when dealing with this problem was the high data imbalance. Furthermore, there were only 259 possible plays that resulted in a DPI. This amount of data might not be enough for the ML models to build on~\cite{imbalanced_dataset}. Gathering DPI plays from more seasons could be beneficial in improving model efficiency, at least in recall metrics. Improving recall, while keeping a satisfying precision score, would be a useful automatic preprocessing step for a follow-up video analysis filter. Images capture exact events on the pitch, therefore models could learn if a player made an illegal act or not~\cite{video_pattern_recognition}. On the other hand, tracking data does not provide any additional information when players are close to each other and competing for the ball. Analysing sequences of images would provide more features for a model to work with, which could consequently lead to better classification performance. Again, the number of DPI data instances would have to be greater, but this approach does not suffer from a lack of information. European football (soccer) is using video recordings for a better understanding of the game~\cite{soccer_video}. In future work, combining model results from GPS and video systems might lead to a solution to this highly-complex problem~\cite{combine_gps_video}. \section{Conclusion} The work presented in this paper is the first approach to predict DPI events in American football using GPS tracking data. DPI is a very rare event occurring in only 1.46\% of all action plays. However, the impact of this penalty call can be game-changing. By automating this penalty call, the possibility of one referee call being a game changer would be drastically decreased. The results show that the prediction of this event, using ML sequence models, has limited applicability. The models failed to achieve a recall greater than 0.884 with precision being very low, usually around 0.08. The data set used for training contained only 259 DPI events. By increasing the sample size, the recall score could be potentially improved and this approach could be used as a first step filter for later video sequence analysis of remaining DPI candidates. On the given dataset, GPS tracking data alone does not contain enough information in order to classify this complex event correctly. Future work should take into consideration the amount of DPI plays which is available and try different approaches, such as video analysis, in order to try to improve model performance. \section*{Acknowledgment} This work was supported by the Horizon 2020 project EuroCC 951732 \textit{National Competence Centres in the Framework of EuroHPC}, and by the University of Rijeka, Croatia [grant number uniri-tehnic-18-15 and uniri-tehnic-18-17]. \bibliographystyle{IEEEtran}
{ "attr-fineweb-edu": 2.984375, "attr-cc_en_topic": 0, "domain": "arxiv" }
BkiUegjxK1ThhBMLhgZE
\section{Introduction} \label{sec:introduction} The countermovement jump (CMJ) is commonly used to measure \replaced{lower-body explosive power}{the reactive strength of the lower limbs} and is characterized by an initial downward movement of the center of mass (COM), known as \emph{countermovement}, before toe-off \cite{cmj_review}. \deleted{CMJ is a key task for assessing sports performance and injury risk mitigation. }\added[comment=We have added a transition here between the definition of CMJ and capture techniques.]{Performance assessment with CMJ often involves motion capture and measurement of metrics such as peak velocity and vertical jump height.} Traditionally, motion capture is performed using wearable sensors, expensive optical motion capture (OMC) equipment and force plates. Although force plates and OMC\deleted[comment=We have revised this paragraph for accuracy.]{and sensors} are highly accurate, they are expensive, not readily portable, and their operation requires specialized knowledge. In addition, OMC requires physical body markers, which can be affected by skin and clothing artifacts. Moreover, \added{wearable sensors,} physical markers, and the awareness of being under observation may alter the real performance of subjects \cite{wade_needham_mcguigan_bilzon_2022, geh_beauchamp_crocker_carpenter_2011}. \begin{figure*}[ht] \centering \includegraphics[width=1\textwidth]{images/experiment_pipeline.pdf} \caption{Experiment setup showing simultaneous motion capture, preprocessing, and comparison with ground truths.} \label{fig:experiment_setup} \end{figure*} Recent advances in computer vision research have enabled \textit{markerless motion capture} (MMC) from videos. MMC often relies on human pose estimation (HPE) algorithms such as AlphaPose \cite{fang2017rmpe}, OpenPose \cite{openpose}, and DeepLabCut \cite{mathis2018deeplabcut}. These MMC techniques have shown potential to replace OMC, especially since smartphones are ubiquitous. However, there is still a lot to be done in evaluating the accuracy and usability of MMC. Existing MMC approaches can be categorized based on \textit{capture plane} (2D or 3D) and \textit{number of cameras} (multi- or single-camera). 2D monocular (single-camera) techniques have been used for quantifying limb kinematics during underwater running \cite{under_water_running} and sagittal plane kinematics during vertical jumps \cite{outside_the_lab}. However, these works rely on deep learning approaches, where the generalization ability depends on the size and diversity of the data and the model architecture. For example, trained athletes, casual trainers, and rehabilitation patients will exhibit different performance ranges. Since collecting large quantities of representative data is difficult, we take an alternative approach here, a quantitative approach,\added{ and we focus on the ease of deployment in practice and ease of use}. The \emph{My Jump2} app has been deployed for measuring jump height using a single smartphone. However, it requires manual selection of jump start and end frames \cite{my_jump}. Some other studies perform 3D MMC using multiple cameras \cite{nakano2020evaluation, corazza_mmc}. However, the 3D multi-camera approach requires careful calibration and reconstruction of 3D poses from multiple 2D camera angles, which is not feasible for wide deployment in practice. Therefore, this study evaluates how accurately a single-smartphone-based MMC can measure bilateral and unilateral countermovement jump height. \textbf{Our main contributions are:} \begin{enumerate} \item We use a simple setup with a single smartphone, with no strict requirements on view perpendicularity and subject's distance from the camera. This is a more realistic application setting where MMC is used outside the lab, without specialized equipment. \item We show how to exploit gravity as reference for pixel-to-metric conversion as proposed in \cite{gravity_ref}, removing the need for reference objects or manual calibration. \item We analyze how accurately MMC measures jump heights compared with OMC and force plates. \item We propose use cases of MMC depending on domain-specific accuracy requirements. \end{enumerate} \section{Materials and Methods} \subsection {Participants} \replaced{Sixteen}{Five} healthy adults (mean age: \replaced{30.87$\pm$7.24}{26.60} years; mean BMI: \replaced{23.14$\pm$2.55}{22.80} $kg/m^2$) volunteered to participate in this study. The dominant foot of each participant was determined based on the foot with which they kick a ball \cite{kick_a_ball}. Each participant signed the informed consent form approved by the Human Research Ethics Committee of University College Dublin with Research Ethics Reference Number LS-C-22-117-Younesian-Caulfield. \subsection {Tasks} After a five-minute warm-up, each participant performed three repetitions each of CMJ bilateral \replaced{bilateral (BL) and unilateral (UL)}{BL and UL} while simultaneous motion capture was performed using force plates, OMC, and MMC (Fig. \ref{fig:experiment_setup}). \subsection {Apparatus} \subsubsection {Force Plate} \label{sec:force_plate} Force plates \added{sampling at 1000 Hz} are used as the first ground truth. \replaced{To}{For each jump, we} obtain the flight time $T_f$ for each jump, \replaced[comment=We have revised the method of obtaining flight time and described it in more detail.]{we first selected the flight phases using a threshold of $<$5\% of the resting force}{with a force threshold of 30\emph{N}}. \added{We then obtained $T_f$ by differentiating the flight phases with respect to time.} We obtain the jump height in centimeters as \begin{equation} h = 100gT_f^2/8 \end{equation} where $g$ is the acceleration due to gravity \cite{jump_from_flight}. \subsubsection {Optical Motion Capture} \label{sec:omc} Optical motion capture was performed using four synchronized CODA\footnote{Charnwood Dynamics, UK (\url{https://codamotion.com})} 3D cameras sampling at 100 Hz. Four clusters, each consisting of four light-emitting diode (LED) markers, were placed on the left and right lateral sides of the thigh and shank (Fig.~\ref{fig:noisy}). Moreover, six LED markers were placed on the anterior superior iliac crest (anterior and posterior), and greater trochanter (left and right). Three LED markers were attached to the lateral side of the calcaneus and on the first and fifth metatarsals of the dominant foot. For a motor task with duration $T$ seconds and $K$ tracked joints, CODA outputs a sequence of 3D coordinates \(\{(x_i^t,y_i^t,z_i^t ) | i=1,...,K;t=1,...,100T\}\) in \textit{millimeters}; where $z$ is the vertical axis, and 100 is the sampling rate. \subsubsection {Markerless Motion Capture} \label{sec:mmc} Markerless motion capture was performed \added{in the side view} using one Motorola G4 smartphone camera with a resolution of 720p and a frame rate of 30 frames per second (fps). The smartphone was placed on a tripod perpendicular to the dominant foot of the participant. We place\added{d} no strict requirements on camera view perpendicularity and distance to the participant. However, we ensure\added{d} that the \added[comment=We have added this to clarify the camera placement requirement.]{camera remained stationary and} participant\added{s} \replaced{remained}{is always} fully visible in the camera view. To obtain motion data from the recorded videos, we perform\added{ed} 2D HPE using OpenPose \cite{openpose}. The HPE algorithm outputs a sequence \(\{(x_i^t,y_i^t,c_i^t )| i=1,...,K;t=1,...,30T\}\), where 30 is the frame rate, \((x_i^t,y_i^t)\) are the 2D coordinates in \textit{pixels}, and \(c_i^t \in [0,1]\) is the probability for joint $i$ in frame $t$. \subsection{Data Preprocessing} \label{sec:data-preprocessing} During preprocessing, we perform\added{ed} denoising, segmentation, resampling, and rescaling. \begin{figure}[ht] \centering $\begin{array}{cc} \subfloat[]{% \includegraphics[width=1\linewidth]{images/fail_1.pdf} } \\ \subfloat[]{% \includegraphics[width=1\linewidth]{images/fail_2.pdf} } \end{array}$ \caption{Examples of noise in unilateral jumps during pose estimation as seen by observing the limb heatmap colors. (a) In frame 2, the left and right limbs are swapped. In frame 3, the right limb is wrongly detected as two limbs. (b) A failure case showing movements that are not characteristic of countermovement jumps.} \label{fig:noisy} \end{figure} \subsubsection{Denoising} As shown in Fig.~\ref{fig:noisy}(a), occasional false detections in pose estimation appear as spikes on the motion time series. \added{In most cases, these spikes could be removed by smoothing. However, 19 unilateral jumps such as Fig.~\ref{fig:noisy}(b) showed uncharacteristic movements and were removed as failure cases.} To avoid filtering out important motion data, we perform\added{ed} smoothing of the OMC and MMC time series using z-score smoothing \cite{aderinola}, proposed specifically for spike removal in motion sequences, and a second-order Savitzy-Golay \cite{savitzky1964smoothing} (Savgol) filter. The Savgol filter is known to smooth\deleted{en} data with little distortion \cite{Guin2007MovingAA}, and we \replaced{chose}{choose} a window size of 21 to preserve the main \replaced{maxima}{peaks} and minima of the time series for accurate segmentation. \begin{figure}[ht] \centering \includegraphics[width=1\linewidth]{images/sync.pdf} \caption{Segmentation of jumps repetitions. (a) Raw hip vertical motion signal with peaks and selected jump windows. (b) Segmented \added{and synchronized} jumps based on selected windows.} \label{fig:segment_and_sync} \end{figure} \subsubsection{Segmentation and Resampling} Each jump repetition is characterized by a dominant peak corresponding to the maximum vertical height attained by the hip (Fig.~\ref{fig:segment_and_sync}). Using these peaks as reference, we segment\added{ed} each jump with a window $t$ \emph{secs} to either side of each peak, where $t$ is based on exercise duration and capture frequency. This enable\replaced{d}{s} synchronization of OMC and MMC based on start and stop times for each task. After segmentation, we upsample the MMC time series to match the length of the OMC time series using Fast Fourier Transform (FFT) resampling \cite{fourier_resample}, which minimize\replaced{d}{s} distortion. \subsubsection{Rescaling} \label{sec:rescaling} Two approaches \replaced{were}{are} taken to rescale MMC from pixels (\emph{px}) to a metric scale, namely \emph{reverse minmax} (RMM) and \emph{pixel-to-metric} (PTM). \begin{figure}[ht] \centering \includegraphics[width=0.95\linewidth]{images/ptm.pdf} \caption{Converting \emph{pixel} to \emph{millimetre} metric scale using gravity as reference.} \label{fig:free-fall} \end{figure} \textbf{\textit{Reverse MinMax (RMM)}} involve\replaced{d}{s} using OMC as reference to rescale MMC into \added{metric} \emph{mm}. This \replaced{was}{is} done by applying MinMax on both OMC and MMC, and then rescaling MMC into \emph{mm} using the scaling factor obtained from OMC. Let vectors $\mathbf{p_{mm}}$ and $\mathbf{q_{px}}$ represent the OMC (in mm) and MMC (in px) time series respectively. We obtain\added{ed} $\mathbf{q^*} = \textsc{minmax}(\mathbf{q})$, where $\mathbf{q^*_i \in [0,1]}$, as \begin{equation} \mathbf{q^*} = \left\{ \frac{\mathbf{q_{px}}_i - \textsc{min}(\mathbf{q_{px}})}{\textsc{max}(\mathbf{q_{px}}) - \textsc{min}(\mathbf{q_{px}})} \right\} \label{eq:minmax} \end{equation} where $i=1, ..., N$, and $N$ is the length of $\textbf{q}$. We then obtain\added{ed} $\mathbf{q_{px}}$ in \textit{mm} scale as \begin{equation} \begin{split} \mathbf{q}_{mm} = \{\mathbf{q^*_i}[\textsc{max}(\mathbf{p_{mm}})-\textsc{min}(\mathbf{p_{mm}})]\\ +\textsc{min}(\mathbf{p_{mm}}) | i=1,..., N\} \label{eq:rescale} \end{split} \end{equation} Since \replaced{RMM}{it} requires OMC as reference, \replaced{it}{RMM} \replaced{can be used}{is only usable} for evaluation purposes \added{only}. \textbf{\textit{Pixel-to-Metric (PTM) Conversion}} \replaced{was}{is} performed based on the `free-fall' of the centre of mass during a vertical jump. PTM uses $g$, the universal acceleration due to gravity as reference as proposed in \cite{gravity_ref}. From Newton's law of motion, the motion of a rigid body\footnote{\added{Although the human body is not perfectly rigid, the deformations around the centre of mass are negligible in this instance.}} in free fall is described by \begin{equation} d(t) = d_0 + v_0t + \frac{1}{2}gt^2 \label{eq:gravity} \end{equation} where $d_0$ is the initial position in \emph{metres (m)}, $v_0$ is the velocity in \emph{m/s}, and $t$ is the elapsed time in \emph{seconds (secs)}. \replaced[comment=Eqns 5 and 6 have been updated to reflect the change from 0.1 to T secs.]{We set the free-fall duration, $T$, to depend on total hip vertical displacement, such that the hip's non-free-fall motion is not captured.}{To capture short jumps, we consider a 0.1 sec free fall.}. At the peak \deleted{of the jump}, $v_0= d_0 =0$ \added{(Fig. \ref{fig:free-fall})}. \added{After $T$ secs free fall,} $d_{T}=(500T^2g)$\emph{mm}, \deleted{which is equivalent to $|d_0-d_{T}|$ in \emph{pixels} (Fig. \ref{fig:free-fall}),} such that \begin{equation} (500T^2g)mm = |d_0 - d_{T}|px \end{equation} Hence, 1 pixel $\equiv \mathcal{R}$ \emph{mm}, where: \begin{equation} \mathcal{R} = \frac{500T^2g}{|d_0 - d_{T}|} \end{equation} From this, we obtain\added{ed} $\mathbf{q_{mm}}$ in \textit{mm} scale as \begin{equation} \mathbf{q}_{mm} = \{\mathcal{R}(\mathbf{q_{px_i})} | i=1,..., N\} \label{eq:rescale_gravity} \end{equation} \begin{table*}[ht] \caption{Jump Heights From Force Plate, OMC, and MMC} \label{tab:all_jumps} \begin{threeparttable} \begin{tabular}{l|cccc|cccc} \hline & \multicolumn{4}{c|}{\textbf{Mean bilateral jumps} (cm)} & \multicolumn{4}{c}{\textbf{Mean unilateral jumps} (cm)} \\ ID & FP & OMC & RMM & PTM & FP & OMC & RMM & PTM \\ \hline P01 & 23.81 & 26.95 & 26.37 & 27.91 & 14.32 & 17.93 & 15.06 & 18.09 \\ P02 & 11.81 & 12.58 & 10.96 & 12.43 & 8.64 & 10.68 & 8.60 & 9.60 \\ P03 & 16.46 & 18.49 & 17.51 & 18.86 & 11.06 & 13.70 & 11.51 & 10.12 \\ P04 & 18.19 & 21.54 & 20.52 & 21.54 & E & E & E & F \\ P05 & 15.79 & 16.15 & 15.48 & 17.37 & 10.01 & 16.57 & 15.33 & 11.94 \\ P06 & 15.88 & 17.64 & 16.76 & 19.35 & 8.22 & 10.68 & 10.24 & 11.48 \\ P07 & 11.73 & 13.10 & 11.50 & 13.79 & 7.56 & 10.28 & 9.19 & 9.72 \\ P08 & 13.70 & 15.63 & 12.80 & 12.99 & 5.88 & 7.73 & 5.41 & 5.65 \\ P09 & 18.71 & 25.45 & 24.19 & 24.01 & E & E & E & F \\ P10 & 18.50 & 20.24 & 19.49 & 20.62 & 12.10 & 16.39 & 13.81 & 14.10 \\ P11 & 28.99 & 32.09 & 30.03 & 31.39 & E & E & E & F \\ P12 & 15.15 & 20.99 & 17.98 & 18.10 & 6.72 & 12.42 & 9.47 & 8.99 \\ P13 & 26.93 & 28.96 & 26.90 & 26.47 & 15.80 & 17.37 & 14.38 & 15.26 \\ P14 & 33.96 & 37.99 & 36.68 & 35.99 & 13.43 & 16.01 & 14.35 & 15.67 \\ P15 & 45.22 & 55.65 & 54.51 & 50.94 & 21.33 & 25.64 & 24.45 & 23.25 \\ P16 & 26.22 & 26.50 & 24.82 & 21.12 & E & E & E & F \\ \hline Mean & 21.32$\pm$8.80 & 24.37$\pm$10.56 & 22.91$\pm$10.64 & 23.31$\pm$9.52 & 11.54$\pm$4.19 & 14.58$\pm$4.61 & 12.62$\pm$4.66 & 12.82$\pm$4.55 \\ \hline \end{tabular} \begin{tablenotes} {\item \textbf{F}: Failure cases (Fig.~\ref{fig:noisy}). \textbf{E}: The corresponding FP, OMC, and RMM unilateral jumps are excluded from analysis.} \end{tablenotes} \end{threeparttable} \end{table*} \subsection{Quantifying Jump Height} \label{sec:jump_height} We measure jump heights directly from the OMC and the rescaled MMC time series as the maximum vertical displacement of the fifth metatarsal ($toe_{vd}$). We believe this approach is more straightforward than basing measurements on the flight time of the centre of mass ($COM_{ft}$), and we show in Section~\ref{sec:com_vs_toe} that $COM_{ft}$ overestimates jump heights by taking into account the motion of the hip before toe-off. \section{Results} \label{sec:results} The jump height reported for each participant is the mean of all three repetitions performed for each task (Table~\ref{tab:all_jumps}). Each MMC measurement was obtained using the reverse-minmax (RMM) and pixel-to-metric (PTM) approaches as described in Section~\ref{sec:rescaling}. The mean $\mathcal{R}$ across all the participants was \replaced{3.43}{3.5}\emph{mm/px}. In cases of errors like the one shown in Fig.~\ref{fig:noisy}, the mean value of $\mathcal{R}=3.43$ was used. Section~\ref{sec:com_vs_toe} compares jump heights obtained based on the flight time of the COM with jump heights obtained from the vertical displacement of the toe. Section~\ref{sec:comparisons} presents an evaluation of MMC accuracy using OMC and Force Plate as ground truths. \subsection{Measuring Jump Height: COM vs Toe} \label{sec:com_vs_toe} The flight time of the COM ($COM_{ft}$) is commonly used in estimating vertical jump height. In this section, we compare the differences in jump heights obtained by using $COM_{ft}$, and those obtained directly from the toe vertical displacement ($toe_{vd}$), both measured from OMC. Table~\ref{tab:com_vs_toe} shows that, compared to $toe_{vd}$, $COM_{ft}$ overestimates jump heights with absolute errors six times greater for bilateral jumps, and \replaced{four}{three} times greater for unilateral jumps. \begin{table*}[ht] \centering \caption{Jump Heights from COM vs. Jump Heights from Toe using Optical Motion Capture} \label{tab:com_vs_toe} \begin{tabular}{r|ccccc|ccccc} \hline & \multicolumn{3}{c|}{Mean bilateral jumps (cm)} & \multicolumn{2}{c|}{Errors (cm)} & \multicolumn{3}{c|}{Mean unilateral jumps (cm)} & \multicolumn{2}{c}{Errors (cm)} \\ ID & FP & $COM_{ft}$ & $toe_{vd}$ & $COM_{ft}$ & $toe_{vd}$ & FP & $COM_{ft}$ & $toe_{vd}$ & $COM_{ft}$ & $toe_{vd}$ \\ \hline P01 & 23.81 & 40.94 & 26.95 & 17.13 & 3.14 & 14.32 & 32.3 & 17.93 & 17.98 & 3.61 \\ P02 & 11.81 & 40.17 & 12.58 & 28.36 & 0.77 & 8.64 & 25.41 & 10.68 & 16.77 & 2.04 \\ P03 & 16.46 & 29.01 & 18.49 & 12.55 & 2.03 & 11.06 & 27.84 & 13.7 & 16.78 & 2.64 \\ P04 & 18.19 & 31.28 & 21.54 & 13.09 & 3.35 & 6.93 & 18.95 & 9.35 & 12.02 & 2.42 \\ P05 & 15.79 & 35.14 & 16.15 & 19.35 & 0.36 & 10.01 & 28.37 & 16.57 & 18.36 & 6.56 \\ P06 & 15.88 & 31.71 & 17.64 & 15.83 & 1.76 & 8.22 & 22.61 & 10.68 & 14.39 & 2.46 \\ P07 & 11.73 & 30.68 & 13.1 & 18.95 & 1.37 & 7.56 & 20.91 & 10.28 & 13.35 & 2.72 \\ P08 & 13.7 & 30.6 & 15.63 & 16.9 & 1.93 & 5.88 & 17.25 & 7.73 & 11.37 & 1.85 \\ P09 & 18.71 & 38.74 & 25.45 & 20.03 & 6.74 & 7.24 & 23.46 & 14.57 & 16.22 & 7.33 \\ P10 & 18.5 & 32.4 & 20.24 & 13.9 & 1.74 & 12.1 & 27.54 & 16.39 & 15.44 & 4.29 \\ P11 & 28.99 & 73.26 & 32.09 & 44.27 & 3.1 & 13.87 & 41.31 & 17.38 & 27.44 & 3.51 \\ P12 & 15.15 & 27.42 & 20.99 & 12.27 & 5.84 & 6.72 & 16.63 & 12.42 & 9.91 & 5.7 \\ P13 & 26.93 & 57.23 & 28.96 & 30.3 & 2.03 & 15.8 & 42.52 & 17.37 & 26.72 & 1.57 \\ P14 & 33.96 & 78.12 & 37.99 & 44.16 & 4.03 & 13.43 & 36.72 & 16.01 & 23.29 & 2.58 \\ P15 & 45.22 & 87.45 & 55.65 & 42.23 & 10.43 & 21.33 & 41.53 & 25.64 & 20.2 & 4.31 \\ P16 & 26.22 & 51.59 & 26.5 & 25.37 & 0.28 & 9.51 & 33.32 & 11.07 & 23.81 & 1.56 \\ \hline \multirow{2}{*}{Mean} & 21.32 & 44.73 & 24.37 & 23.42 & 3.06 & 10.79 & 28.54 & 14.24 & 17.75 & 3.45 \\ & $\pm$8.80 & $\pm$18.65 & $\pm$10.56 & $\pm$10.96 & $\pm$2.58 & $\pm$4.02 & $\pm$8.38 & $\pm$4.31 & $\pm$5.14 & $\pm$1.71\\ \hline \end{tabular} \end{table*} \subsection{Comparative Analysis} \label{sec:comparisons} We consider all jump repetitions from all participants as individual measurements, thereby recording 6 jumps per participant and \replaced{96 jumps in total, of which 77 (48 bilateral and 29 unilateral) were valid and used for analysis}{30 jumps in total. However, the invalid jumps are excluded from the analysis}. \added[comment=We have edited this portion to more clearly indicate that correlation plots are used for qualitative comparison only.]{Fig.~\ref{fig:correlation} shows qualitatively how much MMC agrees with the Force Plate and OMC.} \begin{figure}[ht] \centering $\begin{array}{cc} \subfloat[]{% \includegraphics[width=0.46\linewidth]{images/RMM_vs_omc.pdf} } & \subfloat[]{% \includegraphics[width=0.46\linewidth]{images/PTM_vs_omc.pdf} } \\ \subfloat[]{% \includegraphics[width=0.46\linewidth]{images/RMM_vs_fp.pdf} } & \subfloat[]{% \includegraphics[width=0.46\linewidth]{images/PTM_vs_fp.pdf} } \end{array}$ \caption{Correlation between (a) OMC and MMC (RMM); (b) OMC and MMC (PTM); (c) Force Plate and MMC (RMM); and (d) Force Plate and MMC (PTM) for jump height measurement. \added{Best viewed in color.} Each datapoint in each scatterplot represents a single jump repetition.} \label{fig:correlation} \end{figure} \begin{table*}[ht] \centering \caption{Comparative Analysis and Benchmark} \label{tab:comparison} \resizebox{0.9\textwidth}{!}{% \begin{threeparttable} \begin{tabular}{l|lccc|cccc} \hline & \textbf{Method} & \textbf{Ground Truth} & \textbf{Segmentation} & \textbf{Calibrate?} & \added{\textbf{MAE} (cm)} & \textbf{ICC} & \textbf{bias} (cm) & \textbf{LOA} (cm) \\ \hline \multirow{2}{5em}{SoTA$^1$ (Bilateral)} & MMC \cite{Webering_2021_CVPR} & OMC & Auto & Yes & - & 0.68 & \textbf{0.15} & 2.75\\ & MyJump2 \cite{my_jump} & Force Plate & Manual & Yes & - & 0.96 & -0.48 & 2.13\\ \hline \multirow{4}{5em}{Ours (Unilateral)} & MMC$_{RMM}$ & OMC & Auto & No & 1.99 & 0.91 & 1.99 & 2.13 \\ & MMC$_{PTM}$ & OMC & Auto & No & 2.13 & 0.86 & 1.96 & 4.05 \\ & MMC$_{RMM}$ & Force Plate & Auto & No & 2.08 & 0.84 & -1.54 & 4.7 \\ & MMC$_{PTM}$ & Force Plate & Auto & No & 2.21 & 0.87 & -1.57 & 3.8\\ \hline \multirow{4}{5em}{Ours (Bilateral)} & MMC$_{RMM}$ & OMC & Auto & No & \textbf{1.47} & \textbf{0.99} & 1.47 & \textbf{1.86} \\ & MMC$_{PTM}$ & OMC & Auto & No & 2.02 & 0.97 & 1.07 & 4.80 \\ & MMC$_{RMM}$ & Force Plate & Auto & No & 2.09 & 0.93 & -1.59 & 5.40 \\ & MMC$_{PTM}$ & Force Plate & Auto & No & 2.82 & 0.93 & -1.99 & 5.45 \\ \hline \multirow{4}{5em}{Ours (BL and UL)} & MMC$_{RMM}$ & OMC & Auto & No & \added{1.66} & \replaced{0.98}{0.97} & \replaced{1.66}{1.68} & \replaced{2.05}{1.31} \\ & MMC$_{PTM}$ & OMC & Auto & No & \added{2.07} & \replaced{0.96}{0.91} & \replaced{-1.41}{-0.35} & \replaced{4.60}{4.65} \\ & MMC$_{RMM}$ & Force Plate & Auto & No & \added{2.09} & \replaced{0.95}{0.81} & \replaced{-1.57}{-2.26} & \replaced{5.15}{4.25} \\ & MMC$_{PTM}$ & Force Plate & Auto & No & \added{2.59} & \replaced{0.95}{0.70} & \replaced{-1.83}{-3.69} & \replaced{4.90}{4.95}\\ \hline \end{tabular} \begin{tablenotes} \item $^1$State of the art as reported in the respective works. \added{MAE (Mean Absolute Errors) are not reported in these works.} \item BL: bilateral; UL: unilateral; RMM: reverse minmax; PTM: pixel-to-metric. Best value for each metric is shown in \textbf{bold} font face. \end{tablenotes} \end{threeparttable} } \end{table*} For quantitative comparison (Table~\ref{tab:comparison}), we use the \added{Mean Absolute Error (MAE)}, intraclass correlation coefficient \cite{icc} (ICC), and Bland-Altman analysis \cite{bland1986statistical} (BA)\replaced{. ICC and BA}{, which} are often used for comparing new methods of measurements with a gold standard \cite{my_jump,Webering_2021_CVPR}. \added{The ICC has also been used to check for consistency and agreement across different methods of measuring jump height \cite{gavin}.} \added{We take the simultaneous capture of each jump by FP, OMC, PTM and RMM each as a "trial". We then compute ICC for four pairs of 'trials': OMC vs RMM, OMC vs PTM, FP vs RMM, and FP vs PTM, where FP and OMC are taken as ground truths.} The ICC $\in [0,1]$ gives the \replaced{consistency of PTM and RMM with the ground truths}{reliability of the new measurement technique}, where a value closer to 1 means higher consistency. \added{While a high ICC does not necessarily mean close agreement, it shows the level of consistency of deviations from the ground truth.} We obtain the ICC using the Pingouin \cite{vallat_2018} \emph{intraclass\_corr} module\deleted{, while we compute the Pearson's correlation coefficient using scipy \cite{SciPy}}. The Bland-Altman analysis is often used in clinical settings to visualize the agreement between two different methods of quantifying measurements based on bias and limits of agreement (LOA) \cite{giavarina_2015}. The bias $b$ for each MMC measurement technique compared to ground truth is given by the mean of the differences between individual measurements. The LOA is obtained from the confidence interval, defined as $c_0=b-1.96SD$ and $c_1=b+1.96SD$, where $SD$ is the standard deviation of the differences between the two measurements. At least 95\% of joint positions measured with MMC will deviate from OMC by a value within the range $[c_0,c_1]$. We define $LOA = (c_1 - c_0)/2$, where a smaller $LOA$ means better agreement with ground truth. We perform Bland-Altman analysis using statsmodels \cite{seabold2010statsmodels}. Based on ICC, bias, and LOA, and simplicity of setup, we put this work in context with similar approaches (Table~\ref{tab:comparison}). \subsubsection{MMC vs OMC} \label{sec:mmc_vs_omc} First, the accuracy of MMC in quantifying jump height is evaluated with OMC as ground truth. \added{Unlike in Section~\ref{sec:com_vs_toe}}, both MMC and OMC are measured using the vertical displacement of the toe \added{in this instance}. As shown in Table~\ref{tab:comparison}, both MMC$_{RMM}$ and MMC$_{PTM}$ achieve results comparable with the work of \cite{Webering_2021_CVPR}, which was also evaluated using OMC equipment. It is worth noting that our PTM approach assumes a simpler setup without manual calibration. \subsubsection{MMC vs Force Plate} \label{sec:mmc_vs_force_plate} The jump height measured from the force plates is taken as the main ground truth in this section. As shown in Table \ref{tab:comparison}, MMC$_{RMM}$ and MMC$_{PTM}$ fall short of the results achieved with MyJump2 \cite{my_jump}, \replaced{especially during unilateral jumps}{which was also evaluated using Force Plates}. This is because the MyJump2 app involves manual selection of start and end frames of jumps, and also requires subjects to be 1\emph{m} away from the camera. In addition, effective usage of MyJump2 may also require a second party holding the camera. On the other hand, our methods are simpler and more convenient, requiring only a tripod stand and one calibrating jump. \section{Discussion} \label{discussion} In this study, we have evaluated the accuracy of 2D markerless motion capture with a single smartphone in quantifying vertical jump height during countermovement jumps. Optical motion capture (OMC) was performed using CODA, and markerless motion capture (MMC) was performed using OpenPose with a single smartphone camera. Jump heights obtained from force plate flight times were used as the first ground truth for evaluating jump height, while OMC was used as the second ground truth. We found that MMC can quantify jump heights \added{with MAE between 1.47 and 2.82 cm without manual segmentation and camera calibration. We also obtained ICC between 0.84 and 0.99. The greatest agreement is found between OMC and MMC$_{RMM}$ (LOA between 1.86 cm and 5.40 cm) because Reverse MinMax is performed based on OMC. On the other hand, MMC$_{PTM}$ is more prone to errors (LOA between 3.8 cm and 5.45 cm) since noise in the jump time series is further amplified by the pixel-to-metric conversion factor, $\mathcal{R}$.} \added[comment=Here we comment on the acceptability of the limits of agreement and provide an example to give context.]{Although our proposed methods achieve comparable results, the acceptability of LOA will depend on measures similar to the \emph{minimally important difference} \cite{carton_filan_2020} (MID) in each application context. In order to be acceptable, the LOA should be smaller than the MID. For example, the MID in an elite sports context would be considerably smaller than the MID in recreational athletes.} \added{There are some limitations to this approach. For example,} the pixel-to-metric conversion \deleted{still requires improvement. The current technique} requires a calibrating jump, and movements towards or away from the camera during each task change the pixel-to-metric scale. \replaced{In general, }{ In addition to the above, we also show that when measured using the flight time of the centre of mass, CMJ jump height can be overestimated by a large margin (Table~\ref{tab:com_vs_toe}). However, the centre of mass could be estimated as the mean of several body parts as in \cite{Webering_2021_CVPR}, but this may introduce more errors. }the main sources of error we identify in MMC are: \begin{enumerate} \item Video quality. The quality of the video and the amount of clutter in the background \deleted{directly} affect the \replaced{confidence}{quality} of \added{detected keypoints during} pose estimation\deleted{output}. \item \added{Video viewpoint. Accurate detection of body parts is affected by video viewpoint. For example, pose estimation sometimes fails when used for unilateral CMJ in the side view (Fig.~\ref{fig:noisy}). Future studies will explore other views for the unilateral CMJ.} \item Noise in HPE output. The noise level could be influenced by HPE model accuracy, background clutter, and lighting conditions. \item Approximations. Preprocessing steps such as smoothing, segmentation, MMC scaling and pixel-to-metric conversion involve approximations, introducing errors. \end{enumerate} \deleted{While we identify these potential sources of error, We also note that }The Force Plate and OMC are \added{also} prone to errors due to human factors. \replaced{For example, OMC coordinates drop to zero when participants' hands or clothes occlude markers. Force values are also affected if participants step outside the force plates momentarily}{The OMC is especially prone to errors due to skin artefacts, clothing, and marker placement}. \section{Conclusion} \label{sec:conclusion} The results of the analyses in this study suggest that markerless motion capture with a single smartphone is feasible. However, its use case will depend on the domain-specific minimally important differences (MID). For example, for applications with very small MID, monocular MMC could provide enhanced feedback and/or augmentation for body-worn sensors and markers. On the other hand, for applications such as measuring countermovement jump height, \replaced{MMC frame-by-frame tracking accuracy}{the accuracy of the raw motion time series} is not critical. Hence, as shown in this study, 2D monocular MMC could potentially replace sensors and physical markers for such applications. This study focuses on two variants of one motor task with \replaced{sixteen}{five} participants. Future studies will focus on improving and generalizing the techniques used to cover a comprehensive range of motor tasks\deleted{involving more participants}. \added{In addition, the videos used in this study were captured in the side view. Future studies will consider other views and their effects on capture techniques.}
{ "attr-fineweb-edu": 2.052734, "attr-cc_en_topic": 0, "domain": "arxiv" }
BkiUdU05qYVBWth0LYtM
\section*{Abstract} We can construct passing networks when we regard a player as a node and a pass as a link in football games. Thus, we can analyze the networks by using tools developed in network science. Among various metrics characterizing a network, centrality metrics have often been used to identify key players in a passing network. However, a tolerance to players being marked or passes being blocked in a passing network, namely the robustness of the network, has been poorly understood so far. Because the robustness of a passing network can be connected to the increase of ball possession, it would be deeply related to the outcome of a game. Here, we developed position-dependent passing networks of 45 matches by 18 teams belonging to the Japan Professional Football League. Then, nodes or links were continuously removed from the passing networks by two removal methods so that we could evaluate the robustness of these networks against the removals. The results show that these passing networks commonly contain hubs (key players making passes). Then, we analyzed the most robust networks in detail and found that their full backs increase the robustness by often invoking a heavier emphasis on attack. Moreover, we showed that the robustness of the passing networks and the team performance have a positive correlation. \section*{Keywords} Football passing networks, network science, robustness, attack, error \section{Introduction\label{sec:introduction}} Team sports contain complex interactions with the players in their own and opposing teams and the dynamics excite many audiences. Team sports such as football (soccer), basketball, rugby, and hockey can be classified into invasion sports because players score by putting a ball (or puck) into their opponent's goal while they also defend their goal against attacks by their opponent \cite{Gudmundsson2017ACMComputSurv}. In football, each player passes the ball to another player on the team. Thus, if we consider a player as a node and a pass as a link, we can construct a passing network which allows us to scientifically analyze the network by using various tools developed in network science \cite{Gudmundsson2017ACMComputSurv, Buldu2018FrontPsychol, Buldu2019SciRep, Narizuka2014PhysicaA}. The ways of constructing passing networks can be generally classified into three types \cite{Gudmundsson2017ACMComputSurv}: (i) player passing networks where a player corresponds to a node and a pass corresponds to a link \cite{Grund2012SocNetworks}, (ii) pitch passing networks where a specific area in the field corresponds to a node instead of a player and the areas are connected by passes (links) \cite{Cintia2015MLDMSAW}, or (iii) pitch-player passing networks where a player in a certain area at the moment of the pass is a node \cite{Cotta2013JSystSciComplex, Narizuka2014PhysicaA}. Once a network is constructed, we can apply some metrics developed in network science. See, for example, Ref.~\cite{Buldu2019SciRep} where various metrics have been applied. Previous studies have focused on which players are important in passing networks. In those analyses, the most used metric was centrality. There are various kinds of centrality. In football analyses, degree centrality \cite{Grund2012SocNetworks}, betweenness centrality \cite{Pena2012ProcEPSC}, flow centrality \cite{Duch2010PLoSONE}, closeness centrality \cite{Pena2012ProcEPSC}, and eigenvector centrality \cite{Cotta2013JSystSciComplex} have been applied. For example, Grund focused on degree centrality and analyzed a huge amount of passes in English Premier League games. He found that concentrating too much on passes to certain players results in decreasing team performance \cite{Grund2012SocNetworks}. Pe\~{n}a and Touchette focused on betweeness centrality \cite{Pena2012ProcEPSC}. Betweeness centrality measures how the flow of the ball between players depends on certain players. Namely, it measures how many passes are conducted through those players. Thus, we can consider that it reflects the impact of removing those players. They argued that it is important that the betweeness centrality of each player should not be biased to specific players from a tactical point of view. Apart from centrality, clustering coefficient is also used as one of the metrics \cite{Pena2012ProcEPSC}. This metric calculates the degree of triangles for all the neighbors for a certain player. Thus, it reflects the contribution of the player against the local robustness of the passing network. It is also known that the coordination of three professional players which form a triangle is far more elaborated than that of beginners in a ball passing situation \cite{Yokoyama2018PhysRevE}. However, in those traditional analyses of football passing networks, the robustness of the networks has been poorly understood where players or passes are subject to continuous removals. Because the robustness of a passing network can be directly connected to the increase of ball possession, it would be related to the outcome of a game. In network science, with the pioneering work by Albert et al. \cite{Albert2000Nature}, the robustness of networks against continuous node removals has been successively addressed \cite{Albert2002RevModPhys, Holme2002PhysRevE, Strogatz2001Nature, Newman2003SIAMRev, Goh2002PhysRevLett, Paul2004EurPhysJB}. Therefore, we aim to identify the characteristics of passing networks in football by applying those discoveries. In this paper, we constructed position-dependent passing networks from coordinate data of passes in 45 matches of the Japan Professional Football League (J League). Then, we analyzed the robustness of the networks against the two types of continuous node or link removals. As a result, we found that all of those passing networks have hubs. We further conducted the analysis of the most robust team. \section{Methods} \subsection{Dataset} We use the data of 45 matches of the J1 league (the top division of J League) as the dataset provided by DataStadium Inc., Japan. DataStadium has a contract with J League to collect and sell data. We use the dataset for this research under the permission by DataStadium. The dataset is composed of five matches per team for all 18 teams in the second stage of 2018 (from August 10 to September 2), that is, $(18 \times 5)/2=45$. We label the 18 teams ``Kawasaki'', ``Hiroshima'', ``Kashima'', ``Sapporo'', ``Urawa'', ``Tokyo'', ``C Osaka'', ``Shimizu'', ``G Osaka'', ``Kobe'', ``Sendai'', ``Yokohama'', ``Shonan'', ``Tosu'', ``Nagoya'', ``Iwata'', ``Kashiwa'', ``Nagasaki'', respectively. Note that we sorted the teams in ascending order based on the annual ranking in the 2018 season, namely, ``Kawasaki'' was the champion and ``Nagasaki'' was the lowest in that season. We used NetworkX for the visualization and analyses of networks. NetworkX is a Python library. \subsection{Construction of passing networks} We constructed a passing network from coordinate data of incoming and outgoing passes and players' numbers where a player corresponds to a node and a pass corresponds to a link. We only used the data of successful passes and excluded passes stolen by the opponent team or plays out of bounds. We included throw-ins and set-pieces in the data. The obtained data was 36,070 lines in total. The sample of the data is shown in Table \ref{pass-data}. Each line represents pass sender's and receiver's IDs with their locations (X-coordinates and Y-coordinates). From this data, we constructed passing networks. \begin{table}[htbp] \centering \begin{tabular}{|c|c|c|c|c|c|} \hline \multicolumn{3}{|c|}{Pass sender} & \multicolumn{3}{c|}{Pass receiver} \\ \hline Player ID & X-coordinate & Y-coordinate & Player ID & X-coordinate & Y-coordinate \\ \hline 1200017 & $-6.5$ & 85.5 & 1400359 & $-69$ & 70 \\ 1400359 & $-69$ & 70 & 1100134 & $-110$ & 74.5 \\ 1301053 & $-139.5$ & 104 & 1000646 & $-113.5$ & 85.5 \\ - & - & - & - & - & - \\ \hline \end{tabular} \caption{Sample of football passing data. Only three lines are shown although the data contains 36,070 lines in total.} \label{pass-data} \end{table} We constructed a passing network per team in a match. An example is shown in Fig.~\ref{fig:pass_networks}. This is a match between Nagoya versus Yokohama and the network corresponds to Nagoya. For Nagoya, the direction of offense is upward. Figure \ref{fig:pass_networks}(A) corresponds to the original passing network. In the network, we create a node from a coordinate where a player passes the ball (start point) to the other coordinate where another player receives the ball (end point). \begin{figure}[h] \centering \includegraphics[width=\textwidth]{nagoya_net_color.eps} \caption{Passing network created from the data of Nagoya in Nagoya vs.~Yokohama on August 15, 2018. (A) Original passing network. (B) Position-dependent passing network transformed from (A). Multiple links are allowed for a pair of nodes.} \label{fig:pass_networks} \end{figure} In the Fig.~\ref{fig:pass_networks}(A) style, because each location is regarded as a different node, it is unclear which players collect passes. Thus, for our analyses, we employ position-dependent networks where the field is divided into some areas depending on the locations \cite{Cotta2013JSystSciComplex, Narizuka2014PhysicaA, Narizuka2015JPhysSocJpn}. We can consider several different ways to divide the field. Here, we divided it into $4 \times 6=24$ and constructed networks. In our position-dependent networks, we regard a player with different locations in the same area as the same (one node). In contrast, we distinguish passes with different locations in the same area as different passes. Thus, the position-dependent networks allow multiple links. Figure \ref{fig:pass_networks}(B) is the position-dependent passing network transformed from the original passing network (Fig.~\ref{fig:pass_networks}(A)). In Fig.~\ref{fig:pass_networks}(A), a player with different locations in the same area is regarded as different nodes because their coordinates are different. In Fig.~\ref{fig:pass_networks}(B), the player is regarded as one node. However, passes in the same area are not integrated into one pass. Thus, the network is a weighted network which reflects the number of passes. The line thicknesses represent the weights. In Fig.~\ref{fig:pass_networks}(B), the players (nodes) are aligned from the bottom left to the top right diagonally, and they move horizontally from left to right based on the players' numbers in each line. That is, in the figure, the exact locations of passes are not reflected. In the following, we analyze these position-dependent passing networks. \subsection{Network models for comparison} To identify the characteristics of the robustness of passing networks in football, we constructed two-type model-based networks for comparison. These two networks are Exponential (E) network and Scale-free (SF) network. These were analyzed in the network tolerance study \cite{Albert2000Nature}. We also adopted these two networks as the representatives although we can consider other different types of networks. Those two networks can be considered extreme in the sense that E networks are random and SF networks are centralized. Obviously, football passing networks are neither random nor scale-free. However, those two types of well-known networks are still useful for comparison with our football passing networks to extract the characteristic properties of the passing networks. In the E networks, most of the nodes have similar degrees and the nodes are randomly connected. We generated these networks by using Er\H{o}s-R\'enyi's algorithm \cite{Erdos1960PublMathInstHungAcadSci}. In the original algorithm, all pairs are selected once and each pair is connected with a certain probability $p$. Here, to compare them with football passing networks, we randomly select a pair and connect the pair with $p$. This is repeated for the number of pairs instead of selecting all pairs once. Thus, the generated E networks allow multiple links. On the other hand, in the SF networks, most of the nodes have a few links while a few nodes, called hubs, gather many links. The degree distributions show a power-law. We generated these networks by using Barab\'asi-Albert algorithm \cite{Barabasi1999Science}. Initially, a network starts from a few nodes which are connected by links. Then, a new node is repeatedly introduced. Every new node is connected to existing nodes in proportion to the degree of existing nodes. This mechanism is called a preferential attachment. In the original algorithm, multiple links are not allowed but here we allow them to compare with football passing networks. It means that when a new node is introduced, it can connect to the same node multiple times. Based on those algorithms, we generated 100 networks for each. The size of the networks is $N=150$ and the average degree is $\langle k \rangle \simeq 8$. These parameter values were used because they are suitable to compare with football passing networks. The values of the passing networks are shown later. \subsection{Continuous node or link removals} We analyze the robustness of passing networks of all J1 teams by conducting continuous node or link removals on the networks (e.g., Fig.~\ref{fig:pass_networks}(B)). We compare the robustness of the networks with that of E and SF networks by removing nodes or links in the two types of networks as well. For node removals, we employ the two ways used in Albert's model \cite{Albert2000Nature}. The first one is called ``error'' where a node is randomly selected and removed, one after another. The other one is called ``attack'' where the largest hub is selected and removed, one after another. For link removals, we also employ ``error'' and ``attack.'' In the error, a pair of two nodes are randomly selected. In the attack, the pair of nodes which has the maximum links is selected. In both cases, all links between the selected pair of nodes are removed. Some possible interpretations of the continuous node or link removals in football games are as follows. Continuous node removals correspond to repeated man-marks for specific players when focusing on plays or repeated zone defenses when focusing on positions. By those defenses, players become less functional. On the other hand, continuous link removals may be easier to interpret than node removals. The link removals correspond to the situation that passes between a pair of two players are blocked or lanes are closed one after another by the opponent's team. We model these situations by continuous node or link removals and analyze the robustness of networks. \subsection{Robustness measurement} We use three measures, the diameter of a network, $d$, the algebraic connectivity (the second-smallest eigenvalue of the Laplacian matrix of a network), $\lambda_2$, and the relative size of the largest cluster, $S$, to quantify the robustness of networks against the number of removed nodes, $n_{\rm R}$ \cite{Albert2000Nature}, or links, $l_{\rm R}$. A network diameter denotes the longest path among the shortest paths between any pair of nodes. In general, when a hub node is removed, the diameter becomes longer because the removal makes many of the shortest paths vanish. An algebraic connectivity reflects how well connected the overall network is. If the value of a network is large, it implies the network is well connected. The value becomes zero when a network splits into two components. The largest cluster implies the largest connected subgraph in a network. We use the relative size of the largest cluster. The value of $S$ starts from 1 and it decreases as nodes are removed. We evaluate the robustness of passing networks by analyzing the change of the three measures $d$, $\lambda_2$, and $S$ against continuous node or link removals. Here, we explain how the process of removing nodes or links works to determine the robustness of a network by the three measurements. We continuously conduct node removals or link removals for each passing network. A network is regarded as robust when the diameter $d$ remains low even after nodes or links are largely removed because the shortest paths between nodes are not lost. Also, when the algebraic connectivity $\lambda_2$ remains high even after nodes or links are largely removed because it implies that the network is well connected as a whole. Finally, when the relative size of the largest cluster $S$ remains high even after nodes or links are largely removed because it implies that the network is not split into multiple sub-networks. \section{Results} \subsection{Passing network characteristics} First, we summarize the characteristics of the position-dependent passing networks of J1 teams typically represented by Fig.~\ref{fig:pass_networks}(B) in Table \ref{table:network_char}. In Table \ref{table:network_char}, the teams are sorted in descending order of the number of weighted links (passes). Here, we only show the top three and the bottom three teams in all 18 teams. The data of all teams are provided in Supplementary Material (Table S1). The values are averaged over five matches. Obviously, the number of passes of Kawasaki is much higher than the other teams. Kawasaki is known for possessing the ball by passing to one another \cite{KawasakiURL}. Thus, our results supported this fact. \begin{table}[hbtp] \centering \caption{Characteristics of the position-dependent passing networks of J1 teams sorted by the number of weighted links (passes) in descending order. Only the top three and bottom three teams are shown.} \begin{tabular}{lcrrr} \hline & Team & \multicolumn{1}{c}{\# of links} & \multicolumn{1}{c}{\# of nodes} & \multicolumn{1}{c}{$\left<k\right>$} \rule[-10pt]{0pt}{25pt} \\ \hline & Kawasaki & 719.8 & 164.4 & 8.76 \rule[-10pt]{0pt}{25pt} \\ Top 3 & Kobe & 505.6 & 142.2 & 7.11 \rule[-10pt]{0pt}{20pt} \\ & Sapporo & 452.8 & 154.0 & 5.88 \rule[-10pt]{0pt}{20pt} \\ \hline & Shonan & 313.2 & 147.2 & 4.26 \rule[-10pt]{0pt}{25pt} \\ Bottom 3 & Shimizu & 296.0 & 137.4 & 4.31 \rule[-10pt]{0pt}{20pt} \\ & Tosu & 264.0 & 132.6 & 3.98 \rule[-10pt]{0pt}{20pt} \\ \hline \end{tabular} \label{table:network_char} \end{table} \subsection{Passing network robustness} \subsubsection{Diameter change} Figure \ref{fig:diameter_N} shows the changes of the diameter $d$ for football passing networks (Figs.~\ref{fig:diameter_N}(A) and (C)) and the two network models (Figs.~\ref{fig:diameter_N}(B) and (D)) against node removals. Errors (random node removals) correspond to (A) and (B) while attacks (hub node removals) correspond to (C) and (D). We only show the biggest two and smallest two teams for $d$ at $n_{\rm R}=6$ against errors in Fig.~\ref{fig:diameter_N}(A) and (C). The biggest two teams are G Osaka and Kashima from the top and the smallest two teams are Kawasaki and Sapporo from the bottom in Fig.~\ref{fig:diameter_N}. \begin{figure}[h] \centering \includegraphics[width=\textwidth]{diameter_N.eps} \caption{Change of the diameter $d$ against node removals $n_R$ in errors (A and B) and attacks (C and D). (A and C) are the cases of football passing networks. Five matches are averaged for each line. (B and D) are the cases of the E and SF networks. One-hundred realizations are averaged for each line. Note that the scales between the passing networks and the network models are different because the former has a spatial limitation as explained in the text.} \label{fig:diameter_N} \end{figure} \begin{figure}[h] \centering \includegraphics[width=\textwidth]{diameter_L.eps} \caption{Change of the diameter $d$ against link removals $l_R$ in errors (A and B) and attacks (C and D). The same settings as Fig.~\ref{fig:diameter_N} are used.} \label{fig:diameter_L} \end{figure} First, we focus on the change of $d$ against errors. Figure \ref{fig:diameter_N}(A) shows the case of the football passing networks. The value of $d$ in the four teams does not change even if $n_{\rm R}$ is increased, which suggests that football passing networks are robust against errors. On the other hand, the values of $d$ in Kawasaki and Sapporo are smaller than those in G Osaka and Kashima. We compare the passing networks with the two network models. When $n_{\rm R}$ is increased, the value of $d$ does not change in the E networks while that value slightly increases in the SF networks. In the SF networks, when a hub is removed from a network, it greatly increases $d$. This situation sometimes happens because we adopted the small size of network ($N=150$). (This situation does not happen when the network size is large as shown in Albert et al. \cite{Albert2000Nature}). By comparing Fig.~\ref{fig:diameter_N}(A) with Fig.~\ref{fig:diameter_N}(B), we found that $d$ in the passing networks ($8 \leq d \leq 11$) is two to three times larger than that in the E and SF networks ($4 \leq d \leq 5$). The position-dependent passing networks in football have a spatial limitation. In the networks, short passes are more often observed than long passes. In other words, passes to the next area in the field tend to be larger. In contrast, there is no spatial limitation in the E and SF networks. Thus, the values of $d$ in the passing networks are much larger than the ones in the network models. Next, we focus on the change of $d$ against attacks. Figure \ref{fig:diameter_N}(C) shows the case of the football passing networks. The value of $d$ in the four teams increases as $n_{\rm R}$ is increased, which suggests that football passing networks are vulnerable to attacks. However, the amount of change in Kawasaki is small even if $n_{\rm R}$ is increased. It means that Kawasaki's networks are tolerant against attacks to a certain extent. We compare the passing networks with the two network models. The diameter of the E networks was slightly increased against attacks (Fig.~\ref{fig:diameter_N}(D)). Basically, there is not much difference among the degrees of nodes in the E networks. However, because the network size is small ($N=150$), it generates the tendency that some nodes have relatively higher degrees while some other nodes have relatively smaller degrees. Thus, when the higher degree nodes are removed by attacks, the value of $d$ becomes larger even in the E networks. In contrast, the value of $d$ in the SF networks is rapidly increased against attacks because shortest paths vanish due to the hub node removals. Thus, the SF networks are extremely vulnerable to attacks. As shown in Fig.~\ref{fig:diameter_N}(C), we observed that the values of $d$ in the passing networks increase as $n_{\rm R}$ becomes larger although the change in Kawasaki is small. This is because the passing networks are characterized by the existence of some hub nodes (key players making passes). We see the same measurement $d$ against link removals in Fig.~\ref{fig:diameter_L}. The same labeling with Fig.~\ref{fig:diameter_N} is used for Fig.~\ref{fig:diameter_L}. We pick the biggest two and smallest two teams for $d$ at $l_{\rm R}=6$ against errors in Fig.~\ref{fig:diameter_L}(A) and (C). The difference with Fig.~\ref{fig:diameter_N} was that the second biggest team was Hiroshima instead of Kashima. In (A) and (C), we exclude the data when the decrease of $d$ which is larger than 0.05 continues twice in a row because it basically suggests that the network is split into two or more components. Compared to the node removals (Fig.~\ref{fig:diameter_N}) with the link removals (Fig.~\ref{fig:diameter_L}), $d$ becomes larger as $l_R$ increases even in the case of errors. This is because all links between two selected nodes are removed in the case of link removals. It contributes to the increase of $d$ because paths tend to disappear due to the link removals. Thus, in the case of link removals, the difference between errors and attacks is diluted although the increase in attacks is slightly bigger than that in errors, which makes it difficult to characterize passing networks as $d$ of link removals. However, $d$ in Kawasaki's network remains low against errors and attacks, which suggests that Kawasaki's network is robust. In summary, we found that the passing networks were well characterized by node removals rather than link removals for $d$. In the node removals, the passing networks were robust against errors but vulnerable to attacks, suggesting that the passing networks have some hubs. We also found that the smallest change of $d$ was Kawasaki, thus Kawasaki was most robust against continuous node and link removals in the passing networks. \subsubsection{Algebraic connectivity change} Next, we focus on the change of the algebraic connectivity $\lambda_2$ (Fig.~\ref{fig:laplacian}). Algebraic connectivity is represented by $\lambda_2$ which is the second-smallest eigenvalue of the Laplacian matrix of a network. If the value is large, it implies that the network is well connected. When a network is split into two components, $\lambda_2$ becomes zero. Figures \ref{fig:laplacian}(A) and (B) represent the case of node removals ($n_R$) where (A) corresponds to passing networks while (B) is network models. For the passing networks, we only show the biggest two and smallest two teams for $\lambda_2$ at $n_{\rm R}=1$ against errors. The biggest two teams are Kawasaki and Sapporo from the top and the smallest two teams are G Osaka and Kashima from the bottom. In both passing networks (A) and network models (B), the values approach zero as $n_R$ becomes larger. Basically, in both networks, the decreases of $\lambda_2$ in the case of attacks is larger than that in the case of errors. This is because when hubs are removed by attacks, it greatly decreases the connectivity of a network. When we focus on the difference between attacks and errors in network models (B), the SF networks are bigger than the E networks. Because hubs are connected from many other nodes in the SF networks, the effect of hub removals tends to be bigger in the SF networks compared to the E networks. In passing networks (A), the difference between attacks and errors is large because passing networks contain some hubs. Note that the value of $\lambda_2$ in network models (B) is much higher than that in passing networks (A). Passing networks are locally connected due to the position-dependent characteristic, which makes the connectivity throughout the networks weak. \begin{figure}[h] \centering \includegraphics[width=\textwidth]{laplacian.eps} \caption{Change of the algebraic connectivity $\lambda_2$. (A) and (C) are the cases of football passing networks against $n_R$ and $l_R$, respectively. Five matches are averaged for each line. (B) and (D) are the cases of the E and SF networks against $n_R$ and $l_R$, respectively. One-hundred realizations are averaged for each line.} \label{fig:laplacian} \end{figure} Figures \ref{fig:laplacian}(C) and (D) represent the case of link removals ($l_R$) where (C) corresponds to passing networks while (D) is network models. For (C), we only show the biggest two and smallest two teams for $\lambda_2$ at $l_{\rm R}=1$ against errors. The biggest two teams are Kawasaki and Kobe from the top and the smallest two teams are G Osaka and Kashima from the bottom. The difference with (A) was that the second biggest team was Kobe instead of Sapporo. The biggest difference between (A) and (C) is that $\lambda_2$ in the case of attacks are higher than that in the case of errors in the link removals (C). In this case, links around a hub player (key player regarding passes) tend to be selected. Let's label the key player X and assume that the maximum number of links exists between X and another player Y. Then, in attacks, the first removal is conducted for all the links between X and Y. If the second largest number of links were to exist between X and another player Z, then the second attack will be conducted for all the links between X and Z. In this way, links which belong to the neighbors of a hub player tend to be selected for removal in the case of attacks. However, even if these links are removed, these removals do not affect the connectivity of the whole network much because these removals are conducted only between the hub node and the neighbors. In contrast, in the case of errors, links between a randomly selected pair are removed. These removals affect the connectivity of the whole network because it may split the network. This is why $\lambda_2$ in the case of attacks is higher than that in the case of errors. In network models (D), the removal of one pair which has the maximum number of links affects the network connectivity because almost all links are concentrated on only a few nodes. As a result, we do not see any similar properties between (C) and (D). Therefore, we conclude that it is better to use $n_R$ than $l_R$ when we compare passing networks with network models. When hub nodes (key players) in passing networks are removed, it greatly affects the whole connectivity of the networks. \subsubsection{Largest cluster change} Finally, we focus on the change of the relative size of the largest cluster $S$ (Fig.~\ref{fig:cluster}). Figure \ref{fig:cluster}(A) shows the case of the passing networks against node removals ($n_R$). We only show the biggest two and smallest two teams for $S$ at $n_{\rm R}=80$ against errors. The values of $S$ in all four teams linearly decrease as $n_{\rm R}$ becomes larger in the case of errors (random node removals), which means that each network is connected as a whole. On the other hand, the values of $S$ in all four teams suddenly fall as $n_{\rm R}$ becomes larger in the case of attacks. The reason is that when hub players are removed by attacks (hub node removals), the networks become disconnected and divided into quite small subgraphs. \begin{figure}[h] \centering \includegraphics[width=\textwidth]{cluster.eps} \caption{Change of the relative sizes of the largest cluster $S$. (A) and (C) are the cases of football passing networks against $n_R$ and $l_R$, respectively. Five matches are averaged for each line. (B) and (D) are the cases of the E and SF networks against $n_R$ and $l_R$, respectively. One-hundred realizations are averaged for each line.} \label{fig:cluster} \end{figure} We compare the passing networks with the two network models against node removals. We see the changes of $S$ in the E and SF networks (Fig.~\ref{fig:cluster}(B)). In general, in the E networks, it has been shown that there is no difference against errors and attacks (See Fig.~3a in Albert et al.~\cite{Albert2000Nature}) if the network size is large (e.g., $N=10,000$). Here, we observed the difference between errors and attacks in the E networks because there are some differences in the degrees among nodes due to the small network size ($N=150$). In the SF networks, the value of $S$ suddenly falls against attacks but linearly decreases against errors. Thus, the impact of removing hubs is large in the SF networks. In Fig.~\ref{fig:cluster}(A), we found that $S$ nosedives in the passing networks, although the bottom two teams (Shimizu and Tosu) are more intense than the SF networks. These results also suggest that the passing networks have some hubs (key players making passes). Figures \ref{fig:cluster}(C) and (D) show the same measurement $S$ against link removals ($l_R$). The same as with Fig.~\ref{fig:laplacian}(C), $S$ in attacks is larger than that in errors. For the same reason as Fig.~\ref{fig:laplacian}(C), links which belong to the neighbors of a hub player tend to become the targets for removals. Thus, the largest cluster of a network remains because those removals barely split the network. There is quite little difference among the results in the two network models (D). In these networks, each network is well connected as a whole. Thus, those networks are not affected by the link removals unless the number of removals is much larger. We again see the characteristics of passing networks are well captured by node removals rather than link removals. We also found that Kawasaki's network was most robust because the largest cluster of the network decreased to be the slowest against attacks. \subsection{Detailed analysis of Kawasaki passing network} The three analyses above revealed that Kawasaki was the most robust in all 18 teams. Figure \ref{fig:kawasaki_cluster} shows the changes of the largest cluster $S$ against $n_R$ in all teams. Obviously, Kawasaki was the most robust by far. Thus, we focus on Kawasaki's network in detail. \begin{figure}[h] \centering \includegraphics[width=100mm]{kawasaki_cluster.1027.eps} \caption{Change of the relative size of the largest clusters $S$ against attacks for all 18 teams.} \label{fig:kawasaki_cluster} \end{figure} We show Kawasaki's typical network (Fig.~\ref{fig:pass_network_kawasaki}). In the figure, players' numbers are displayed on the nodes. The players (nodes) are aligned from the bottom left to the top right diagonally as the same style Fig.~\ref{fig:pass_networks}. As a player touch the ball earlier, the player appears faster. Namely, a player who touched the ball fastest corresponds to the bottom left node. The colors deepen in proportion to the degree (the number of passes). \begin{figure}[h] \centering \includegraphics[width=150mm]{kawasaki_network_wNum.eps} \caption{Kawasaki's position-dependent passing network. Kawasaki vs. Hiroshima on August 19, 2018. The node color deepens when the degree of the node is high.} \label{fig:pass_network_kawasaki} \end{figure} In the left area, Noborizato (number 2); in the center area, Oshima (number 10) and Morita (number 25); in the right area Ienaga (number 41) and Elsinho (number 18) have deep colors. It means that those players are hubs which collect passes. Here, Noborizato (number 2) and Elsinho (number 18) are full backs but they are often observed in the opponent areas, which means they actively assist attacking the opponent's goal. Also, Ienaga (number 41) is basically a hub on the right side but he is also observed in many other areas. Thus, he is involved in passing the ball in broad areas. In many football teams, offensive midfielders and defensive midfielders play the central role of passing the ball. Not only those central players but also some other players are the hubs of passes in Kawasaki's network. This signature contributes to make Kawasaki's passing networks robust. One thing that needs to be paid attention to is that we aggregated the data and used the average values for the analyses. For example, Kawasaki may show good passing effectiveness when playing against some teams but may not when playing against other teams. To elucidate this question, we separated the aggregated data into five individual matches (Fig.~\ref{fig:each_match_kawasaki}). Only the node removal cases are shown because they accurately capture the robustness of passing networks rather than the case of the link removals. We see variations among the matches to some extent for the three measures. This result suggests that Kawasaki's passing networks change depending on the opponent's teams. Especially, Kawasaki's network is most vulnerable (large $d$, small $\lambda_2$, and small $S$ as a whole) to Tosu. The win-loss record of Kawasaki versus Tosu for that match (August 15, 2018) was a draw (0--0). In this way, we must pay attention to the data at the individual match level because there are variations to some extent. However, we also find that Kawasaki's networks are robust against the five teams (Fig.~\ref{fig:each_match_kawasaki}) compared to the other 17 teams, which implies the aggregated data is still effective to evaluate the robustness of passing networks. \begin{figure}[h] \centering \includegraphics[width=150mm]{each_match_kawasaki.eps} \caption{Five individual matches of Kawasaki. (A) Diameter change against errors, (B) Diameter change against attacks, (C) Algebraic connectivity change, and (D) Largest cluster change.} \label{fig:each_match_kawasaki} \end{figure} \subsection{Correlation analysis} The final analysis is to clarify the relationship between the team performances (evaluated by points) and the robustness of the networks (evaluated by $d$ and $S$). The annual rankings are decided by the points. The team with the highest points is the champion in the season. As we showed above, Kawasaki's network was the most robust. Kawasaki became the champion in the season. Here, we investigate whether the similar tendency is observed even for the other teams. Thus, we conducted correlation analyses between the team performance (points) and the robustness of the networks. We only used the data of node removals for this correlation analysis because it accurately captures the robustness of networks compared to link removals. Figures \ref{fig:corr_diam}(A) and (B) are the scatter plots between the points and the diameter $d$ at $n_{\rm R}=10$ where (A) corresponds to errors and (B) corresponds to attacks. We identify whether larger points lead to the robustness of networks (small $d$). In other words, we check whether a negative correlation is observed. In the figure, the Pearson correlation coefficient, $r$, and its $P$ value are shown. As a result, we observed a weak negative correlation in errors ($r=-0.290$) and a negative correlation in attacks ($r=-0.421$), respectively. However, in both cases, the results are not statistically significant ($P>.05$) although we see a tendency that the diameter is smaller as the points are larger. Next, we show the scatter plots between the points and the size of the largest cluster $S$ at $n_{\rm R}=35$ for (C) errors and (D) attacks in Fig.~\ref{fig:corr_diam}. We identify whether larger points lead to the robustness of networks (large $S$). In other words, we see whether a positive correlation is observed. The correlation results showed that positive correlations are observed in both errors ($r=0.399$) and attacks ($r=0.574$), respectively. In the case of errors, the result is not statistically significant ($P>.05$). However, in the case of attacks the result is statistically significant with $P<.05$. Thus, we can conclude that the size of the largest cluster is higher as the points are larger in the case of attacks. In short, even if key players become less functional, keeping the connection in a network leads to a win. \begin{figure}[h] \centering \includegraphics[width=140mm]{scatters.eps} \caption{Scatter plots between the points and the diameter $d$ (A and B) and the points and the size of the largest cluster $S$ (C and D), respectively. (A and C) are for errors. (B and D) are for attacks. The Pearson correlation coefficient, $r$, and its $P$ value are also shown.} \label{fig:corr_diam} \end{figure} \section{Conclusion} We constructed position-dependent passing networks from 45 matches of all J1 teams. Then, we continuously removed the nodes or links by two methods, errors or attacks, to analyze the robustness of the networks. We focused on the change of the diameter $d$, the algebraic connectivity $\lambda_2$, and the size of the largest cluster $S$ to evaluate the robustness of the networks. The results showed that the passing networks were robust against errors but vulnerable to attacks. Thus, passing networks are greatly affected by stopping the movement of key players and cutting passes to and from them. Especially, we found that Kawasaki's network was distinct where the robustness was maintained even if key players making passes were removed. There were multiple key players that collect passes in Kawasaki's network. Kawasaki is known for frequently passing the ball among players. The style of playing may be similar to tiki-taka used in F.C. Barcelona \cite{Buldu2019SciRep}. Finally, we conducted the correlation analysis between the points and the diameter $d$ or the points and the size of the largest cluster $S$. We found that there is a statistically positive correlation between the points and the robustness of the networks in the case of attacks. We can summarize that the robustness of passing networks is closely tied with the team performance. Similar studies focused on key players in passing networks where all plays in the matches are included \cite{Grund2012SocNetworks, Pena2012ProcEPSC}. In contrast, for the first time, by considering continuous node and link removals, our study evaluated the robustness of passing networks after key players or key passes in different locations are removed one after another. This analysis makes it possible to dynamically evaluate the robustness of passing networks. Thus, this is a key strength of our study. There are some limitations in this paper. Although we adopted an undirected graph for a passing network, a directed graph may be suitable because passes are always to one another. However, in this study, when a node is removed, links connected to the node are also removed. Thus, the effect of directions is small for this type of robustness analysis. From another perspective, football networks are spatio-temporal networks whose structures dynamically change in time and space \cite{Gudmundsson2017ACMComputSurv}. We partially incorporated spatial information in our position-dependent networks. However, we ignored the change of dynamical structures of networks in time. For instance, football passing networks are often different between the first half and the second half. In another case, a passing network would change after scoring or receiving a goal. Thus, the dynamical change of networks over time is important. This can be analyzed by the technique of temporal networks and recent studies used the technique for passing networks \cite{Buldu2019SciRep} or formations \cite{Narizuka2019SciRep} in football games. Lastly, the importance of passing networks is different depending on the formation \cite{Tamura2015EPJDataSci} or the style of play. One example is possession versus counter-attack. The robustness of passing networks may be related to winning in the former compared to the latter. In this way, there are many interesting problems in football networks which can be tackled by network science.
{ "attr-fineweb-edu": 2.365234, "attr-cc_en_topic": 0, "domain": "arxiv" }
BkiUeEjxK0iCl36cXqU6
\section{Introduction} \label{sec:Introduction} \subsection{Background} Thanks to the emerging technologies in positioning, numerous positioned data are available and open to relevant research topics. Such data are usually analyzed to figure out different aspects that are hidden in the data, and machine learning is often used to explore the data distributions, study dependences between different attributes, provide modeling of patterns and make predictions. One specific use case is in sports, where the positions and kinetic measurements of athletes can be recorded. Such data analytics is crucial for both the coach and public audience, since they can provide valuable insights into the performance of athletes, which will further assist in deploying attacks and defenses in matches, monitoring physical condition of athletes and so on. One interesting area is to use approaches in data analytics to analyze the kinetics of athletes in a certain sport, which may include studies on cause of motion, namely forces and torques, human movements with respect to the amount of time taken to carry out the activity and possible interactions between the athletes and their equipments and environments. All of these studies belong to the sports biomechanics research area, which are aiming to prevent injuries and improve performance of athletes. There have been numerous research conducted on sports biomechanics. For instance, \citep{BR07} provides a thorough introduction on sport biomechanics including the geometry of sports movements, forces in sport and how they are related by the law of kinetics and how these are related to the human body in a anatomical way. In \citep{PM14}, the effectiveness of different forces in cross country skiing are evaluated by conducting real measurements. However, most research focuses on explicit analysis from physical and biological points of view, and so far, the data-driven approaches, such as machine learning, haven't been fully explored. Another interesting topic in data analytics is the flow modeling and prediction. In literature, various studies arise around this area, which include human motion patterns \citep{ES09}, traffic flow modeling and prediction \citep{YZYD10}, moving patterns of a swarm of animals \citep{RMRR11}, and so on. Flow modeling and prediction can be done both on-line with real-time data, for instance the streaming probe data \citep{RJH10}, or off-line with cached/stored data, for instance the recorded data from surveillance cameras \citep{ES09}. However, to the best of our knowledge, these have never been explored under sport scenario, where a huge amount of positioned trajectories can be available. Historically, the most widely used machine learning approaches include neural networks \citep{smith94short}, support vector machines (SVM) \citep{zhang2008forecasting}, and Gaussian Processes (GP) \citep{MacKayGP97,RW06}. GP is one important class of kernel-based learning methods. First, it is good at exploring the relationship between a set of variables given a training dataset. Second, GP perfectly fits in the Bayesian framework, which allows for explicit probabilistic interpretation of model outputs. All these advantages make GP a powerful tool to address complex nonlinear regression and classification problems. However, the standard GP is computationally demanding when the dataset is large and its size grows with time. To remedy this drawback, a plethora of low-complex GP algorithms have been proposed over the last decade. Representative solutions include (1) reduced-rank approximations of the covariance matrix \citep{RW06} and (2) sparse representation of the complete training dataset \citep{CR05} and (3) partition of the complete dataset into smaller subsets and fusion of all local GP experts \citep{SNS06}, \citep{DN15} and (4) stochastic variational inference approximated GP \citep{Titsias09} and (5) recursive processing based GP including a grid based algorithm \citep{HMF13, Huber14} and a series of state-space model based algorithms \citep{SSH13}. In this paper, we narrow down our focus to the recursive processing based algorithms as they are more attractive for on-line applications. \subsection{Contribution} In this work, we apply GP regression for modeling and prediction in a sport use case, but the proposed framework is generic for other use cases with positioned device trajectories and kinetic measurements, for instance, to model the temporal and/or spatial aspects of motions, speed, and data flows. In this work, we have used the data trajectories from the Falun Nordic World Ski Championships 2015 and the Men's cross country relay race, $4\times10$ km. The contribution of the work can be summarized as: \begin{enumerate} \item A grey-box modeling is proposed to perform force analysis. In a grey-box modeling approach, the internal working mechanism of the system is partially known. To be more specific, the force model in this work is formulated by combining the known deterministic motion kinetics with Gaussian processes regression to accounts for the unknown forces in skiing races. The model can be further used to investigate the performance of a specific skier and to study the differences between various skiing techniques. \item A black-box modeling approach is proposed to model the ground speed. In the black-box modeling approach, the internal working mechanism of the system is completely unknown. In this work, both the standard and a grid based on-line Gaussian process regression \citep{HMF13} are proposed to provide a model specifying the relationships between the speed and position for each individual. For group of individuals, clustering is performed based on a number of features extracted from the training data. \item Not limited to ski race, the proposed approaches are also applicable to various sport activities, such as track and field, car race, horse race, and so on. This can be utilized both on-line for private coaching/public use with real-time data and off-line for analysis with recorded data in batch. \end{enumerate} \subsection{Paper Organization and Notations} The remainder of this paper is organized as follows: Section~\ref{sec:ForceAnalysis} proposes an approach for force analysis in sports, which combines the kinetics of motion with Gaussian processes. Section~\ref{sec:GP based flow modeling} introduces a modeling algorithm for positioned user trajectories based on both standard and the grid based on-line GP. Section~\ref{sec:ClusterFlow} introduces a novel strategy for clustering skiers based on selected features and the aggregated flow modeling for each cluster. Section~\ref{sec: DataDes} provides detailed descriptions of the dataset. Section~\ref{sec:Results} validates and compares the proposed algorithms in various scenarios with real data. Lastly, Section~\ref{sec:Conclusions} concludes the work. Throughout this paper, matrices are presented with uppercase letters and vectors with boldface lowercase letters. The operator $\left[\cdot\right]^{T}$ stands for vector/matrix transpose and $\left[\cdot\right]^{-1}$ stands for the inverse of a non-singular square matrix. The operator $\parallel \cdot \parallel$ stands for the Euclidean norm of a vector. $\mathcal{N}(\mu, \sigma^{2})$ denotes a Gaussian distribution with mean $\mu$ and variance $\sigma^{2}$. \section{Force Analysis for a Single Individual} \label{sec:ForceAnalysis} The effective force applied by an athlete during a sports competition is complex to analyze. With global navigation satellite system (GNSS) trackers on the athlete, it is possible to obtain trajectories with time series of position and velocity estimates. Typically, the vertical position estimate from GNSS is uncertain. Instead, the horizontal position estimate can be used together with ground height information to estimate the vertical position. Based on kinetic relations, it is possible to model the athlete motion with the effective force as unknown. Hence, we propose to combine the information from kinetic models with Gaussian process regression to estimate the latent forces. To be more specific, we propose a generic way for analyzing forces of athletes at certain stages of the competition. We begin by analyzing a cross country skiing scenario, which can be easily extended to other sports with similar moving patterns. In cross country skiing, the skier moves forward by applying forces through poles and skis. In this process, there is also friction from the ice surface and the resistance from air. In order to ease the analysis, we propose the simplified force models as illustrated in Fig.~\ref{fig:fig1} and \ref{fig:fig2}. \begin{figure*}[t] \centering \includegraphics[width=8.5cm]{Figure1.eps} \caption{Force model for uphill.} \label{fig:fig1} \end{figure*} \begin{figure}[t] \centering \includegraphics[width=8.4cm]{Figure2.eps} \caption{Force model for downhill.} \label{fig:fig2} \end{figure} For uphill, there is a propulsive force $F$ going forward, while at the same time, $f$ in Fig.~\ref{fig:fig1} denotes both the friction from ice and the resistance from air, which is opposite to the moving direction. The mass of skier is denoted by $m$, and $g$ is the gravity of Earth and $\varphi$ is the track incline angle. In Fig.~\ref{fig:fig2}, the force model for downhill is illustrated, where usually there is no propulsive force, but only air resistance and friction from ice (which is denoted by $f$ in Fig.~\ref{fig:fig2}). The skier instead makes use of the gravity $mg$ to move forward with a decline angle of $\varphi$. The skier has to adjust his/her posture to reduce the air resistance. It should be noted that in a real practice, the forces are more complex than what have been illustrated in Fig.~\ref{fig:fig1} and \ref{fig:fig2}. Based on the force models, in what follows, we analyze uphill and downhill scenarios, respectively. Since the forces are changing during different competition stages, the whole track is divided into small segments and we further assume the moving direction, the propulsive forces from a specific skier, air resistance and friction remain the same during each segment. In addition, the time for the skier to finish the segment is measured as $\Delta t$. Hence, according to Newton's law of motion, the change in velocity per time unit in an uphill segment can be formulate as \begin{equation} \frac{\Delta \bm{v}}{\Delta t} m = F-\textrm{f}-mg\sin \varphi, \label{eq:motionLaw} \end{equation} where the mass of skier is usually known and the incline angle $\varphi$ can be computed as the slope of the track is known. The change in velocity per time unit can be denoted as $a\triangleq\frac{\Delta \bm{v}}{\Delta t}$. In modern races, the velocity and position of athletes can be measured at certain fixed time stamps. However, it is typically difficult to measure the propulsive forces, since it comes from both skis and poles. Furthermore, the air resistance and friction on ice are almost impossible to measure, as they depend on many factors, such as the posture of skier, the temperature of ice, and the force from skier that is perpendicular to the track surface. Considering such complexities, we propose to model the resultant force, which is $F-f$, by a Gaussian process, which is formulated as \begin{equation} F_r(d) \triangleq F(d)-\textrm{f}(d) \triangleq r(d)+n_r, \end{equation} where $d$ denotes the distance traveled from the beginning of the track, and it uniquely determines the position of skier when the track is predefined. $n_r$ is additive Gaussian noise with zero mean and variance $\sigma_{n_r}^2$. Hence, the full kinetic is given as \begin{equation} \frac{\Delta \bm{v}}{\Delta t} m = -mg\sin \varphi +r(d)+n_r\label{eq:kineticGP} \end{equation} which consists of both the known gravity and the unknown forces. The function $r(d)$ follows a Gaussian process \cite{RW06} \begin{equation} r(d)\sim \mathcal{GP} (m_r(d), k_r(d,d')) \label{eq:GP_resultantForce} \end{equation} with mean $m_r(d)$ and kernel function $k_r(d,d')=\sigma_r^2\exp\left[-\frac{||d-d'||^2}{l_r^2}\right]$. In order to train the Gaussian process regression model, a set of training dataset is required, which is denoted by $\mathcal{D}\triangleq {(F_{r,1},d_1),\ldots,(F_{r,N},d_N)}$, where $F_{r,i}$ is constructed by $F_{r,i}\triangleq \frac{\Delta \bm{v}_i}{\Delta t_i}m+mg\sin \varphi_i$ for $i=1,\ldots,N$. The joint distribution of all observations $\bm{F}_r \triangleq [F_{r,1},\ldots, F_{r,N}]^T$ is given by \begin{equation} p(\bm{F}_r|\bm{\theta}_r, \mathcal{D}) \sim \mathcal{N} (\mathbf{m}_r(\mathbf{d}), \mathbf{C}_r(\mathbf{d},\mathbf{d})), \label{eq:jointDistGPFA} \end{equation} where \begin{align} \mathbf{d} & \triangleq [d_1, d_2, \ldots, d_N ]^T , \nonumber \\ \bm{\theta}_r &\triangleq [\sigma_r, l_r, \sigma_{n_r}]^T, \nonumber \\ \mathbf{m}_r(\mathbf{d})& \triangleq [m_r(d_1), m_r(d_2), \ldots, m_r(d_N)]^T , \nonumber \\ \mathbf{K}_r (\mathbf{d},\mathbf{d}) & \triangleq \begin{bmatrix} k_r(d_1, d_1) & k_r(d_1, d_2) & \ldots & k_r(d_1, d_{N}) \\[0.2em] \vdots & \vdots & \ddots & \vdots \\[0.3em] k_r(d_{N}, d_1) & k_r(d_{N}, d_2) & \ldots & k_r(d_{N}, d_{N}) \\[0.2em] \end{bmatrix}, \nonumber \\ \mathbf{C}_r(\mathbf{d}, \mathbf{d}) & \triangleq \mathbf{K}_r(\mathbf{d}, \mathbf{d}) + \sigma_{n_r}^2 \mathbf{I}_{N}. \nonumber \end{align} The parameters $\bm{\theta}_r$ can be estimated by maximizing the likelihood function given in \eqref{eq:jointDistGPFA}. The detailed explanations are given in Appendix A. Given a new input location $d_*$ and the training set $\mathcal{D}$, the resultant force can be estimated by \begin{equation} F_r(d_*)|\mathcal{D} \sim \mathcal{N}(\hat{\mu}_r(d_*),\hat{\sigma}_r(d_*) ), \end{equation} where \begin{subequations} \begin{align} \hat{\mu}_r(d_*)&= \mathbf{k}_r^{T}(d_{*}, \mathbf{d}) \mathbf{C}_r^{-1}(\mathbf{d}, \mathbf{d}) (\bm{F}_r - \mathbf{m}_r(\mathbf{d})) + m_r(d_{*}),\\ \hat{\sigma}_r^2(d_*)&= \sigma_{n_r}^2 + \sigma_{r}^2 - \mathbf{k}_r^{T}(d_{*}, \mathbf{d}) \mathbf{C}_r^{-1}(\mathbf{d}, \mathbf{d}) \mathbf{k}_r(d_{*}, \mathbf{d}). \end{align} \end{subequations} To analyze the downhill segment, as shown in Fig.~\ref{fig:fig2}, the following holds according to the law of motion: \begin{align} \frac{\Delta \bm{v}}{\Delta t} m = mg\sin \varphi-\textrm{f}, \label{eq:motionLawDH} \end{align} where $F_r\triangleq-\textrm{f}$ is modeled by \begin{equation} F_r(d) = r(d)+n_r, \end{equation} and $r(d)$ follows a Gaussian process as given in \eqref{eq:GP_resultantForce}. Then, we follow the same procedure applied to estimate the resultant force for uphill. Instead, the training observations $F_{r,i}$ for $i=1,\ldots, N$ are constructed as $F_{r,i}=\frac{\Delta \bm{v}_i}{\Delta t_i} m-mg\sin \varphi_i$. \section{GP Based Flow Modeling and Prediction for a Single Individual} \label{sec:GP based flow modeling} In previous section, we have proposed a grey-box modeling approach for analyzing the forces in, for instance, skiing races. The grey-box modeling approach explores the known kinetic model based on the physical laws and combines it with Gaussian process regression models which are used to account for the unknown forces. It is also possible to perform the modeling of input and output relationship by a black-box approach, where it is not necessary to know explicitly the internal working mechanics. In this section, we propose a black-box approach to analyze the relationship between the ground speed and the position of an individual skier. Concretely, we estimate the ground speed, $v_t = \parallel \mathbf{v}_t \parallel$, for a single skier at a specific position. Since the individual follows a predefined track in this scenario, the position of an individual at time $t$ can be uniquely translated into the distance traveled on the track since the start of the race, denoted by $d_t$ herein. With the definitions given above, the following flow model is formulated \begin{equation} v_t(d) = f(d) + n, \label{eq: speedmodel} \end{equation} where $f(\cdotp)$ is the underlying flow model and $n$ is additive noise, which is assumed to be Gaussian distributed with zero mean and variance $\sigma_n^2$. The focus of this section is to use GP regression to infer the underlying flow model in (\ref{eq: speedmodel}) and predict the ground speed value at any input $d_*$. \subsection{Standard Gaussian Process Regression} \label{subsec: full GP} In this subsection, the standard Gaussian process (SGP) will be introduced and applied to the problem formulated above. Without specifying the time, the previously defined function $f$ can be approximated by a GP, which is given by \begin{equation} f(d) \sim \mathcal{GP}(m(d), k(d,d^\prime)), \nonumber \end{equation} where $m(d) = v_0$ is the mean function (we assume $v_0 = 0$ in this work) and $k(d,d^\prime)$ is the covariance/kernel function. \\ In the training phase, a dataset denoted as, $\mathcal{S} = \left\lbrace (d_1, v_1), \ldots, (d_M, v_M) \right\rbrace$, is collected. Considering the additive noise, the joint distribution of the observed ground speed measured at different distances is given by \begin{equation} p(\mathbf{v} (\mathbf{d})|\mathcal{S}) \sim \mathcal{N} (\mathbf{m} (\mathbf{d}), \mathbf{C} (\mathbf{d}, \mathbf{d})), \nonumber \label{GPdistribution} \end{equation} where $\mathbf{d} \triangleq [d_1, d_2, \ldots, d_M ]^T $, and $\mathbf{v} (\mathbf{d})$, $\mathbf{m} (\mathbf{d})$ and $\mathbf{C} (\mathbf{d}, \mathbf{d})$ can be easily constructed as given in \eqref{eq:jointDistGPFA}. When there comes a novel value of $d_*$, we compute according to \citep{RW06} the Gaussian posterior probability of a ground speed value at a new $d_*$ by \begin{equation} p(v(d_{*})| \mathcal{S}) \sim \mathcal{N}\left(\hat{\mu}(d_{*}), \hat{\sigma}^{2}(d_{*}) \right), \label{eq:GP-posterior} \end{equation} where \begin{subequations} \begin{equation} % \hat{\mu}(d_{*}) = \mathbf{k}^{T}(d_{*}, \mathbf{d}) \mathbf{C}^{-1}(\mathbf{d}, \mathbf{d}) (\mathbf{v}(\mathbf{d}) - \mathbf{m}(\mathbf{d})) + m(d_{*}), % \label{eq:predictedMean} \end{equation} % % \begin{equation} % \hat{\sigma}^{2}(d_{*}) = \sigma_{n}^2 + k(d_*, d_*) - \mathbf{k}^{T}(d_{*}, \mathbf{d}) \mathbf{C}^{-1}(\mathbf{d}, \mathbf{d}) \mathbf{k}(d_{*}, \mathbf{d}), % \label{eq:predictedCov} \end{equation} \end{subequations} and $ k(d_*, \mathbf{d}) \triangleq [k(d_*, d_1), k(d_*, d_2), \ldots, k(d_*, d_M)]^T$. The SGP deals with the training data in a batch manner. The corresponding computational complexity scales as $\mathcal{O}(M^3)$ and the memory requirement scales as $\mathcal{O}(M^2)$. Next, we apply the grid based on-line Gaussian process (OGP) \citep{HMF13} to derive an on-line ground speed model. \subsection{Grid Based On-line Gaussian Process Regression} \label{subsec: OL_GP} The notations, if not re-defined, will follow those used for the SGP. For simplicity and easier comparison with the SGP, we imagine that the training data arrives one by one in time, namely we have a new data point $\{ d_t, v(d_{t}) \}$, at time instance $t=1,2,\ldots,M$. \\ In grid based on-line GP, a set of grids $\bar{\mathbf{d}} = \left[ \bar{d}_1, \bar{d}_2, \ldots, \bar{d}_s \right]$ is introduced to represent some predefined reference distances on track. The corresponding ``clean'' ground speed (without the additive white Gaussian noise) values at these grids are latent variables $\bar{\mathbf{v}}(\bar{\mathbf{d}}) \triangleq \left[ \bar{v}(\bar{d}_1), \bar{v}(\bar{d}_2), \ldots, \bar{v}(\bar{d}_s) \right]^T$. We denote $\mathcal{S}_g \triangleq \{\bar{\mathbf{v}}(\bar{\mathbf{d}}), \bar{\mathbf{d}}\}$. For notational brevity in the sequel, $\bar{\mathbf{v}}$ is short for $\bar{\mathbf{v}}(\bar{\mathbf{d}})$, and its mean and covariance matrix are denoted by $\bar{\mathbf{m}}$ and $\bar{\mathbf{K}}$, respectively. Our aim is to compute the posterior distribution of $\bar{\mathbf{v}}$ at any time instance $t$ ($t \geq 1$) given the training data $\mathcal{S}_{1:t} \triangleq \left\lbrace (d_1, v_1), \ldots, (d_t, v_t) \right\rbrace$. The main steps of the grid based OGP \citep{HMF13} are summarized as follows: \begin{enumerate} % \item \textit{Initialization}: Set initial mean vector $\bm{\mu}_{0}^{g} \triangleq \bar{\mathbf{m}}$ and the covariance matrix $\mathbf{K}_{0}^{g} \triangleq \bar{\mathbf{K}}$. Compute the inverse of $\bar{\mathbf{K}}$ and store it for use later on. Here the prior mean $\bar{\mathbf{m}}$ is set to be a vector of all zeros (of size $s$) and the prior covariance matrix is set to be % \begin{equation} \bar{\mathbf{K}} = \begin{bmatrix} k(\bar{d}_1, \bar{d}_1) & k(\bar{d}_1, \bar{d}_2) & \ldots & k(\bar{d}_1, \bar{d}_{s}) \\[0.2em] \vdots & \vdots & \ddots & \vdots \\[0.3em] k(\bar{d}_{s}, \bar{d}_1) & k(\bar{d}_{s}, \bar{d}_2) & \ldots & k(\bar{d}_{s}, \bar{d}_{s}) \\[0.2em] \end{bmatrix}. \end{equation} % \item \textit{Recursive Processing}: For each $t = 1,2,\ldots,M$, do the following computations: % \begin{subequations} \begin{align} % \mathbf{J}_{t} &= \mathbf{k}(d_{t}, \bar{\mathbf{d}}) \bar{\mathbf{K}}^{-1} \label{eq:onlineGPrecStart}, \\ % \mu_{t}^{p} &= m(d_{t}) + \mathbf{J}_{t} \!\left( \bm{\mu}_{t-1}^{g} - \bar{\mathbf{m}} \right), \\ % \sigma_t^{2,p} &= k(d_t, d_t) + \mathbf{J}_{t} \!\left( \mathbf{K}_{t-1}^{g} - \bar{\mathbf{K}} \right)\! \mathbf{J}_{t}^{T}, \\ % \tilde{\mathbf{g}}_{t} &= \frac{1}{\sigma_{n}^2 + \sigma_t^{2,p}} \mathbf{K}_{t-1}^{g} \mathbf{J}_{t}^{T}, \\ % \bm{\mu}_{t}^{g} &= \bm{\mu}_{t-1}^{g} + \tilde{\mathbf{g}}_{t} \!\left( v(d_{t}) - \mu_{t}^{p} \right), \\ % \mathbf{K}_{t}^{g} &= \mathbf{K}_{t-1}^{g} - \tilde{\mathbf{g}}_{t} \mathbf{J}_{t} \mathbf{K}_{t-1}^{g}. \label{eq:onlineGPrecEnd} % \end{align} % \end{subequations} % After the recursive processing through (\ref{eq:onlineGPrecStart})--(\ref{eq:onlineGPrecEnd}), we have % \begin{equation} % p(\bar{\mathbf{v}} | \bar{\mathbf{d}}, \mathcal{S}) = \mathcal{N} \left( \bar{\mathbf{v}} | \bm{\mu}_{M}^{g}, \mathbf{K}_{M}^{g} \right). % \end{equation} % % \item \textit{Prediction}: At the end of the training phase, namely $t=M$ assumed in this specific example, the posterior distribution of a noisy speed observation $v({d}_{*})$ at a novel input position $d_{*}$, given $\mathcal{S}$ and $\mathcal{S}_g$, can be approximated by % \begin{equation} % p( v(d_{*})| \bar{\mathbf{d}}, \mathcal{S} ) \approx \mathcal{N}(v(d_{*})| \hat{\mu}(d_{*}), \hat{\sigma}^{2}(d_{*}) ), % \label{eq:onlineGP-posterior} % \end{equation} % where % \begin{subequations} % \begin{equation} % \hat{\mu}(d_{*}) \!=\! \bar{\mathbf{k}}^{T}(d_{*}) \bar{\mathbf{K}}^{-1} (\bm{\mu}_{M}^{g} - \bar{\mathbf{m}}) + m(d_{*}), % \label{eq:mu-onlineGP} % \end{equation} % \begin{equation} % \hat{\sigma}^{2}(d_{*}) \!=\! k(d_{*}) + \sigma_{n}^{2} + \bar{\mathbf{k}}^{T}\!(d_{*}) \bar{\mathbf{K}}^{-1} \!\! \left(\! \mathbf{K}_{M}^{g} \bar{\mathbf{K}}^{-1} \!-\! \mathbf{I}_s \!\right)\! \bar{\mathbf{k}}(d_{*}). % \label{eq:var-onlineGP} % \end{equation} % \end{subequations} % Herein, $\bar{\mathbf{k}}(d_{*})$ is short for $\mathbf{k}(d_{*}, \bar{\mathbf{d}})$ and $k(d_{*})$ is short for $k(d_{*}, d_{*})$. % \end{enumerate} The detailed derivations of (\ref{eq:onlineGPrecStart})--(\ref{eq:onlineGPrecEnd}) can be found in \citep{HMF13} and the derivations of (\ref{eq:mu-onlineGP}) and (\ref{eq:var-onlineGP}) are given in Appendix B. It is easy to verify that the computational complexity scales as $\mathcal{O}(s^3)$ for $\bar{\mathbf{K}}^{-1}$ in the initialization step, $\mathcal{O}(s^2)$ for $\bm{\mu}_{t}^{g}$ and $\mathbf{K}_{t}^{g}$ at any time instance $t$ in the recursive processing step. The computational complexity of $\bm{\mu}_{M}^{g}$ and $\mathbf{K}_{M}^{g}$ scales as $\mathcal{O}(s^2 M)$. The computational complexity for prediction in the third step scales as $\mathcal{O}(s^2)$. As compared to the SGP, the grid based OGP is able to reduce the overall computational complexity from $\mathcal{O}(M^3)$ to $\mathcal{O}(s^2 M)$ with $s \ll M$. Moreover, when we have a new observation pair $\{ d_{M+1}, v(d_{M+1}) \}$ at time $M+1$ after the training phase, it requires only $\mathcal{O}(s^2)$ complexity to compute $\bm{\mu}_{M+1}^{g}$ and $\mathbf{K}_{M+1}^{g}$, which is essential for on-line learning. Apart form the reduced computational complexity, there are several other benefits of using OGP as compared to the SGP. For instance, model fitting can be performed in parallel to measurement collection, and we can stop collecting more data when the posterior distribution of ground speed at the predefined grids converges. To summarize, OGP is more flexible to use and more adaptive to new arrival data. While if the underlying model is time invariant and the computational cost is secondary, the SGP using all available training data for both hyper-parameter optimization and prediction will intuitively give the best modeling results. \subsection{Kernel Selection} \label{subsec: kernels} Kernel function is a key component of GP, as it encodes the assumptions about the function which we wish to learn. The kernel function reflects the similarity between data points \citep{RW06}. In this subsection, the selection of different kernels will be discussed. One classic kernel function is the Squared Exponential (SE) kernel, defined by \begin{equation} k(d, d^\prime) = \sigma_s^2 \exp \left[ -\frac{(d-d^\prime)^2}{l_d^2} \right], \nonumber \end{equation} where $\sigma_s^2$ is the variance of the function and $l_d$ is the length scale which determines how rapidly the function varies with $d$. The SE kernel is considered as the most widely used kernel. However, it implies a stationary model which forbids structured extrapolation \citep{KZSH16}. In some specific cases, this kernel function may show poor performance in prediction, for instance, in sport races where there is periodic pattern over laps. Considering this, it is more appropriate to adopt a periodic kernel which can reflect the similarities between different laps. However, strict periodicity is too rigid, because there may be some deviations in each lap (e.g., due to the strength loss of the individual, strategies used in competition, etc). Hence, we adopt a local periodic (LP) kernel, which is a product of an SE kernel and a periodic kernel: \begin{equation} k(d, d^\prime) = \sigma_s^2 \exp \left[ -\frac{\sin^2\left(\frac{\pi (d-d^\prime) }{\lambda}\right)}{l_p^2} \right] \exp \left[ -\frac{{( d-d^\prime)}^2}{l_d^2} \right], \label{eq:localPeriodicKernel} \end{equation} where $l_p$ is the length scale of the periodic kernel and $\lambda$ is the period-length. This kernel considers two inputs are similar if they are similar under both the SE and the periodic kernels. If $l_d \gg \lambda$, this allows encoding a decay in the covariance over several oscillations \citep{KZSH16}. The key benefits of this kernel is that it outperforms SE kernel in prediction when the distance from the data is increasing as illustrated in \citep[Fig. 4]{KZSH16}. Lastly, we note that the LP kernel in (\ref{eq:localPeriodicKernel}) is not necessary optimal in our application, but as will be shown in our simulations it gives very good modeling and prediction results. Interested readers can refer \citep{DLG13, WA13,Yin18} for strategies for selecting an optimal kernel from the training data. \subsection{Hyperparameters Determination} \label{subsec: Hyperparas} Given the SGP model and the kernel in \ref{subsec: full GP} and \ref{subsec: kernels} respectively, the hyperparameters to be calibrated are \begin{equation} \bm{\theta} \triangleq [\sigma_n^2, \sigma_s^2, l_p, l_d]^T. \nonumber \end{equation} The likelihood function of the observed ground speed with respect to the hyperparameters $\bm{\theta}$ can be written as follows: \begin{equation} p(\mathbf{v}(\mathbf{d}); \bm{\theta}) \sim \mathcal{N}({m}(\mathbf{d}), \mathbf{C}(\mathbf{d}, \mathbf{d}; \bm{\theta})). \label{eq:LikelihoodGP} \end{equation} Here, the maximum-likelihood estimate (MLE), $\hat{\bm{\theta}}$, is derived. Details are given in Appendix A. In the OGP, we assumed that the parameters $\bm{\theta}$ are known before the recursive process starts. This can be the case when some historical/expert knowledge is available or a small set of the training data can be used to train the parameters like we did for the SGP. Huber demonstrated in \citep{Huber14} that these parameters can be learned on-line as well. \section{Aggregated Flow Modeling And Prediction for Multiple Individuals} \label{sec:ClusterFlow} In this section, we investigate aggregated flow modeling and prediction for multiple individuals that are clustered. The classic way of clustering data sequences is to extract some common features from the data sequences and then perform K-means algorithm or expectation-maximization (EM) algorithm \citep{Bishop06} based upon some distance metric. Principle component analysis \citep{Bishop06} can be used to reduce the dimension of the feature space before running the K-means or EM algorithm. The drawback of these classic methods is that the number of clusters need to be prescribed before we conduct clustering. A more sophisticated way is to combine the Dirichlet process with Gaussian processes in a Bayesian framework, which is capable of modeling an infinite number mixture stochastic processes (see for instance \citep{HRL15}). One example of sequence clustering will be given in Section \ref{subsec: AggResults}. \subsection{Flow Modeling and Prediction for Multiple Individuals} \label{subsec: FlowMulti} The data of all individuals in the same cluster will be aggregated to form a new dataset, denoted as $D = \left\lbrace (v_1, d_1), (v_2, d_2), \ldots, (v_{N_D}, d_{N_D}) \right\rbrace$. The ground speed will be modeled as a function of the distance on track plus some noise terms: \begin{equation} v(d) = h(d)+n_c+n_w \label{eq:cFlowModel} \end{equation} where $n_w$ is an additive white Gaussian noise with zero mean and variance $\sigma_n^2$, and $n_c$ is an additive correlated Gaussian noise with zero mean and variance $\sigma_c^2$. However, $n_c$ at two positions, namely $d$ and $d^\prime$, is assumed to be correlate, accounting for the interactive effects between individuals. The kernel function is selected as $k_c(d,d')\triangleq \sigma_{c}^{2} \exp\left[ \frac{-( d - d' )^2}{l_{c}^2} \right]$. Compared to the flow model for an individual, the correlations between individuals in one cluster are important since their performance may affect each other during the competition. Such effects can be considered as correlated noise and thus be modeled as given in \eqref{eq:cFlowModel}. Let $h(d)$ be a GP with the mean function $m(d)$ and the kernel function $k(d,d')=\sigma_s^2 \exp (-\frac{{( d-d^\prime )}^2}{l_d^2})$, the joint distribution of observations is given by \begin{equation} p(\mathbf{v} (\mathbf{d})|\mathcal{D}; \bm{\theta}_c) \sim \mathcal{N} (\mathbf{m}_c (\mathbf{d}), \mathbf{C}_c (\mathbf{d}, \mathbf{d})), \nonumber \label{eq:GPdistributionCluster} \end{equation} where $\bm{\theta}_c = [\sigma_n^2, \sigma_s^2, l_d, \sigma_c^2, l_c]^T$ and \begin{align} \mathbf{d} & \triangleq [d_1, d_2, \ldots, d_{N_D} ]^T , \nonumber \\ \mathbf{v}(\mathbf{d}) & \triangleq [v_1 , v_2 , \ldots, v_{N_D} ]^T , \nonumber \\ \mathbf{m}_c(\mathbf{d}) & \triangleq [m(d_1), m(d_2), \ldots, m(d_{N_D})]^T , \nonumber \\ \mathbf{K}_c(\mathbf{d}, \mathbf{d}) & \triangleq \begin{bmatrix} k'(d_1, d_1) & k'(d_1, d_2) & \ldots & k'(d_1, d_{{N_D}}) \\[0.2em] \vdots & \vdots & \ddots & \vdots \\[0.3em] k'(d_{N_D}, d_1) & k'(d_{N_D}, d_2) & \ldots & k'(d_{N_D}, d_{N_D}) \\[0.2em] \end{bmatrix}, \nonumber \\ \mathbf{C}_c(\mathbf{d}, \mathbf{d}) & \triangleq \mathbf{K}_c(\mathbf{d}, \mathbf{d}) + \sigma_{n}^2 \mathbf{I}_{N_D}, \nonumber \end{align} and $k'(d, d') = k(d,d')+k_c(d,d')$. Correspondingly, the posterior probability of an observed ground speed value $v_*$ at a novel input $d_*$ is given by \begin{equation} p(v(d_{*})| \mathcal{D}) \sim \mathcal{N}\left(\hat{\mu}_c(d_{*}), \hat{\sigma}_c(d_{*}) \right), \label{eq:GPposteriorCluster} \end{equation} where \begin{subequations} \begin{equation} % \hat{\mu}_c(d_{*}) = \mathbf{k}'^{T}(d_{*},\mathbf{d})\mathbf{C}_c^{-1}(\mathbf{d}, \mathbf{d}) (\mathbf{v}(\mathbf{d}) - \mathbf{m}_c(\mathbf{d})) + m(d_{*}), % \label{eq:predictedMeanCluster} \end{equation} % % \begin{equation} % \hat{\sigma}_c^2(d_{*}) = \sigma_{n}^2 + \sigma_{c}^2 + \sigma_{s}^2 - \mathbf{k}'^{T}(d_{*},\mathbf{d}) \mathbf{C}_c^{-1}(\mathbf{d},\mathbf{d})\mathbf{k}'^{T}(d_{*},\mathbf{d}), % \label{eq:predictedCovCluster} \end{equation} \end{subequations} Similarly, the MLE method is applied to train the hyperparameters $\hat{\bm{\theta}}_c$, and more details are given in Appendix A. \begin{figure}[t] \centering \includegraphics[width=8.5cm]{Figure3.eps} \caption{Map of two tracks.} \label{fig:figure3} \end{figure} \begin{figure}[t] \centering \subfigure[]{ \includegraphics[width=8.2cm,height=4.2cm]{Figure4a.eps} \label{fig:figure1}} \subfigure[]{ \includegraphics[width=8.2cm,height=4.2cm]{Figure4b.eps} \label{fig:figure1a}} \caption{Altitude maps for track 1 and 2.} \end{figure} \section{Data Description} \label{sec: DataDes} The data used in this work was gathered during the men's $4 \times 10$ kilometers relay race in the 2015 Nordic World Ski Championships, Falun, Sweden. The dataset contains primarily the longitude, latitude, distance on track, ground speed and sampling time instances for $56$ individuals from $17$ national teams. The individuals who did not finish the competition have been excluded from the dataset. The data was conducted regularly at a frequency of 1 Hz. A distance on track is essentially calculated by \textregistered TrackingMaster from the longitude and latitude of a skier obtained from global positioning system (GPS). \textregistered TrackingMaster is a software developed by Swiss Timing to receive raw GPS data from GPS modules and to convert the raw GPS data into data related to the course. The positioning uncertainty of GPS in a wide open outdoor environment can be as low as 5 meters, hence is ignored in our work. The individuals compete on track $1$ for relay $1$ and $2$, while on track $2$ for relay $3$ and $4$. The two tracks are illustrated in Fig. \ref{fig:figure3} with their coordinates expressed using the World Geodetic System (WGS). Each track is 2.5 kilometers in length. In each relay, an individual has to finish $10$ kilometers on one track (i.e., $4$ laps). The length of data is different due to the various finishing time between individuals. It should be noted that on track 1, skiers are applying the \textit{classic style}, while on track 2 they use \textit{skating style}. Altitude maps, see for instance Fig.\ref{fig:figure1} and \ref{fig:figure1a}, are readily available before the race. For individual force analysis, we apply the approach to the \textit{killer hill} (i.e., highlighted with red color) and \textit{steepest downhill} segments (i.e., highlighted with green color). For flow model of multiple individuals, the data of each individual is first segmented. Then, the data segments in the killer hill and steepest downhill segments are extracted. In the same relay, data from each segment is aggregated over all individuals to be used for group clustering and flow modeling. \section{Results} \label{sec:Results} \subsection{Force Analysis} \label{subsec:forceAnalysisResults} In this section, we investigate the relationship between performance and behavior of individual skiers. It is based on the force analysis introduced in Section~\ref{sec:ForceAnalysis}. The force analysis is done for both the killer hill and the steepest downhill segments of two tracks. First of all, the estimated forces on track $1$, the killer hill segment, are plotted in Fig.~\ref{fig:fig3} to \ref{fig:fig5}, with each figure representing i.e., one skier. Fig.~\ref{fig:fig6} to \ref{fig:fig8} depict the estimated forces at the steepest downhill segment. It is noted that for the steepest downhill segment, most parts are declining areas, while there are some small areas that are either inclining or rather flat. In addition, positive forces in the plots indicate resultant forces are acting in alignment with the moving direction, while the negative ones indicate the resultant forces are acting opposite to the moving direction. In total, we compare the forces on both the killer hill and steepest downhill for three different skiers, namely, individual A (best performance), B (competing with individual A) and C (fell behind). \begin{figure}[tb] \centering \includegraphics[width=8.5cm,height=6.5cm]{Figure5.eps} \caption{Resultant forces: Individual A, killer hill, track 1} \label{fig:fig3} \end{figure} \begin{figure}[tb] \centering \includegraphics[width=8.5cm,height=6.5cm]{Figure6.eps} \caption{Resultant forces: Individual B, killer hill, track 1} \label{fig:fig4} \end{figure} \begin{figure}[tb] \centering \includegraphics[width=8.4cm,height=6.5cm]{Figure7.eps} \caption{Resultant forces: Individual C, killer hill, track 1} \label{fig:fig5} \end{figure} \begin{figure}[tb] \centering \includegraphics[width=8.5cm,height=6.5cm]{Figure8.eps} \caption{Resultant forces: Individual A, steepest downhill, track 1} \label{fig:fig6} \end{figure} \begin{figure}[tb] \centering \includegraphics[width=8.5cm,height=6.5cm]{Figure9.eps} \caption{Resultant forces: Individual B, steepest downhill, track 1} \label{fig:fig7} \end{figure} \begin{figure}[tb] \centering \includegraphics[width=8.4cm,height=6.5cm]{Figure10.eps} \caption{Resultant forces: Individual C, steepest downhill, track 1} \label{fig:fig8} \end{figure} From all the plots for track $1$, where classic style is applied, we have the following observations: (I) Skier A and B have stronger forces on average at the killer hill segment and hence outperform skier C;(II) Larger forces/frictions are estimated at sharper slopes, for instance, between $1100$ to $1200$ meters at killer hill and around $1600$ and $1700$ meters at steepest downhill;(III) Different strategies were applied, for instance, skier A has larger forces in lap 1 and 4, where skier B has larger forces in lap 2 and 3, and skier C has almost evenly distributed forces for all 4 laps. To further verify the observations, histograms of the estimated resultant forces for two segments are shown in Fig.~\ref{fig:fig15} and \ref{fig:fig16}. Due to space limitation, we only show the comparison between individual A and B, who were competing with each other. It can be observed that for the killer hill, skier A and B distribute forces differently over 4 laps. At the steepest down hill, individual B experience larger negative forces, especially for lap 1, 2 and 4. It should be noted that individual B has much larger weight than individual A, which may result in higher friction when declining. \begin{figure}[tb] \centering \includegraphics[width=8.4cm]{Figure11.eps} \caption{CDF of resultant forces: killer hill, track 1} \label{fig:fig15} \end{figure} \begin{figure}[tb] \centering \includegraphics[width=8.4cm]{Figure12.eps} \caption{CDF of resultant forces: steepest downhill, track 1} \label{fig:fig16} \end{figure} Similarly, the estimated resultant forces are evaluated on track $2$, which are illustrated in Fig.~\ref{fig:fig9} to \ref{fig:fig14} for both a killer hill and a steepest downhill segment, for individual D (best performance), E (competing with individual D), F (fell behind), respectively. The histogram of individual D and E are also compared for both segments in Fig.~\ref{fig:fig17} and \ref{fig:fig18}. It can be observed that for the killer hill, individual D has larger positive forces than E (especially in lap 4, where individual D performed an overtaking of individual E). It is also noted that the higher the weight is (e.g., individual D is heavier than E, and individual F is the lightest), the larger friction the individual will experience when declining (e.g, see Fig.~\ref{fig:fig12} to \ref{fig:fig14} between $1900$ and $2000$ meters). Compared with forces on track $1$, where the classic style is applied, the forces are much smaller on track $2$, this is probably due to the fact that different skiing technique (i.e., skating style) is used. It is also worth mention that for the steepest downhill segment, the forces almost have a uniform distribution between $-100$ and $100$ Newton on track $2$. \begin{figure}[tb] \centering \includegraphics[width=8.5cm,height=6.5cm]{Figure13.eps} \caption{Resultant forces: Individual D, killer hill, track 2} \label{fig:fig9} \end{figure} \begin{figure}[tb] \centering \includegraphics[width=8.5cm,height=6.5cm]{Figure14.eps} \caption{Resultant forces: Individual E, killer hill, track 2} \label{fig:fig10} \end{figure} \begin{figure}[tb] \centering \includegraphics[width=8.4cm,height=6.5cm]{Figure15.eps} \caption{Resultant forces: Individual F, killer hill, track 2} \label{fig:fig11} \end{figure} \begin{figure}[tb] \centering \includegraphics[width=8.5cm,height=6.5cm]{Figure16.eps} \caption{Resultant forces: Individual D, steepest downhill, track 2} \label{fig:fig12} \end{figure} \begin{figure}[tb] \centering \includegraphics[width=8.5cm,height=6.5cm]{Figure17.eps} \caption{Resultant forces: Individual E, steepest downhill, track 2} \label{fig:fig13} \end{figure} \begin{figure}[tb] \centering \includegraphics[width=8.4cm,height=6.5cm]{Figure18.eps} \caption{Resultant forces: Individual F, steepest downhill, track 2} \label{fig:fig14} \end{figure} \begin{figure}[tb] \centering \includegraphics[width=8.4cm]{Figure19.eps} \caption{CDF of resultant forces: killer hill, track 2} \label{fig:fig17} \end{figure} \begin{figure}[tb] \centering \includegraphics[width=8.4cm]{Figure20.eps} \caption{CDF of resultant forces: steepest downhill, track 2} \label{fig:fig18} \end{figure} \subsection{Individual Flow Model and Prediction} \label{sub:IndResults} The ground speed versus distance on track for one specific individual is depicted in Fig. \ref{fig:figure4}. The speed in four laps shows a periodic pattern, while difference can also be observed between different laps. For instance, better performance in lap 4 has been observed, when the individual sprint for the last lap. In order to evaluate the goodness of fit and compare the predictions made by different models, the data from the first three laps are used for training and the data from the last lap is used for validation. The results for the SGP and the OGP with different kernels have been shown in Fig. \ref{fig:figure4} and \ref{fig:figure6}, respectively. For the OGP in Fig. \ref{fig:figure6}, $s = 500$ grid points are uniformly selected within the race distance, i.e., 10 kilometers. \begin{figure}[tb] \centering \includegraphics[width=8.5cm]{Figure21.eps} \caption{Flow model (first $3$ laps) and prediction (lap $4$) for SGP: LP kernel.} \label{fig:figure4} \end{figure} \begin{figure}[tb] \centering \includegraphics[width=8.5cm]{Figure22.eps} \caption{Flow model (first $3$ laps) and prediction (lap $4$) for OGP: LP kernel.} \label{fig:figure6} \end{figure} From Fig. \ref{fig:figure4} and \ref{fig:figure6}, we can see that both GPs provide good fit for the training data. Both SGP and OGP give good performance in prediction using the LP kernel. We can also see that the OGP with $s=500$ shows similar performance as the SGP with training data of size $M=1219$. For further comparisons between the two GPs, see for instance \cite{YFFFJ16}. The mean predictive standard deviation for the speed difference between two consecutive time instants (i.e., $\Delta v=v_t-v_{t-1}$) is compared in Table~\ref{tab:1}, where the comparison is for skier E, in the killer hill segments for all 4 laps. For the black-box approach for individual skier, we compute the predictive variance for $v_t$ and $v_{t-1}$, namely, $\hat{\sigma}^2(d_t)$, and $\hat{\sigma}^2(d_{t-1})$ according to \eqref{eq:predictedCov}. The predictive variance for the speed difference is computed as $\sigma^2_{\Delta v}=\hat{\sigma}^2(d_t)+\hat{\sigma}^2(d_{t-1})$. The average standard derivations for the killer hill segment is then computed by taking the mean value of $\sigma_{\Delta v}$ during the killer hill time interval. For the grey-box approach, the predictive variance for can be computed from $\hat{\sigma}_r^2(d_t)$ and the force model given in \eqref{eq:kineticGP}, yielding \begin{equation} \sigma^2_{\Delta v}=\frac{\hat{\sigma}_r^2(d_t)}{m^2}\Delta t^2. \end{equation} Similarly, the mean of the standard deviation $\sigma_{\Delta v}$ during the killer hill segment is registered in Table~\ref{tab:1}. From the comparison, we have observed larger predictive standard deviation for the black-box approach. This may due to the fact that in black-box approach, the model is completely unknown, which leads to larger ambiguity in the prediction. \begin{table} \caption{Comparison between black-box and grey-box approach: the mean predictive standard derivation for the speed difference $\Delta v=v_t-v_{t-1}$}. \label{tab:1} \begin{tabular}{lll} \hline\noalign{\smallskip} Lap & \textbf{Black-box (m/s)} &\textbf{Grey-box (m/s)} \\ \noalign{\smallskip}\hline\noalign{\smallskip} 1 & 0.4148 & 0.3022 \\ 2 & 0.4116 & 0.2433 \\ 3 & 0.4204 & 0.3066 \\ 4 & 0.4069 & 0.2207 \\ \noalign{\smallskip}\hline \end{tabular} \end{table} \subsection{Aggregated Flow Modeling (Multiple Individuals)} \label{subsec: AggResults} Clustering is performed first for individuals on the same track in the same relay. Here we focus on the killer hill and steepest downhill segments as shown in Figure \ref{fig:figure1} where the individuals may perform differently. Clustering of the ground speed curves of two individuals is similar to clustering of line-of-sight and non-line-of-sight signal waveforms described in \citep{WMG12}. The features considered in this work are: maximum, minimum, variance and mean value of speed, energy, skewness and kurtosis. With the features extracted from the corresponding data segments, clustering is performed as described in \cite[Section V.B]{YFFFJ16}. The number of clusters is $N_k = 2$ in this evaluation. \begin{figure}[tb] \centering \includegraphics[width=8.4cm]{Figure23.eps} \caption{Flow models of individuals in lap $1$ track $1$: killer hill.} \label{fig:figure11} \end{figure} \begin{figure}[tb] \centering \includegraphics[width=8.5cm]{Figure24.eps} \caption{Flow models of individuals in lap $4$ track $1$: killer hill.} \label{fig:figure14} \end{figure} \begin{figure}[tb] \centering \includegraphics[width=8.7cm]{Figure25.eps} \caption{Flow models of individuals in lap $1$ track $1$: steepest downhill.} \label{fig:figure15} \end{figure} \begin{figure}[tb] \centering \includegraphics[width=8.5cm]{Figure26.eps} \caption{Flow models of individuals in lap $4$ track $1$: steepest downhill.} \label{fig:figure18} \end{figure} After clustering, the data of individuals in the same cluster are aggregated and the GP model is applied to the aggregated data as proposed in Section \ref{subsec: FlowMulti}. The flow models for relay $1$ on track $1$ are illustrated in Fig. \ref{fig:figure11} to \ref{fig:figure18}. Due to space limitation, we only show the results for lap $1$ and $4$. Besides, the segments for individuals who perform worse (i.e., individual A) and better (i.e., individual B) in the whole competition are also plotted. In the killer hill, there is no significant difference between two clusters in lap $1$. However, the differences between two clusters become more distinct in lap $4$. This is reasonable since at the beginning of the race, all individuals may move in one cluster with similar speed. In the final stage, the difference is larger since some individuals may sprint and some may fall behind due to exhaustion. In the steepest downhill, the two clusters perform quite similarly in all laps so that one cluster is enough to model all individuals. In addition, the variance of the model becomes larger from lap $1$ to $4$. It indicates that in the steepest downhill segment, all individuals have quite uniform speed at the beginning of the race. As the race progresses, the speed of different skiers varies. Besides, it is observed that individuals that outperform in the killer hill have better final results in the whole competition (e.g., individual B, especially in lap $4$, has outperformed others in cluster $2$). The individuals that perform worse in the killer hill have worse final results (e.g., individual A, especially in lap $4$, has much worse performance than others in cluster $1$). This indicates that the performance in the killer hill is a more crucial factor than the steepest downhill in determining the final results. Fig. \ref{fig:figure19} and \ref{fig:figure20} show a comparison between the flow models for lap $1$ and $4$ in both killer hill and steepest downhill segments. In the killer hill, all individuals almost maintain similar speed in lap $4$ as in lap $1$. However, for the steepest downhill, the average speed of all individuals are lower in lap $4$ than in lap $1$. It is probably due to different track conditions (e.g., weather condition) in lap $1$ and $4$. In lap $4$, there may be more protrusions on the track than in lap $1$ (i.e., the track is less smooth in lap $4$). Hence, the performance on the steepest downhill is greatly affected in lap $4$, while for the killer hill, the performance is mainly determined by the slope of the track. \begin{figure}[tb] \centering \includegraphics[width=8.5cm]{Figure27.eps} \caption{Flow models of individuals in lap $1$ and $4$ track $1$: killer hill.} \label{fig:figure19} \end{figure} \begin{figure}[tb] \centering \includegraphics[width=8.5cm]{Figure28.eps} \caption{Flow models of individuals in lap $1$ and $4$ track $1$: steepest downhill.} \label{fig:figure20} \end{figure} \subsection{Discussions on Grey-box and Black-box Modeling} \label{sub:dissGB} So far we have shown the results for both the grey-box and black-box modeling approach. It is clear to see that the grey-box approach explores the partially known physical models, and the unknown part in the model are formulated by Gaussian process. In the black-box modeling approach, the model appears as random function and it can be trained based on the training inputs and outputs. We further compute and compare the predictive variance for the ground speed difference $\Delta v=v_{t+1}-v_{t}$ for both approaches. The grey-box approach yields smaller predictive variance on average. This is due to the fact that part of the model is deterministically known in grey-box approach. Hence, the ambiguity in the model is reduced. In the black-box approach, the model is completely unknown and random, which leads to larger uncertainty in prediction. \section{Conclusions} \label{sec:Conclusions} In this work, we have proposed a grey-box modeling approach for force analysis in skiing races. By analyzing the forces for different skiers, we conclude that they apply different strategies over multiple laps. For instance, skier A exerts larger forces in lap 1 and 4, where skier B exerts larger forces in lap 2 and 3, and skier C has almost evenly distributed forces for all 4 laps. Skiers having better performance are good at maintaining propulsive force at inclining area, while the declining performance is mainly determined by the friction on ice and air resistance, and larger weights lead to larger negative forces when declining. In addition, skiers having better performance are good at maintaining propulsive force at inclining area, while the declining performance is mainly determined by the friction on ice and air resistance. Besides, a black-box modeling approach using Gaussian process has been proposed for flow modeling. Both the standard GP and a grid based on-line GP with the local periodic kernel manifest to be powerful in modeling and predicting the performance of individuals. In particular, the grid based on-line GP reduces the computational complexity greatly while maintains similar performance. Moreover, on-line GP is more appropriate for real-time analytics where data come in sequentially. Then, clustering of individuals are performed based on the \textit{killer hill} and \textit{steepest downhill} segments. Moreover, the aggregated flow models for clusters of individuals have been developed and the results reveal that the individuals may behave differently in the killer hill, while follow a similar flow model in the steepest downhill. Finally, by comparing the two approaches, the grey-box approach is preferred given the real physical model is partially known, since it leads to reduced predictive variance. The black-box modeling approach, compared with the grey-box approach, is simpler and suitable when the relationship between the input and output are not clearly presented. \section*{Appendix A} \label{sec:AppendixA} \subsection*{A.1 Hyperparameters for Force Analysis} \label{sub:HyperFA} The ML estimates for GP in \eqref{eq:GP_resultantForce} can be obtained by maximizing the likelihood function, cf.(\ref{eq:jointDistGPFA}), with respect to $\bm{\theta}_r$, which is equivalent to \begin{equation} \arg \min_{\bm{\theta}_r} \, l(\bm{\theta}_r) \triangleq (\bm{F}_r - \mathbf{m}_r)^T \mathbf{C}_r^{-1} (\bm{F}_r - \mathbf{m}_r) + \ln |\mathbf{C}_r|. \label{eq:costFuncFA} \end{equation} Various existing numerical methods can be adopted to solve this minimization problem, such as the limited-memory BFGS (LBFGS) quasi-Newton method \citep{RW06} and the conjugate gradient (CG) method. In this work, the former is adopted, which requires the first-order derivatives of the likelihood function. The first-order derivatives of $l(\bm{\theta}_r)$ can be given as \begin{subequations} \begin{align} % \frac{\partial l(\bm{\theta}_r)}{ \partial \sigma_{n_r}^{2}} &= \textrm{tr} \! \left\lbrace \left[ \mathbf{C}_r^{-1} \!-\! \left( \mathbf{C}_r^{-1}(\mathbf{v} - \mathbf{m}) \right)\left(\cdot\right)^T \right] \! \frac{\partial \mathbf{C}_r}{\partial \sigma_{n_r}^2} \right\rbrace \\[0.5em] % \frac{\partial l(\bm{\theta}_r)}{ \partial \sigma_{r}^{2}} &= \textrm{tr} \! \left\lbrace \left[ \mathbf{C}_r^{-1} \!-\! \left( \mathbf{C}_r^{-1}(\mathbf{v} - \mathbf{m}) \right)\left(\cdot\right)^T \right] \! \frac{\partial \mathbf{C}_r}{\partial \sigma_{r}^2} \right\rbrace \\[0.5em] % \frac{\partial l(\bm{\theta}_r)}{ \partial l_r} &= \textrm{tr} \! \left\lbrace \left[ \mathbf{C}_r^{-1} \!-\! \left( \mathbf{C}_r^{-1}(\mathbf{v} - \mathbf{m}) \right)\left(\cdot\right)^T \right] \! \frac{\partial \mathbf{C}_r}{\partial l_{r}} \right\rbrace % % % \end{align} \label{eq:costFuncDeri} % \end{subequations} where \begin{align} % \frac{\partial \mathbf{C}_r}{\partial \sigma_{n_r}^2} &= \mathbf{I}_{N}. \nonumber \\ % \left[ \frac{\partial \mathbf{C}_r}{\partial \sigma_{r}^2} \right]_{j,k} \!\!\!\! &= \begin{cases} 1, & \!\! j = k \\[0.5em] \exp \! \left[ \frac{-(d_{j}-d_{k})^2}{l_{r}^2} \right] , & \!\! j \neq k \end{cases} \nonumber \\ % \left[ \frac{\partial \mathbf{C}_r}{\partial l_{r}} \right]_{j,k} \!\!\!\! &= \begin{cases} 0, & \!\! j=k, \\ 2\sigma_{r}^2 \! \exp \! \left[ \frac{-(d_{j} -d_{k})^2}{l_{r}^2} \right] \! \frac{(d_{j}-d_{k})^2}{l_{r}^3} , & \!\! j \neq k. \end{cases} \nonumber \end{align} Here we use $(A)(\cdot)^T$ to denote $(A)(A)^T$ for brevity. \subsection*{A.2 Hyperparameters for Individual Model} \label{sub:HyperIndi} The maximum-likelihood estimate of the GPR model parameters, $\hat{\bm{\theta}}$, can be obtained by maximizing the Gaussian prior likelihood function similarly The first-order derivatives can be given as the same form as in \eqref{eq:costFuncDeri}, except that \begin{align} \frac{\partial \mathbf{C}}{\partial \sigma_{n}^2} &= \mathbf{I}_{M} \nonumber \\ \left[ \frac{\partial \mathbf{C}}{\partial \sigma_{s}^2} \right]_{j,k} \!\!\!\! &= \begin{cases} 1, & \!\! j = k \nonumber \\[0.5em] \exp \! \left[ \frac{-\sin^2\left(\frac{(d_{j}-d_{k})\pi}{\lambda}\right)}{l_{p}^2} \right] \! \exp \! \left[ \frac{-(d_{j}-d_{k})^2}{l_{d}^2} \right] , & \!\! j \neq k \end{cases} \nonumber \\[0.5em] \left[ \frac{\partial \mathbf{C}}{\partial l_{p}} \right]_{j,k} \!\!\!\! &= \begin{cases} 0, & \!\! j=k \nonumber \\ 2k(d_j, d_k) \! \sin^2\left(\frac{(d_{j}-d_{k})\pi}{\lambda}\right) \! \frac{1}{l_{p}^3}, & \!\! j \neq k \end{cases} \nonumber \\[0.5em] \left[ \frac{\partial \mathbf{C}}{\partial l_{d}} \right]_{j,k} \!\!\!\! &= \begin{cases} 0, & \!\! j=k \nonumber \\ 2k(d_j, d_k) \! \frac{(d_{j}-d_{k})^2}{l_{d}^3} , & \!\! j \neq k. \nonumber \end{cases} \end{align} \subsection*{A.3 Hyperparameters for Aggregated Model} \label{sub:HyperMul} Similarly, the ML estimate for the multiple skiers GP model parameters can be obtained by maximizing the likelihood function, cf.(\ref{eq:GPdistributionCluster}), with respect to $\bm{\theta}_c$ The first-order derivatives of the cost function, $l_c(\bm{\theta}_c)$, have similar formats as given in \eqref{eq:costFuncDeri}, except that \begin{subequations} % \begin{align} % \frac{\partial \mathbf{C}_c}{\partial \sigma_{n}^2} &= \mathbf{I}_{N_D}. \nonumber \\ % \left[ \frac{\partial \mathbf{C}_c}{\partial \sigma_{s}^2} \right]_{j,k} \!\!\!\! &= \begin{cases} 1, & \!\! j = k \nonumber \\[0.5em] \exp \! \left[ \frac{-(d_{j}-d_{k})^2}{l_{d}^2} \right] , & \!\! j \neq k \end{cases} \nonumber \\[0.5em] % \left[ \frac{\partial \mathbf{C}_c}{\partial l_{d}} \right]_{j,k} \!\!\!\! &= \begin{cases} 0, & \!\! j=k \nonumber \\ 2\sigma_{s}^2 \! \exp \! \left[ \frac{-(d_{j} -d_{k})^2}{l_{d}^2} \right] \! \frac{(d_{j}-d_{k})^2}{l_{d}^3} , & \!\! j \neq k \end{cases} \nonumber \\[0.5em] % \left[ \frac{\partial \mathbf{C}_c}{\partial \sigma_{c}^2} \right]_{j,k} \!\!\!\! &= \begin{cases} 1, & \!\! j = k \nonumber \\[0.5em] \exp \! \left[ \frac{-(d_{j}-d_{k})^2}{l_{c}^2} \right] , & \!\! j \neq k \end{cases} \nonumber \\[0.5em] % \left[ \frac{\partial \mathbf{C}_c}{\partial l_{c}} \right]_{j,k} \!\!\!\! &= \begin{cases} 0, & \!\! j=k \nonumber \\ 2\sigma_{c}^2 \! \exp \! \left[ \frac{-(d_{j} -d_{k})^2}{l_{c}^2} \right] \! \frac{(d_{j}-d_{k})^2}{l_{c}^3} , & \!\! j \neq k \nonumber. \end{cases} % \end{align} % \end{subequations} \section*{Appendix B} \label{sec:AppendixB} Imagine that $\mathcal{S}_g \triangleq \{\bar{\mathbf{v}}, \bar{\mathbf{d}}\}$ is also a training dataset despite that $\bar{\mathbf{v}}(\bar{\mathbf{d}})$ is latent. Given a novel input, $d_{*}$, the posterior distribution of observing a noisy $v(d_{*})$, given $\mathcal{S}_{g}$, can be easily obtained as follows: \begin{equation} p(v(d_{*}) | \mathcal{S}_{g}) \sim \mathcal{N} \left( \mu_{g}^{p}, \sigma_{g}^{2, p} \right), \end{equation} where \begin{subequations} % \begin{align} % \mu_{g}^{p} &= \mathbf{k}(d_{*}, \bar{\mathbf{d}})^T \bar{\mathbf{K}}^{-1} (\bar{\mathbf{v}} - \bar{\mathbf{m}}) + m(d_{*}) \\ % \sigma_{g}^{2, p} &= \sigma_{s}^{2} + \sigma_{n}^{2} - \mathbf{k}(d_{*}, \bar{\mathbf{d}})^T \bar{\mathbf{K}}^{-1} \mathbf{k}(d_{*}, \bar{\mathbf{d}}). % \end{align} % \end{subequations} The posterior distribution of $v(d_{*})$, given $\mathcal{S}$ and $\bar{\mathbf{d}}$, can be computed analytically via the following marginalization: \begin{equation} p( v(d_{*}) | \mathcal{S}, \bar{\mathbf{d}} ) \!=\!\! \int \! p(\bar{\mathbf{v}} | \bar{\mathbf{d}}, \mathcal{S}) p( v(d_{*}) | \mathcal{S}_{g}, \mathcal{S}) \rm{d} \bar{\mathbf{v}} , \nonumber \end{equation} and approximated with reduced computational complexity, like in \citep{SG06}, by \begin{equation} p( v(d_{*}) | \mathcal{S}, \bar{\mathbf{d}} ) \!\approx\!\!\! \int \! \!p(\bar{\mathbf{v}} | \bar{\mathbf{d}}, \mathcal{S}) p( v(d_{*}) | \mathcal{S}_{g}) \rm{d} \bar{\mathbf{v}}. \nonumber \end{equation} Since both $p(v(d_{*}) | \mathcal{S}_{g})$ and $p(\bar{\mathbf{v}} | \bar{\mathbf{d}}, \mathcal{S})$ are Gaussian distributed, applying Lemma A.1 in \citep{Sarkka13} yields eventually (\ref{eq:mu-onlineGP}) and (\ref{eq:var-onlineGP}). \begin{acknowledgements} \label{sec:ack} This work is funded by European Union FP7 Marie Curie training programme on Tracking in Complex Sensor Systems (TRAX) with grant number 607400 from 2014 to 2017. This work is also funded by ELLIT project, which is a strategic research environment funded by the Swedish government in 2010. Feng Yin is mainly funded by Shenzhen Science and Technology Innovation Council under Grant JCYJ20170307155957688, Guangdong Province Pearl River Talent Team under Grant 2017ZT07X152, and partly by the Shenzhen Fundamental Research Fund under Grant (Key Lab) ZDSYS201707251409055. \end{acknowledgements} \bibliographystyle{plainnat}
{ "attr-fineweb-edu": 2.277344, "attr-cc_en_topic": 0, "domain": "arxiv" }